
continuous-learning-v2

by affaan-m

continuous-learning-v2 turns Claude Code sessions into project-scoped learning with hooks, observer agents, confidence scoring, and promotion of repeated patterns into skills, commands, or agents.

Stars: 156.1k
Favorites: 0
Comments: 0
Added: Apr 15, 2026
Category: Skill Authoring
Install Command
npx skills add affaan-m/everything-claude-code --skill continuous-learning-v2
Curation Score

This skill scores 78/100, which means it is a solid listing candidate: directory users get a real, reusable workflow for session observation, instinct creation, and project-scoped learning, though they should expect some setup complexity and rely on the repository docs for correct activation. The repository evidence shows a substantial, non-placeholder skill with clear operational hooks and scripts, making it worth installing for users who want Claude Code to learn from sessions instead of using generic prompts.

Strengths
  • Explicit activation paths for session observation, scheduled runs, SIGUSR1 triggering, and project-scoped learning make it actionable.
  • Substantial workflow content with scripts for observer launch, session guarding, project detection, and hook-based observation.
  • v2.1 adds project-scoped instincts and a promotion path to global scope, reducing cross-project contamination and improving reuse.
Cautions
  • No install command is present in SKILL.md, so users may need to assemble hook/agent wiring manually.
  • The observer is disabled by default in config.json, so value depends on additional setup and enabling the background workflow.
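Because the observer ships disabled, the first setup step is usually flipping that flag. A minimal sketch, assuming a config.json shaped roughly like the one below; the real key names and file location live in the repository, so treat this as illustrative only:

```shell
# Hypothetical config layout; check the skill's actual config.json for real keys.
mkdir -p /tmp/homunculus-demo
cat > /tmp/homunculus-demo/config.json <<'EOF'
{
  "observer": {
    "enabled": false,
    "interval_minutes": 30
  }
}
EOF
# Flip the flag in place (assumes the key appears once, as in this sketch).
sed -i 's/"enabled": false/"enabled": true/' /tmp/homunculus-demo/config.json
grep '"enabled"' /tmp/homunculus-demo/config.json
```

A structured editor such as jq is safer than sed for real configs with repeated keys.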
Overview


What continuous-learning-v2 does

The continuous-learning-v2 skill turns Claude Code sessions into a learning loop: it watches tool activity, extracts atomic “instincts,” scores them by confidence, and can promote useful patterns into skills, commands, or agents. If you want persistent, project-aware memory rather than one-off prompting, this is the right fit.

Who it is for

Use continuous-learning-v2 for Skill Authoring when you want an AI workflow to remember repeated behaviors across sessions, especially in repos with stable conventions. It is strongest for agents, hook-driven automation, and teams that want project-specific learning without leaking patterns between codebases.

Why v2 matters

The main differentiator is project-scoped storage: React habits stay in a React repo, Python habits stay in Python, and only broadly useful patterns become global. That makes continuous-learning-v2 less noisy than a generic “learn from my sessions” prompt and more suitable for real multi-project use.

How to Use continuous-learning-v2 skill

Install and activate it

Use the continuous-learning-v2 install path by adding the skill from the repo:
npx skills add affaan-m/everything-claude-code --skill continuous-learning-v2
After install, verify the hook and observer pieces are enabled in your Claude Code setup; the repository’s hooks/ and agents/ folders are the practical entry points, not just documentation.
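A quick sanity check along those lines can be scripted. The paths below are illustrative stand-ins, not the repository's documented locations; the point is simply that the observation hook must exist and be executable before anything gets captured:

```shell
# Hypothetical check: confirm the observation hook is present and executable.
# Substitute the real paths from the repository's hooks/ folder.
HOOK_DIR=/tmp/demo-skill/hooks
mkdir -p "$HOOK_DIR"
printf '#!/bin/sh\necho observed\n' > "$HOOK_DIR/observe.sh"
chmod +x "$HOOK_DIR/observe.sh"
if [ -x "$HOOK_DIR/observe.sh" ]; then
  echo "hook wired: $HOOK_DIR/observe.sh"
else
  echo "hook missing or not executable" >&2
fi
```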

Start from the right files

Read SKILL.md first, then config.json, scripts/detect-project.sh, hooks/observe.sh, and agents/start-observer.sh. If you want the most important execution path, preview agents/observer-loop.sh and agents/session-guardian.sh next; those show when analysis runs, what gets throttled, and how project context is resolved.

Give it better input than a vague goal

A strong continuous-learning-v2 usage prompt says what should be observed, what counts as a useful pattern, and whether the learning should stay project-local. For example: “Track how I handle TypeScript errors in this repo, keep conventions project-scoped, and only promote patterns used in two or more files.” That is much better than “learn my coding style.”

Workflow that produces usable instincts

Run normal Claude Code sessions, let the hook capture tool events, then let the observer analyze accumulated observations on a schedule or on demand. Review the output for false positives, then refine thresholds and scope rules before expecting reliable promotion into commands or agents.
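The listing mentions SIGUSR1 triggering as the on-demand path. A toy version of that mechanism is sketched below; the trap-based loop stands in for the skill's real observer process, and the log path and message are assumptions made for the demo:

```shell
# Illustrative on-demand trigger via SIGUSR1. The background loop stands in
# for the skill's observer; in real use you would signal that process instead.
(
  trap 'echo "analysis triggered" > /tmp/observer-demo.log; exit 0' USR1
  while :; do sleep 0.2; done
) &
OBS_PID=$!
sleep 0.5                 # give the background shell time to install the trap
kill -USR1 "$OBS_PID"     # run analysis now instead of waiting for the schedule
wait "$OBS_PID"
cat /tmp/observer-demo.log
```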

continuous-learning-v2 skill FAQ

Is continuous-learning-v2 beginner-friendly?

Yes, if you are comfortable installing a hook-based workflow and reading a few shell scripts. It is not a no-code feature: the skill is easier to use when you can inspect SKILL.md, understand project detection, and accept that some tuning may be needed.

How is this different from a plain prompt?

A plain prompt can imitate learning once, but continuous-learning-v2 is built to observe, store, score, and reuse behavior over time. That makes it better when you want repeatable memory, confidence thresholds, and project boundaries instead of a single response.

When should I not use it?

Skip continuous-learning-v2 if you only need a one-time answer, if your environment cannot run hooks reliably, or if you do not want local session data stored for analysis. It is also a poor fit for workflows where every project should share the same habits.

Does it fit the Claude Code ecosystem?

Yes. The repository is organized around Claude Code hooks, background agents, and project-scoped storage under ~/.claude/homunculus/. If your setup does not allow those integration points, the skill’s value drops sharply.
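Only the ~/.claude/homunculus/ root comes from the listing; the subdirectory names below are assumptions, sketched to show how project-scoped and global instincts could sit side by side:

```shell
# Illustrative storage layout under a homunculus-style root.
# Directory and file names other than the root are assumptions.
ROOT=/tmp/homunculus-demo-root
mkdir -p "$ROOT/projects/my-react-app" "$ROOT/global"
: > "$ROOT/projects/my-react-app/instincts.json"   # project-scoped patterns
: > "$ROOT/global/instincts.json"                  # promoted, broadly useful ones
find "$ROOT" -type f | sort
```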

How to Improve continuous-learning-v2 skill

Feed it cleaner examples

The best continuous-learning-v2 results come from sessions with clear, repeated decisions: naming, validation, test runs, refactors, or repo-specific conventions. If your input is vague or mixed with unrelated experimentation, the learned instincts will be noisier and less promotable.

Tune scope before tuning volume

If patterns are leaking across repos, fix project detection first by checking scripts/detect-project.sh and the project-scoped storage layout. For Skill Authoring work, scope quality matters more than collecting more observations.
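To build intuition for what detect-project.sh has to get right, here is a minimal walk-up-to-the-repo-root sketch. It is not the skill's actual logic, just the common pattern of keying scope on the nearest .git directory and falling back to global:

```shell
# Minimal project-detection sketch (the real logic lives in
# scripts/detect-project.sh; this is an assumed, simplified version).
detect_project() {
  dir=$1
  # Walk upward until a .git directory marks the repo root.
  while [ "$dir" != "/" ]; do
    [ -d "$dir/.git" ] && { basename "$dir"; return 0; }
    dir=$(dirname "$dir")
  done
  echo "global"   # outside any repo, fall back to global scope
}
mkdir -p /tmp/demo-repo/.git /tmp/demo-repo/src
detect_project /tmp/demo-repo/src   # -> demo-repo
detect_project /tmp                 # -> global
```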

Use thresholds and promotion rules deliberately

The skill is strongest when you decide what “good enough” means before promotion. Set expectations for confidence, frequency, and project repetition so the system does not elevate one-off behaviors into permanent instructions.
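A promotion gate like that can be expressed as a simple filter. The record format and thresholds below are invented for illustration (the skill's real storage schema may differ): keep only instincts with confidence at or above 0.7 that were seen in two or more files:

```shell
# Hypothetical instinct records: name, confidence score, file count.
# The CSV layout is an assumption, not the skill's real storage format.
cat > /tmp/instincts.csv <<'EOF'
prefer-zod-validation,0.91,3
run-tests-before-commit,0.88,2
one-off-rename,0.95,1
maybe-useful-alias,0.42,4
EOF
# Gate: high confidence AND repeated across files; one-offs never promote,
# no matter how confident a single observation looked.
awk -F',' '$2 >= 0.7 && $3 >= 2 { print "promote: " $1 }' /tmp/instincts.csv
```

Note how one-off-rename is excluded despite its 0.95 confidence: repetition is a separate axis from confidence, which is exactly why both thresholds should be set deliberately.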

Iterate after the first analysis

Treat the first output as a draft instinct library, not a final policy set. Review what was extracted, remove generic or accidental patterns, then rerun with sharper prompts like: “Only keep behaviors that were corrected by me or repeated across at least two sessions.”

Ratings & Reviews

No ratings yet