subagent-driven-development
by obra

subagent-driven-development is a skill for executing implementation plans with a fresh subagent per task, then reviewing each result in two passes: spec compliance first, code quality second. It includes prompt templates for the implementer, the spec reviewer, and the code quality reviewer.
This skill scores 79/100, which makes it a solid directory-listing candidate for users who want a disciplined subagent execution pattern rather than a loose prompt recipe. Directory users can reasonably expect a real, reusable workflow with clear delegation and review structure, but they should also expect some manual orchestration and a few unresolved dependency details before adopting it wholesale.
- Strong triggerability: `SKILL.md` clearly says to use it when executing an implementation plan with mostly independent tasks in the current session, and includes a when-to-use decision flow.
- Good agent leverage: the repository includes concrete prompt templates for implementer, spec reviewer, and code quality reviewer, which should reduce guesswork versus a generic delegation prompt.
- Operationally credible review loop: it enforces spec-compliance review before code-quality review and explicitly tells reviewers to verify code independently rather than trust implementer reports.
- Workflow has coordination overhead: it expects a fresh subagent per task plus two review passes, with the operator pasting full task text and reports into prompts.
- Some execution details are implicit rather than self-contained, including references to `superpowers:code-reviewer` and `requesting-code-review/code-reviewer.md`, and there is no install command in `SKILL.md`.
Overview of subagent-driven-development skill
What subagent-driven-development actually does
The subagent-driven-development skill is a workflow for executing an implementation plan by splitting work into independent tasks, handing each task to a fresh subagent, and reviewing every result in two passes: spec compliance first, code quality second. Its real value is not “use more agents,” but using isolated context on purpose so each worker gets only the task, requirements, and local code context it needs.
Best fit for this skill
This subagent-driven-development skill is best for people who already have a plan and need to turn it into reliable implementation inside the current session. It fits especially well when:
- tasks are mostly independent
- you want the coordinator agent to stay focused on orchestration
- you care about catching both requirement drift and sloppy code
- you want a repeatable review loop, not just a one-shot coding prompt
The job to be done
Users adopt subagent-driven-development for agent orchestration when a normal “implement this plan” prompt starts failing in predictable ways: the agent mixes tasks together, forgets constraints, overbuilds, or produces code that looks plausible but misses the spec. This skill gives you a disciplined handoff-and-review pattern that reduces those failures.
What makes it different from a generic prompt
The key differentiators are practical:
- Fresh subagent per task instead of one long-running agent carrying noisy history
- Explicit task packets with the full task text pasted in, rather than asking a worker to infer requirements from scattered files
- Mandatory questions before coding so unclear requirements surface early
- Two-stage review where “did it match the spec?” is separated from “is the code good?”
That separation matters. Many teams review quality before verifying scope, which makes overbuilt or underbuilt work harder to spot.
When it is the wrong choice
Do not start with subagent-driven-development if you do not yet have a concrete implementation plan, if tasks are tightly coupled, or if the work belongs in a separate parallel execution flow rather than this session. In those cases, planning or a different execution skill is usually a better first step.
How to Use subagent-driven-development skill
Install subagent-driven-development skill
If you install skills from this repository via the Skills CLI, use:
```shell
npx skills add https://github.com/obra/superpowers --skill subagent-driven-development
```
Then open the installed skill and supporting prompt templates before your first run.
Read these files first
For fast adoption, read the files in this order:
1. `SKILL.md`
2. `implementer-prompt.md`
3. `spec-reviewer-prompt.md`
4. `code-quality-reviewer-prompt.md`
That path tells you the workflow first, then the exact prompt shapes for the implementer and both review stages.
Understand the calling pattern before you start
In practice, subagent-driven-development usage is not a single magic command. You use it by acting as the coordinator:
- take one task from a plan
- dispatch a fresh implementer subagent with a tightly scoped prompt
- require a report back
- run a spec reviewer against the actual code
- only if spec passes, run a code quality reviewer
- accept, revise, or re-dispatch
If you skip the review gates, you are no longer really using the skill as designed.
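The coordinator loop above can be sketched as a small control function. This is a minimal sketch under assumptions: the skill itself is prompt-driven and ships no code, and `dispatch_implementer`, `run_spec_review`, and `run_quality_review` are hypothetical stand-ins for however you actually dispatch subagents in your environment.

```python
# Hypothetical sketch of the coordinator loop: implement -> spec review ->
# quality review, re-dispatching with specific feedback on failure.
def process_task(task, dispatch_implementer, run_spec_review,
                 run_quality_review, max_attempts=3):
    """Run one task through the implement/review gates, up to max_attempts."""
    for _ in range(max_attempts):
        report = dispatch_implementer(task)            # fresh subagent each attempt
        spec = run_spec_review(task, report)
        if not spec["passed"]:
            task = {**task, "feedback": spec["gaps"]}  # return specific gaps
            continue                                    # never skip the spec gate
        quality = run_quality_review(report)
        if quality["passed"]:
            return {"status": "accepted", "report": report}
        task = {**task, "feedback": quality["issues"]}
    return {"status": "escalate", "reason": "max attempts reached"}
```

Note the loop never reaches quality review for an attempt that failed spec review; that ordering is the skill's core discipline.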
What input the skill needs
Prepare these inputs before dispatching any subagent:
- the exact task text from your plan
- acceptance criteria or requirements
- relevant architectural context
- working directory or repo scope
- any dependency or sequencing notes
- the baseline commit or SHA for review diffs
- task number and task name for traceability
The source templates strongly imply that you should paste the full task into the prompt, not tell the subagent to “go read the plan file.”
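One way to keep these inputs from being forgotten is to collect them in a small container before dispatching. The structure below is illustrative only: the field names are assumptions, not part of the skill, and `is_dispatchable` encodes the minimum the templates appear to expect.

```python
from dataclasses import dataclass, field

# Hypothetical task packet mirroring the input list above.
@dataclass
class TaskPacket:
    number: int
    name: str
    full_text: str             # exact task text from the plan, pasted in full
    acceptance_criteria: list
    context: str = ""          # relevant architectural context
    workdir: str = "."         # repo scope for the subagent
    dependencies: list = field(default_factory=list)
    baseline_sha: str = ""     # commit to diff against during review

    def is_dispatchable(self) -> bool:
        # Refuse to dispatch without the essentials the templates expect.
        return bool(self.full_text and self.acceptance_criteria and self.baseline_sha)
```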
Turn a rough goal into a strong implementer prompt
A weak prompt says:
- “Implement task 4 from the plan.”
A stronger implementer prompt includes:
- Task title and number
- Full task text
- Why this task exists
- Where in the repo to work
- Constraints on file structure
- Whether tests are required
- What to do if assumptions become necessary
- A requirement to ask questions before coding
That shape matters because the skill is built around controlled context, not autonomous repo-wide interpretation.
Example of a better task packet
Use a structure like this when dispatching the implementer:
```
Task N: [name]

FULL TEXT of task from plan

Context: where this fits, dependencies, architecture
Work from: [directory]
Requirements: implement exactly what is specified
If anything is unclear, ask before starting
Write tests if required by task
Commit, self-review, and report back
```
This is more reliable than telling a worker to explore broadly, because the skill assumes the coordinator is responsible for packaging the assignment well.
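If you assemble these packets often, the template can be generated mechanically. The sketch below assumes a plain dict with hypothetical keys (`number`, `name`, `full_text`, and so on); it is an adaptation of the shape above, not the repository's template verbatim.

```python
# Hypothetical prompt builder for the implementer task packet.
def build_implementer_prompt(packet: dict) -> str:
    lines = [
        f"Task {packet['number']}: {packet['name']}",
        "",
        packet["full_text"],                      # full task text, never a file pointer
        "",
        f"Context: {packet.get('context', 'see plan')}",
        f"Work from: {packet.get('workdir', '.')}",
        "Requirements: implement exactly what is specified.",
        "If anything is unclear, ask before starting.",
        "Write tests if required by the task.",
        "Commit, self-review, and report back.",
    ]
    return "\n".join(lines)
```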
Why the spec review comes before quality review
This is one of the highest-value parts of the skill: the review order is intentional.
Run the spec reviewer first to answer:
- did the code implement what was requested?
- did it skip requirements?
- did it add unrequested work?
- did it misunderstand the task?
Only after that should you run the code quality reviewer, which checks maintainability, decomposition, file responsibility, and change shape. If you reverse the order, good-looking code can hide scope errors.
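The ordering can be made explicit in code. A minimal sketch, assuming hypothetical `spec_review` and `quality_review` callables that each return a pass/fail verdict with issues:

```python
# Spec compliance gates code quality: reversing the order lets
# well-written code hide scope errors.
def run_reviews(task, code, spec_review, quality_review):
    spec = spec_review(task, code)      # did it build what was asked?
    if not spec["passed"]:
        return {"stage": "spec", "passed": False, "issues": spec["issues"]}
    quality = quality_review(code)      # only now: is the code good?
    if not quality["passed"]:
        return {"stage": "quality", "passed": False, "issues": quality["issues"]}
    return {"stage": "done", "passed": True, "issues": []}
```

The useful property is that a spec failure short-circuits before quality review ever runs, so scope errors are always reported as scope errors.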
How to use the spec reviewer well
The spec-reviewer-prompt.md template is unusually direct: it tells the reviewer not to trust the implementer’s report and to verify against actual code line by line. Preserve that tone when you adapt it. The reviewer needs:
- full task requirements
- implementer’s claimed output
- access to the changed code
The point is independent verification, not polite confirmation.
How to use the code quality reviewer well
The code quality review is not generic style policing. In this skill, it emphasizes:
- one clear responsibility per file
- well-defined interfaces
- decomposition into understandable units
- alignment with the planned file structure
- whether this task created oversized new files or bloated existing ones
That last check is useful because subagents often solve the task by cramming too much into one change.
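That file-growth check can even be automated as a cheap pre-review filter. This is an illustrative sketch, not part of the skill: the 300-line threshold is an arbitrary assumption, and `changed_files` is a hypothetical mapping from path to post-change file contents.

```python
# Flag files in a change set that exceed a line budget, so the quality
# reviewer can focus on them. Threshold is an assumed default.
def flag_oversized_files(changed_files: dict, max_lines: int = 300) -> list:
    """changed_files maps path -> file contents after the change."""
    return sorted(
        path for path, text in changed_files.items()
        if text.count("\n") + 1 > max_lines
    )
```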
Suggested workflow inside a real repo
A practical subagent-driven-development usage loop looks like this:
- choose the next independent task
- capture the current commit as baseline
- dispatch implementer with full task text
- collect their summary and changed files
- run spec review against requirements and code
- if spec fails, return specific gaps to implementer
- if spec passes, run code quality review
- if quality fails, request focused revision
- merge or move to the next task
This keeps the coordinator in charge of sequencing and acceptance.
Constraints that affect output quality
The skill works best when you respect its boundaries:
- independent tasks outperform tangled tasks
- explicit requirements outperform inferred requirements
- narrow repo scope beats “look around and decide”
- short task loops beat large multi-feature batches
- clear escalation rules beat silent guessing
If you find yourself needing a subagent to reconcile many moving parts across the whole codebase, the task is likely too broad for this workflow.
Common adoption mistake
The biggest mistake is using the subagent-driven-development skill as a label while still writing loose prompts. The workflow only pays off if you actually package context carefully and enforce the review sequence. Without that, you get the overhead of orchestration without the quality gains.
subagent-driven-development skill FAQ
Is subagent-driven-development good for beginners?
Yes, if you already understand the task you want built. The workflow is explicit and the provided prompt templates reduce guesswork. But it is not a substitute for planning. Beginners who do not yet have a clear implementation plan may struggle because the skill assumes task definitions already exist.
When should I not use subagent-driven-development?
Skip subagent-driven-development when:
- you are still exploring the problem
- tasks are deeply interdependent
- requirements are unstable
- one human or one agent needs to reason across the whole system continuously
It is an execution pattern, not a discovery pattern.
How is this different from just asking one agent to code everything?
A single long prompt often mixes planning, implementation, validation, and review in one context window. This skill separates those roles. That usually improves focus, makes requirement drift easier to catch, and preserves the coordinator’s context for orchestration instead of code generation.
Does the skill require special tools?
No special scripts are bundled in this skill folder. The repository provides markdown prompt templates rather than automation code. You can use the pattern anywhere you can dispatch subagents and run code review tasks.
Is subagent-driven-development only for large projects?
No. It can work for small changes too, but it is most valuable when a plan has several independent tasks and the cost of missed requirements is high enough to justify review overhead.
What repository evidence matters most before installing?
For this skill, the main evidence is the workflow design in SKILL.md and the three prompt templates. There are no helper scripts or resource folders doing hidden work, so your install decision should focus on whether the prompt structure and review discipline match how you already ship code.
How to Improve subagent-driven-development skill
Give better task packets, not longer prompts
To improve results from the subagent-driven-development skill, increase precision rather than verbosity. The most useful additions are:
- exact acceptance criteria
- explicit non-goals
- architecture notes relevant to this task only
- file or directory boundaries
- examples of expected behavior
- test expectations
This helps the implementer stay focused and helps the spec reviewer detect drift.
Make task boundaries sharper
Many failures come from poorly sliced work. If a subagent needs to coordinate multiple moving pieces, decide architecture, and infer requirements at once, split the task. Better tasks are narrow enough that “implemented exactly what was requested” is easy to verify.
Preserve the ask-questions-first step
The implementer template explicitly tells the worker to ask questions before starting and again if surprises appear during implementation. Keep that behavior. Suppressing clarification requests creates fast but unreliable output, which defeats the point of the skill.
Improve review quality with stronger comparison inputs
For the spec reviewer, provide:
- full requirement text
- implementer report
- changed files or diff scope
- any explicit exclusions
For the code quality reviewer, provide:
- `BASE_SHA` and `HEAD_SHA`
- task summary
- relevant plan section
Those concrete comparison anchors make reviews more than opinion.
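As one way to carry those anchors, you can bake them into the reviewer prompt itself. The wording below is an assumption-laden adaptation, not the repository's `code-quality-reviewer-prompt.md` verbatim; the SHAs come from your own git history.

```python
# Hypothetical quality-review prompt carrying the comparison anchors above.
def build_quality_review_prompt(base_sha: str, head_sha: str,
                                task_summary: str, plan_section: str) -> str:
    return (
        f"Review the change between {base_sha} and {head_sha}.\n"
        f"Task summary: {task_summary}\n"
        f"Relevant plan section:\n{plan_section}\n"
        "Check file responsibilities, interfaces, decomposition, and whether\n"
        "new or modified files have grown beyond a reasonable size.\n"
        "Verify against the actual diff, not the implementer's report."
    )
```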
Watch for these common failure modes
The most common failure modes when using subagent-driven-development for agent orchestration are:
- the implementer infers extra features
- the task packet omits a key constraint
- the reviewer trusts the implementer summary too much
- code quality review runs before spec review
- tasks are too large to verify cleanly
- file growth goes unchecked
Each one is preventable with better task packaging and stricter gatekeeping.
Iterate after the first output
If the first pass is weak, do not restart from scratch immediately. First identify which layer failed:
- spec failure: requirements unclear, missing, or overbuilt
- quality failure: decomposition, maintainability, or file structure issues
- coordination failure: task slicing or context packaging was wrong
Then revise only that layer. This keeps the workflow efficient.
Tighten file-structure guidance
One useful detail from the quality template is checking whether implementation followed the planned file structure and whether newly created files are already too large. If you care about maintainability, include intended file boundaries in the task packet up front rather than hoping reviewers catch it later.
Create a reusable local checklist
If you will use subagent-driven-development often, keep a short coordinator checklist:
- plan exists
- task is independent
- full task pasted
- constraints included
- baseline SHA captured
- implementer asked to clarify before coding
- spec review completed
- quality review completed
That small habit improves consistency more than writing longer prompts.
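If you prefer an executable version of that checklist, a preflight function works well. The keys below mirror the bullets above but are illustrative names of my own, a local convenience rather than part of the skill.

```python
# Coordinator checklist as an executable preflight; key names are illustrative.
PREFLIGHT = [
    "plan_exists",
    "task_is_independent",
    "full_task_pasted",
    "constraints_included",
    "baseline_sha_captured",
    "clarify_before_coding_requested",
]

def preflight_gaps(state: dict) -> list:
    """Return the checklist items that are still unmet."""
    return [item for item in PREFLIGHT if not state.get(item)]
```

An empty result means the task is ready to dispatch; anything else names exactly what to fix first.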
Improve the skill for your own workflow
The base skill is intentionally lightweight. To make it more effective in your environment, adapt the prompt templates to your stack and review standards:
- add your testing commands
- add repo-specific architecture rules
- define what counts as over-engineering
- specify your preferred report format
- include common failure patterns in your codebase
That kind of local tailoring usually improves subagent-driven-development usage more than adding more theory.
