subagent-driven-development
by obra

Orchestrate development work by dispatching fresh, specialized subagents per task, with separate spec and code-quality review inside a single session.
Overview
What is subagent-driven-development?
subagent-driven-development is an agent orchestration skill for running an implementation plan as a sequence of independent tasks, each handled by a fresh subagent. For every task, you:
- Spin up a dedicated implementer subagent.
- Run a spec compliance reviewer subagent.
- Run a code quality reviewer subagent.
All three are kept in tightly controlled context so they focus only on the current task, while your main session stays free for coordination and decision-making.
Who is this skill for?
subagent-driven-development is designed for developers and teams who:
- Use AI coding assistants (such as Claude / claude-code) and want more reliable results.
- Work from a written implementation plan broken into discrete tasks.
- Need a structured, repeatable way to get code implemented and reviewed in a single AI session.
- Care about both spec correctness and code quality, not just "something that works".
It fits especially well in GitHub-centric workflows where you can pass SHAs, plan files, and diffs into subagents.
What problems does it solve?
This skill addresses common issues when using a single AI agent for end‑to‑end development:
- Context bloat: One agent accumulates too much history and loses focus.
- Spec drift: Implementations gradually diverge from the original plan or requirements.
- Weak reviews: The same context that wrote the code tries to review it, missing mistakes.
subagent-driven-development enforces a pattern: fresh agent per task, strict context, and a two‑stage review (spec then quality). This improves correctness, keeps changes scoped, and makes it easier to reason about each step of your implementation plan.
When is subagent-driven-development a good fit?
Use this skill when:
- You already have an implementation plan broken down into tasks.
- The tasks are mostly independent – they do not require constant cross‑task coordination.
- You intend to complete the plan in the current session, not spread it across days.
If you don’t yet have a plan, or tasks are tightly coupled and evolving rapidly, you may want to:
- Brainstorm or design the plan first with other skills or manual planning.
- Use a more freeform, single‑agent workflow for exploratory work.
How to Use
Installation
1. Add the skill to your environment
Install the subagent-driven-development skill from the obra/superpowers repository:
npx skills add https://github.com/obra/superpowers --skill subagent-driven-development
This pulls the skill definition and supporting prompt templates into your skills-enabled environment so you can orchestrate subagents for each task in your plan.
2. Inspect the core files
After installation, open the skill directory in the repository (or via your skills browser) and review:
- SKILL.md – high-level description, when to use, and core workflow.
- implementer-prompt.md – template for your implementer subagent.
- spec-reviewer-prompt.md – template for your spec compliance reviewer subagent.
- code-quality-reviewer-prompt.md – template for your code quality reviewer subagent.
Treat these as templates to copy or adapt into your own automation or tool wiring.
Preparing your implementation plan
1. Write or refine your task list
Before you use subagent-driven-development, prepare an implementation plan with tasks that are:
- Clearly scoped and testable.
- Mostly independent from one another.
- Expressed in enough detail that an implementer subagent can act without guessing.
Each task should be copy‑and‑pastable into the implementer prompt as “FULL TEXT of task from plan”.
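One way to make tasks copy-and-pastable is to keep the plan in a single markdown file and split it mechanically. The sketch below assumes the plan uses `## Task N: ...` headings; that heading convention is an assumption, not something the skill prescribes.

```python
import re

def split_plan_into_tasks(plan_text: str) -> dict[int, str]:
    """Split a plan into numbered tasks, assuming '## Task N: ...' headings.

    Returns a mapping of task number -> full task text (heading included),
    so each value can be pasted verbatim into the implementer prompt.
    """
    tasks: dict[int, str] = {}
    current_num = None
    buffer: list[str] = []
    for line in plan_text.splitlines():
        m = re.match(r"## Task (\d+):", line)
        if m:
            if current_num is not None:
                tasks[current_num] = "\n".join(buffer).strip()
            current_num = int(m.group(1))
            buffer = []
        if current_num is not None:
            buffer.append(line)
    if current_num is not None:
        tasks[current_num] = "\n".join(buffer).strip()
    return tasks
```

If your plan uses a different structure, adjust the regex; the point is only that each task becomes one self-contained block of text.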
2. Decide your working directory and Git strategy
The prompt templates assume a Git-based workflow and a concrete working directory:
- Choose a directory where the implementer will work.
- Decide how you’ll track changes (e.g., BASE_SHA and HEAD_SHA for each task).
You’ll pass these values into the spec and code quality reviewer prompts for accurate reviews.
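Capturing the two SHAs is a one-liner around `git rev-parse`. A minimal sketch, assuming a Git repository in the working directory:

```python
import subprocess

def current_sha(repo_dir: str) -> str:
    """Return the current HEAD commit SHA for the given working directory."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout.strip()

# Per task: record BASE_SHA before the implementer runs, and HEAD_SHA after
# it commits, then pass both into the reviewer prompts:
#   base_sha = current_sha(workdir)   # before dispatching the implementer
#   ... implementer subagent works and commits ...
#   head_sha = current_sha(workdir)   # after the commit
```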
Running the workflow per task
1. Dispatch an implementer subagent
For each task N, create a new implementer subagent using the template in implementer-prompt.md.
Key points from the template:
- The implementer is told explicitly: “You are implementing Task N: [task name]”.
- You paste the full text of the task into the ## Task Description section.
- You fill in:
  - Context – where this fits in your system.
  - directory – where to make changes.
The implementer is instructed to:
- Ask clarification questions before starting if anything is unclear.
- Implement exactly what the task specifies.
- Write and run tests when appropriate.
- Verify the implementation.
- Commit the work.
- Produce a clear report of what was done.
Because you create a fresh subagent for each task, it only sees the context you provide and doesn’t inherit unrelated history from your main session.
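Filling the implementer prompt per task is straightforward string templating. The skeleton below is illustrative only; the real template lives in implementer-prompt.md, and the placeholder names here are assumptions:

```python
from string import Template

# Illustrative skeleton of an implementer prompt. Section names follow the
# ones described above; everything else is an assumed placeholder.
IMPLEMENTER_TEMPLATE = Template("""\
You are implementing Task $task_number: $task_name

## Task Description
$task_text

## Context
$context

Work in: $directory

Ask clarifying questions before starting if anything is unclear.
Implement exactly what the task specifies, write and run tests,
verify the result, commit, and report what you did.
""")

prompt = IMPLEMENTER_TEMPLATE.substitute(
    task_number=3,
    task_name="Add rate limiting",
    task_text="Full text of Task 3 pasted from the plan...",
    context="API gateway service; limits enforced per API key.",
    directory="services/gateway",
)
```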
2. Run a spec compliance review
Once the implementer finishes and reports back, dispatch a spec compliance reviewer subagent using spec-reviewer-prompt.md.
In this template you:
- Paste the task requirements into ## What Was Requested.
- Paste the implementer’s report into ## What Implementer Claims They Built.
The spec reviewer is explicitly instructed not to trust the implementer’s report and must:
- Read the actual code.
- Compare it line‑by‑line against the requirements.
- Identify missing requirements, extra/unwanted work, and misunderstandings.
If the spec reviewer finds issues, you loop back with the implementer (either the same worker or a new subagent) to address the gaps before moving on.
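That fix loop can be expressed as a small driver. In this sketch, `implement` and `review_spec` are hypothetical stand-ins for however your environment dispatches the implementer and spec-reviewer subagents and collects their output:

```python
def run_task_with_spec_review(implement, review_spec, max_rounds=3):
    """Loop: implement, spec-review, fix, until the reviewer finds no issues.

    implement(issues) dispatches an implementer subagent (fresh, or a
    follow-up carrying the reviewer's findings) and returns its report;
    review_spec(report) returns the list of issues found by the spec
    compliance reviewer. max_rounds guards against endless ping-pong.
    """
    issues: list[str] = []
    report = ""
    for _ in range(max_rounds):
        report = implement(issues)
        issues = review_spec(report)
        if not issues:
            break
    return report, issues
```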
3. Run a code quality review
After spec compliance passes, dispatch a code quality reviewer subagent using code-quality-reviewer-prompt.md.
The template expects a code-review style task description, for example:
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [from implementer's report]
PLAN_OR_REQUIREMENTS: Task N from [plan-file]
BASE_SHA: [commit before task]
HEAD_SHA: [current commit]
DESCRIPTION: [task summary]
The reviewer checks:
- Cleanliness and maintainability of the implementation.
- File responsibilities and interfaces (one clear responsibility per file where possible).
- Whether new or changed files are appropriately sized and decomposed.
- Test coverage and ability to understand and test units independently.
The reviewer returns structured feedback: strengths, issues (Critical / Important / Minor), and an overall assessment.
You can then decide whether to:
- Accept the change as-is.
- Ask the implementer to perform follow‑up refactors.
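One way to make that accept-or-refactor decision mechanical is a small severity gate over the reviewer's structured feedback. The policy below is just an example, not part of the skill: block on Critical issues, request follow-up refactors for Important ones, and accept when only Minor feedback remains.

```python
from dataclasses import dataclass

@dataclass
class ReviewIssue:
    severity: str   # "Critical", "Important", or "Minor"
    detail: str

def quality_gate(issues: list[ReviewIssue]) -> str:
    """Map the quality reviewer's feedback to a next action (example policy)."""
    severities = {i.severity for i in issues}
    if "Critical" in severities:
        return "fix-before-merge"
    if "Important" in severities:
        return "request-follow-up-refactor"
    return "accept"
```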
Adapting the workflow to your environment
1. Customize prompts for your stack
The templates in implementer-prompt.md, spec-reviewer-prompt.md, and code-quality-reviewer-prompt.md are intentionally generic. Adapt them to your:
- Programming languages and frameworks.
- Testing conventions (e.g.,
pytest, Jest, Go test). - Repository layout and naming.
Keep the core structure—fresh subagent, clear sections, explicit job description—even when customizing the details.
2. Automate repeated steps
Once you’re comfortable with the pattern, you can script or tool it:
- Wrap the three subagent calls (implementer → spec reviewer → code quality reviewer) into a single command per task.
- Generate task-specific prompts automatically from a plan file.
- Pre-fill BASE_SHA and HEAD_SHA by reading Git metadata.
This turns subagent-driven-development into a repeatable workflow automation for your team.
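Wrapping the three calls per task might look like the driver below. Here `dispatch(role, payload)` is a hypothetical stand-in for however your environment spawns a subagent with a prompt and returns its output; the role names and result keys are illustrative, not fixed by the skill.

```python
def run_plan(tasks: dict[int, str], dispatch) -> dict[int, dict]:
    """Drive implementer -> spec reviewer -> quality reviewer for each task.

    tasks maps task number -> full task text from the plan. Each task is run
    through all three subagent roles in order, and the raw outputs are
    collected so you can decide how to act on them.
    """
    results: dict[int, dict] = {}
    for n, task_text in sorted(tasks.items()):
        report = dispatch("implementer", task_text)
        spec_findings = dispatch("spec-reviewer", report)
        quality_findings = dispatch("code-quality-reviewer", report)
        results[n] = {
            "report": report,
            "spec": spec_findings,
            "quality": quality_findings,
        }
    return results
```

A real driver would also loop on reviewer findings and thread BASE_SHA/HEAD_SHA into the reviewer payloads; this sketch only shows the per-task sequencing.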
3. When this skill is not a good fit
You may want a different approach when:
- Tasks are highly interdependent and can’t be cleanly isolated.
- You don’t have a clear implementation plan yet.
- You need long‑running, cross‑session work where context must persist for days.
In those cases, use skills or processes focused on planning, architecture, or long‑term agents, then return to subagent-driven-development when you’re ready to execute discrete tasks.
FAQ
What does "subagent-driven-development" mean in practice?
In practice, subagent-driven-development means you don’t ask one all‑knowing agent to plan, code, and review everything. Instead, you:
- Maintain coordination and overall context in your main session.
- For each task, construct a fresh subagent with only the information it needs.
- Run that subagent to implement the task, then run two more subagents to review it.
This isolates concerns, keeps context manageable, and improves reliability for each step of your implementation plan.
How is this different from a normal single-agent coding session?
With a single agent, past conversation and code edits accumulate in one context, which can lead to:
- Confusion between old and new requirements.
- The same reasoning patterns being reused for both coding and reviewing.
subagent-driven-development instead:
- Uses separate prompts and roles for implementation and review.
- Starts each subagent with curated context, not the entire session history.
- Enforces a spec‑first, then quality review ordering.
This tends to produce more precise implementations and more honest reviews.
Do I have to follow the templates exactly?
No. The templates in the repository are examples of how to structure implementer, spec reviewer, and code quality reviewer prompts. You are expected to:
- Keep the overall pattern: implementer → spec review → quality review.
- Preserve key behaviors (e.g., spec reviewer must read the real code and not trust the report).
Within that structure, you can adjust wording, add project‑specific guidance, and integrate with your tools and conventions.
Can I use subagent-driven-development without Git?
The code quality reviewer template assumes fields like BASE_SHA and HEAD_SHA, which are natural in a Git workflow. If you don’t use Git:
- You can still apply the same core ideas—fresh subagents and two‑stage review.
- Replace SHAs with your own way of referencing the before/after state (for example, archive identifiers or snapshot paths).
The skill does not enforce Git itself; it provides Git‑oriented examples.
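For example, a content hash of the working tree can serve as a Git-free before/after marker. This is one possible substitute for BASE_SHA/HEAD_SHA, not something the templates define:

```python
import hashlib
import os

def tree_snapshot_id(root: str) -> str:
    """Content hash of a directory tree, usable as a before/after marker.

    Hashes every file's relative path and bytes in a stable order, so two
    identical trees produce the same identifier and any change produces a
    different one.
    """
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make os.walk order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()
```

Record one identifier before dispatching the implementer and one after it finishes, and pass both into the reviewer prompts in place of the SHAs.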
Does this skill depend on a specific AI model?
The repository does not hard‑lock you to a particular model, but it clearly targets modern, general‑purpose coding models such as Anthropic’s Claude / claude-code. You should:
- Use a model that can read and reason about code and tests.
- Ensure your environment supports spawning multiple subagents with custom prompts.
If your stack supports agent tools or task runners, you can wire these templates into that system.
How do I know when to use subagent-driven-development vs. another superpowers skill?
The SKILL.md file describes decision criteria: use subagent-driven-development when you have an implementation plan, your tasks are mostly independent, and you plan to stay in the current session. If any of those are not true, you may:
- Use planning or brainstorming skills to create the plan first.
- Use other executing or planning patterns for tightly coupled, cross‑session work.
Where should I start in the repository?
If you’re evaluating whether to install or adopt this skill, start with:
- SKILL.md – for the rationale and high‑level workflow.
- implementer-prompt.md – to see how implementer subagents are framed.
- spec-reviewer-prompt.md – to understand the spec compliance checks.
- code-quality-reviewer-prompt.md – to see the additional quality and structure checks.
From there, you can adapt these templates into your own agent orchestration or workflow automation setup and fully leverage subagent-driven-development.
