agent-teams
by alinaqi

agent-teams is a Claude Code workflow skill for multi-agent feature delivery with a strict TDD pipeline. It coordinates spec writing, review, failing tests, implementation, security checks, and PR orchestration for teams using claude-bootstrap. Install it when you need repeatable handoffs, quality gates, and less agent drift on feature branches.
This skill scores 79/100, making it worth listing for users who want a strongly opinionated, team-based workflow with clear TDD enforcement. The repository gives enough operational detail for an agent to trigger and follow the process with less guesswork than a generic prompt, though some adoption friction remains: the setup is repository-specific and lacks a quick install/run command.
- Clear trigger and purpose: the frontmatter says it is for spawning agent teams for parallel feature development with a strict TDD pipeline.
- Operational workflow is concrete: multiple agent files define specific roles and step-by-step protocols for feature, quality, review, security, and merge coordination.
- Strong agent leverage: the skill encodes task dependencies, blocking rules, and verification stages, which gives an agent more structure than a free-form prompt.
- No install command or support files are provided, so users may need to adapt the setup manually before they can use it.
- The skill is highly opinionated and assumes a claude-bootstrap agent-team environment, which limits portability outside that workflow.
Overview of agent-teams skill
What agent-teams is for
agent-teams is a Claude Code workflow skill for projects that want multi-agent development with a strict TDD pipeline. Use the agent-teams skill when you need coordinated feature delivery, not just a one-off prompt: a spec writer, quality gate, implementation agent, review, security scan, and PR/branch orchestration all work as a team.
Who should install it
This fits teams using alinaqi/claude-bootstrap who want repeatable agent roles and enforced handoffs. It is most useful when you care about test-first execution, blocked-on-quality checks, and reducing “agent drift” across a feature branch.
What makes it different
The main differentiator is the immutable feature pipeline: spec, review, failing tests, red verification, implementation, green verification, code review, security scan, and PR creation. The pattern is opinionated and process-heavy by design, which helps if you value consistency and traceability more than flexibility.
How to Use agent-teams skill
Install and locate the skill files
Install the skill through your Claude Code skill manager, then inspect skills/agent-teams/SKILL.md before anything else. The repository does not ship extra rules/, resources/, or scripts/ helpers, so the agent definitions under skills/agent-teams/agents/ are the important supporting files.
What to read first
Start with SKILL.md, then review:
- agents/feature.md
- agents/quality.md
- agents/code-review.md
- agents/security.md
- agents/merger.md
- agents/team-lead.md
Those files show how the team is expected to behave, which tools each role can use, and where the blocking checks happen. That matters more than a quick skim because agent-teams usage depends on role boundaries, not just prompt wording.
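If you want to confirm the layout before a first run, a minimal sketch could check for the files named above. This is an illustrative helper, not part of the skill; SKILL_ROOT is an assumed path that you should point at your own checkout.

```python
from pathlib import Path

# Assumed location of the skill inside a claude-bootstrap checkout;
# adjust SKILL_ROOT to match your repository.
SKILL_ROOT = Path("skills/agent-teams")

# The files named in the reading list above.
EXPECTED = [
    "SKILL.md",
    "agents/feature.md",
    "agents/quality.md",
    "agents/code-review.md",
    "agents/security.md",
    "agents/merger.md",
    "agents/team-lead.md",
]

def missing_files(root: Path = SKILL_ROOT) -> list[str]:
    """Return the expected skill files that are absent under root."""
    return [name for name in EXPECTED if not (root / name).is_file()]

if __name__ == "__main__":
    gaps = missing_files()
    if gaps:
        print("adapt before running; missing:", ", ".join(gaps))
    else:
        print("skill layout looks complete")
```

A check like this catches a half-copied install early, before an agent starts a run against role files that do not exist.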
How to prompt it well
The best input is a feature goal with scope, repo context, and constraints. For example, instead of “add auth,” give:
- target files or subsystem
- acceptance criteria
- test framework
- performance/security constraints
- anything the agent must not change
A strong prompt tells the team what “done” means. If you do not specify behavior precisely, the quality agent will still gate the flow, but the feature agent may write narrow tests or miss edge cases.
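As a sketch, the inputs listed above could be assembled into a single feature brief. Every field name and value here is an illustrative example, not part of the skill's API:

```python
# Hypothetical prompt builder covering the recommended inputs:
# goal, scope, acceptance criteria, test framework, constraints,
# and files the team must not change.
def build_feature_brief(goal, scope, acceptance, test_framework,
                        constraints=(), do_not_touch=()):
    """Render a brief that tells the team what 'done' means."""
    lines = [f"Feature goal: {goal}",
             f"Scope: {scope}",
             "Acceptance criteria:"]
    lines += [f"- {c}" for c in acceptance]
    lines.append(f"Test framework: {test_framework}")
    lines += [f"Constraint: {c}" for c in constraints]
    lines += [f"Do NOT change: {p}" for p in do_not_touch]
    return "\n".join(lines)

# Example values only -- substitute your real subsystem and criteria.
brief = build_feature_brief(
    goal="add session-based auth",
    scope="src/auth/ and the login route only",
    acceptance=["invalid credentials return 401",
                "sessions expire after 30 minutes"],
    test_framework="pytest",
    constraints=["no new runtime dependencies"],
    do_not_touch=["src/payments/"],
)
print(brief)
```

A brief in this shape gives the spec writer concrete acceptance criteria to turn into failing tests, rather than leaving the feature agent to guess at edge cases.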
When it works best
Use it for features that benefit from parallelized planning, test-first implementation, review, and security checks. It is less useful for tiny fixes, exploratory prototypes, or highly ambiguous tasks where the overhead of a full team would slow you down.
agent-teams skill FAQ
Is this beginner-friendly?
Yes, if you are comfortable reading agent files and test output. The workflow is strict, so beginners get structure, but they still need to provide a clear goal and understand that failures are part of the process.
How is this different from a normal prompt?
A normal prompt asks one model to do everything. agent-teams separates responsibilities across agents and blocks progress until each gate is satisfied. That usually improves reliability for multi-step work, but it also adds ceremony.
Does it work outside claude-bootstrap?
Not as a drop-in guarantee. The skill is designed around the claude-bootstrap agent layout, .claude/agents/ frontmatter, and the task chain described in SKILL.md. Outside that ecosystem, you may need to adapt file paths and orchestration conventions.
When should I not use agent-teams?
Skip it for one-file edits, urgent hotfixes, or tasks where the repo does not have a meaningful test suite. If you cannot support TDD, review, and security gates, the workflow will feel heavier than a plain prompt.
How to Improve agent-teams skill
Give the team better inputs
The biggest quality jump comes from sharper acceptance criteria. Include expected inputs, outputs, edge cases, and any existing conventions the feature must follow. That helps the feature agent write tests that match your real intent instead of guessing.
Reduce failure points in the pipeline
Common problems are vague specs, missing test commands, and unclear ownership of files. If you know the project’s test runner, lint command, and package manager, state them up front. If a feature touches shared code, say so explicitly to avoid cross-agent conflicts.
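One way to state those facts up front is to keep them in a small, reusable preamble for every run. The commands below are placeholder examples, not detected from any real repository:

```python
# Hypothetical repo facts to prepend to every team run; replace the
# values with your project's real commands and shared paths.
REPO_FACTS = {
    "test_command": "pytest -q",
    "lint_command": "ruff check .",
    "package_manager": "uv",
    "shared_code": "src/common/ (touching this needs coordination)",
}

def facts_preamble(facts: dict) -> str:
    """Render the facts as one line per key for a prompt preamble."""
    return "\n".join(f"{k}: {v}" for k, v in facts.items())

print(facts_preamble(REPO_FACTS))
```

Stating these once removes the most common pipeline stalls: agents guessing at the test runner, skipping lint, or editing shared files without flagging the conflict.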
Iterate after the first run
Use the first pass to expose gaps, then refine the spec or constraints before asking for a second pass. For agent-teams, improvement usually means better task boundaries, stronger negative cases, and clearer definitions of what the quality and security agents should block on.
Tune for your repo
If your repository has unusual architecture or nonstandard test patterns, call that out in the prompt and in the linked agent files. The more your inputs reflect the repo’s real constraints, the less the team will drift into generic TDD behavior and the better the skill will perform in practice.
