test-driven-development
by addyosmani

The test-driven-development skill helps you change code by writing a failing test first, then making the smallest fix that passes. Use it for logic changes, bug fixes, regressions, and edge cases where proof matters more than a plausible patch.
This skill scores 84/100, which marks it as a solid choice for users who want an agent-friendly TDD workflow with clear triggers and step-by-step guidance. It should help agents choose and execute the skill with less guesswork than a generic prompt, though it is still a single-file skill with no supporting scripts or references.
- Strong triggerability: the description explicitly targets implementing new logic, bug fixes, and behavior changes.
- Operationally clear workflow: it lays out the RED-GREEN-REFACTOR cycle and when to use or avoid it.
- Good practical depth: the body is substantial, with multiple headings, constraints, and code examples rather than placeholder text.
- No support files or install command: users only get SKILL.md, so adoption depends on reading the document closely.
- Marked as experimental/test and lacks external references, so trust rests on the content itself rather than tooling or citations.
Overview of test-driven-development skill
The test-driven-development skill helps you change code by proving behavior first, then implementing the smallest fix that makes the test pass. It is best for developers and agents working on logic changes, bug fixes, edge cases, and regressions where “looks right” is not enough. If you want the test-driven-development skill to reduce guesswork, this guide explains when it fits and what it actually improves: safer edits, clearer requirements, and less backtracking after an initial patch.
What this skill is for
Use test-driven-development when the task changes behavior: new functions, altered conditions, bug reproduction, or anything that could silently break existing code. It is especially useful when the repo already has tests and you want the agent to work inside the project’s proof system instead of inventing behavior from scratch.
What makes it different
The key value is discipline: write a failing test first, then implement only what the test proves. That gives the agent a concrete target, exposes missing assumptions early, and helps keep fixes narrow. In practice, this is often the difference between a plausible patch and a verified one.
When it is a bad fit
Do not use this skill for changes that have no runtime behavior: copy edits, static content updates, or pure config tweaks. If the project has little or no test coverage, the skill can still help, but adoption will be slower because the test harness itself may need setup before the workflow pays off.
How to Use test-driven-development skill
Install and inspect the skill
Install test-driven-development with the repository’s install flow:
npx skills add addyosmani/agent-skills --skill test-driven-development
After install, start with SKILL.md. In this repository, there are no extra rules/, resources/, or scripts/ folders to lean on, so the main job is reading the skill file carefully and mapping its guidance to your codebase.
Turn a vague task into a testable prompt
The best test-driven-development usage starts with a behavior statement, not a solution request. Good input sounds like: “Add a failing test for empty email validation, then implement the minimal fix in src/auth.ts.” Weak input sounds like: “Make login better.” Include the observable outcome, the file or module if known, and the regression risk you care about.
Follow the RED-GREEN-REFACTOR loop
Use the skill as a workflow: first write a test that fails for the current code, then write the smallest code change that makes it pass, then refactor only if the test still passes. If the failure is hard to reproduce, stop and sharpen the test case before touching implementation. The skill works best when the failing case is specific enough to prove the bug.
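The loop above can be sketched in a runner-agnostic way. This is a minimal illustration, not the skill’s own code: `validateEmail` and its behavior are hypothetical stand-ins borrowed from the earlier prompt example, and `assertEqual` substitutes for a real matcher like Jest’s `expect(...).toBe(...)`.

```typescript
// GREEN: the smallest implementation that makes the RED test below pass.
// validateEmail is a hypothetical function used only to show the cycle.
function validateEmail(email: string): boolean {
  if (email.trim() === "") return false; // the minimal fix for the failing case
  return email.includes("@");
}

// A runner-agnostic stand-in for a test framework's assertion.
function assertEqual<T>(actual: T, expected: T, label: string): void {
  if (actual !== expected) {
    throw new Error(`FAIL ${label}: got ${actual}, expected ${expected}`);
  }
  console.log(`PASS ${label}`);
}

// RED first: this assertion fails until the empty-string guard exists.
assertEqual(validateEmail(""), false, "rejects empty email");
// Regression guard: existing behavior still passes after the fix.
assertEqual(validateEmail("a@b.com"), true, "accepts plausible email");
```

In a real project you would write the first assertion before the guard exists, watch it fail, then add only the guard; the REFACTOR step happens afterward, with both assertions still green.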
Read the right files first
For this repo, the most important first read is SKILL.md. Then inspect the local test setup in your target project: test runner config, existing test conventions, and the nearest tests around the code you plan to change. If the project already has strong patterns, follow them exactly; if not, keep the test minimal and explicit.
test-driven-development skill FAQ
Is this only for experienced engineers?
No. Beginners can use test-driven-development, but they need a clear starting point: one behavior, one failing test, one minimal fix. The skill is easier to learn on small bug fixes than on broad feature work.
How is this different from a normal prompt?
A normal prompt may ask for code that “works.” This skill asks for proof. The test-driven-development guide pushes the agent to define success as a passing test, which reduces ambiguity and makes review easier.
When should I not choose it?
Skip it for documentation, formatting, or changes that cannot be expressed as runtime behavior. Also skip it if the project has no viable test harness and you only need a quick non-behavioral edit.
Does it fit all ecosystems?
Yes in principle, but the exact test commands, assertions, and file structure depend on the stack. The skill is framework-agnostic; your local repo conventions decide whether you use Jest, Vitest, pytest, JUnit, or another runner.
How to Improve test-driven-development skill
Give the agent a sharper failure case
The strongest input names the failing behavior, the expected result, and the boundary condition. Example: “When parseDate("") runs, it should throw InvalidDateError; add the test first, then patch the parser.” This helps the test-driven-development skill avoid vague implementation guesses.
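That prompt translates directly into a test-first sketch. Everything here is hypothetical and exists only to mirror the example: `parseDate` and `InvalidDateError` are names from the prompt, not a real API.

```typescript
// Hypothetical error type named in the prompt example.
class InvalidDateError extends Error {}

// GREEN: the minimal patch that satisfies the RED test below.
function parseDate(input: string): Date {
  if (input === "") throw new InvalidDateError("empty date string");
  const parsed = new Date(input);
  if (Number.isNaN(parsed.getTime())) {
    throw new InvalidDateError(`unparseable date: ${input}`);
  }
  return parsed;
}

// RED first: parseDate("") should throw InvalidDateError.
let threw = false;
try {
  parseDate("");
} catch (e) {
  threw = e instanceof InvalidDateError;
}
if (!threw) throw new Error("FAIL: parseDate(\"\") did not throw InvalidDateError");
console.log("PASS: parseDate(\"\") throws InvalidDateError");
```

The boundary condition (empty string) is stated before any implementation exists, which is exactly what keeps the agent from guessing at a broader fix.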
Share the existing test style
Mention nearby test files, naming patterns, and any helpers or fixtures already used in the project. If the repo uses table-driven tests, mocks, or integration tests for similar behavior, say so. Matching local convention improves trust and keeps the output mergeable.
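If the repo uses table-driven tests, one row per behavior is the convention to match. A minimal sketch of the pattern, with `isPositive` as a hypothetical function chosen purely for illustration:

```typescript
// Hypothetical function under test.
function isPositive(n: number): boolean {
  return n > 0;
}

// Table-driven style: each row names one behavior the test proves.
const cases: Array<{ input: number; expected: boolean; name: string }> = [
  { input: 1, expected: true, name: "positive integer" },
  { input: 0, expected: false, name: "zero boundary" },
  { input: -3, expected: false, name: "negative integer" },
];

for (const c of cases) {
  const got = isPositive(c.input);
  if (got !== c.expected) throw new Error(`FAIL ${c.name}: got ${got}`);
  console.log(`PASS ${c.name}`);
}
```

Adding a new failing case is then just a new row, which keeps the RED step cheap and the diff reviewable.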
Watch for the common failure modes
The biggest mistakes are writing the implementation before the test, using a test that already passes, and expanding the fix beyond the failing case. If the first output is too broad, ask for the smallest possible failing test and one minimal patch only. That is usually the fastest route to reliable test-driven-development usage.
Iterate with evidence, not guesses
After the first pass, ask for the next proof point: another edge case, a regression test, or a refactor that preserves passing tests. If the bug is subtle, request a before/after behavior summary plus the exact test name to add. That keeps the workflow anchored in observable behavior instead of assumptions.
