
writing-skills

by obra

writing-skills is a Skill Authoring guide for creating, editing, and validating agent skills with a test-driven workflow. Learn the key files, prerequisites, and practical steps for pressure scenarios, baseline tests, and concise SKILL.md iteration.

Stars: 121.9k
Favorites: 0
Comments: 0
Added: Mar 29, 2026
Category: Skill Authoring
Install Command
npx skills add obra/superpowers --skill writing-skills
Curation Score

This skill scores 82/100, which means it is a solid directory listing candidate for users who want a real method for authoring and validating agent skills rather than generic writing advice. Repository evidence shows substantial workflow content, explicit use cases, a concrete TDD-style framework for skill creation/testing, and supporting reference files that reduce guesswork for agents and installers.

82/100
Strengths
  • Strong triggerability: the frontmatter and opening sections clearly state that it should be used when creating, editing, or verifying skills before deployment.
  • High agent leverage: it gives a specific TDD-for-documentation workflow, including pressure scenarios, baseline failure testing, and refactoring against loopholes.
  • Good progressive disclosure: SKILL.md is backed by focused reference files and a worked example in examples/CLAUDE_MD_TESTING.md.
Cautions
  • The skill depends on prior understanding of superpowers:test-driven-development, so some users may need to learn that skill before they can apply this one well.
  • Operational support is document-heavy and mostly procedural; there is no install command or packaged helper metadata, so adoption relies on careful reading.
Overview

Overview of writing-skills skill

What writing-skills does

The writing-skills skill is a Skill Authoring guide for creating, editing, and validating agent skills using a test-driven workflow. Its core idea is simple but opinionated: treat skill writing like TDD for process documentation. Instead of drafting advice first and hoping it works, you create pressure scenarios, observe failure without the skill, then write only the guidance that changes behavior.

Who should use writing-skills

The best-fit readers are people authoring skills for Claude Code, Codex-style agent setups, or similar local skill directories. It is especially useful if you are writing skills that enforce discipline, verification steps, or workflows agents might skip under time pressure.

The real job-to-be-done

Most users do not need “help writing markdown.” They need a repeatable way to produce a SKILL.md that agents will actually discover, follow, and keep following when speed, confidence, or sunk-cost pressure pushes them to ignore process. writing-skills is built for that problem.

Why this skill is different from a generic prompt

A generic prompt can help draft a skill. writing-skills gives you a method for proving the skill changes behavior:

  • define pressure scenarios
  • run baseline tests without the skill
  • write documentation against observed failure modes
  • retest and refactor to close loopholes

That makes it more useful for Skill Authoring than a one-shot “write me a skill” instruction.

Important prerequisites and tradeoffs

The biggest adoption blocker is that writing-skills assumes you already understand the repository's TDD framing. The skill explicitly requires background in superpowers:test-driven-development. If you skip that foundation, the writing advice can feel stricter than necessary or oddly test-heavy.

The tradeoff is also clear: this workflow is slower than drafting from intuition, but much better when failure is costly or agents are likely to rationalize skipping the skill.

How to Use writing-skills skill

writing-skills install context

This repository notes that personal skills live in agent-specific directories such as ~/.claude/skills for Claude Code and ~/.agents/skills/ for Codex. If you install from the obra/superpowers repo with a skill manager, make sure the resulting skill lands in the directory your agent actually scans.
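
After installing, a quick sanity check is to confirm the skill actually landed in a directory your agent scans. The sketch below is illustrative, not part of the repository: it simulates an install inside a temp directory (so it is safe to run anywhere) and then checks for SKILL.md, the file an agent needs in order to discover the skill. Substitute your agent's real skills directory for the stand-in path.

```shell
# Sketch: confirm a skill landed where the agent scans.
# This demo simulates an install in a temp dir; in practice you
# would check ~/.claude/skills (Claude Code) or ~/.agents/skills/ (Codex).
base="$(mktemp -d)"
skill_dir="$base/.claude/skills/writing-skills"  # stand-in for the real path
mkdir -p "$skill_dir"
printf '%s\n' '# writing-skills' > "$skill_dir/SKILL.md"

# The actual check: a discoverable skill must have SKILL.md at its root.
if [ -f "$skill_dir/SKILL.md" ]; then
  echo "ok: $skill_dir/SKILL.md present"
else
  echo "missing: $skill_dir/SKILL.md"
fi
```

If the check fails after a real install, the most likely cause is a skill manager writing to a directory your agent does not scan.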

If you are manually reviewing before installation, start here:

  • skills/writing-skills/SKILL.md
  • skills/writing-skills/testing-skills-with-subagents.md
  • skills/writing-skills/anthropic-best-practices.md
  • skills/writing-skills/examples/CLAUDE_MD_TESTING.md

Read these files first

For fastest evaluation, use this reading path:

  1. SKILL.md for the main workflow and positioning
  2. testing-skills-with-subagents.md for how to run RED/GREEN/REFACTOR on skills
  3. anthropic-best-practices.md for concise authoring guidance
  4. examples/CLAUDE_MD_TESTING.md for realistic pressure scenarios
  5. persuasion-principles.md if your skill must resist rationalization
  6. graphviz-conventions.dot and render-graphs.js only if you want diagrams

That path gives better information gain than skimming the top of SKILL.md alone.

What input writing-skills needs from you

writing-skills works best when you bring concrete evidence, not just a topic. Strong inputs include:

  • the exact skill you want to create or revise
  • the agent behavior you want to change
  • examples of failure without the skill
  • situations where the agent is tempted to skip process
  • the target directory and platform where the skill will live

Weak input: “Help me write a testing skill.”

Strong input: “Create a skill that forces pre-deployment verification for database migrations. Agents currently skip rollback checks when fixes seem obvious.”

Turn a rough goal into a usable prompt

A good writing-skills usage pattern is to ask for all four parts at once:

  1. pressure scenarios
  2. baseline failure expectations
  3. the skill structure
  4. the validation plan

Example:

Use writing-skills for Skill Authoring.

Goal: Create a skill for release-checklist enforcement in ~/.claude/skills/release-checks.
Observed failures: agents skip smoke tests when changes look small; they rationalize that CI is enough.
Need:
- 3 pressure scenarios that trigger those shortcuts
- baseline RED expectations without the skill
- a concise SKILL.md outline
- refactor ideas to close loopholes after first test run
Keep it concise and optimized for discoverability.

This is much stronger than asking for “a polished skill doc” because it supplies the failure model the skill must fix.

Suggested workflow for writing-skills usage

A practical workflow looks like this:

  1. Define the behavior to enforce
  2. Write 2–5 pressure scenarios
  3. Test the agent without the skill
  4. Capture exact rationalizations and shortcuts
  5. Draft SKILL.md only against those failures
  6. Re-test with the skill loaded
  7. Tighten wording where the agent still slips
  8. Remove any explanation that did not improve compliance

That last step matters because the bundled best-practices guidance emphasizes token efficiency and concise instructions.
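
Steps 3 and 4 of that workflow can be captured in a small harness. The sketch below is an assumption-heavy illustration: `run_scenario` is a hypothetical stand-in for however you actually send a pressure scenario to your agent (here it is stubbed with a canned baseline transcript), and the grep patterns stand in for rationalizations you captured during baseline testing.

```shell
# RED check sketch: does the baseline agent (no skill loaded) show
# the rationalizations we expect? run_scenario is a hypothetical
# stand-in for a real agent invocation; this stub returns a canned
# baseline transcript so the sketch is self-contained.
run_scenario() {
  echo "This is a small change; CI will catch it. Skipping smoke tests."
}

transcript="$(run_scenario 'urgent hotfix, small diff')"

# Patterns captured in step 4 (exact rationalizations observed).
if echo "$transcript" | grep -qiE 'small change|CI will catch it'; then
  echo "RED: baseline rationalizes skipping the process, as expected"
else
  echo "baseline complied; this scenario may not need the skill"
fi
```

The value of automating even this much is that after drafting SKILL.md you can rerun the same scenarios and diff the transcripts, rather than judging compliance by impression.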

When the test-driven method matters most

Use writing-skills for Skill Authoring when the skill has compliance cost:

  • extra testing
  • slower verification
  • documentation checks
  • restart/rework requirements
  • rules that conflict with speed incentives

This method matters less for pure reference skills like API syntax cheatsheets, where agents have little reason to bypass the content.

How to use the subagent testing reference

testing-skills-with-subagents.md is the practical companion document. It helps you test whether your skill survives real pressure instead of only sounding correct in calm conditions. Read it when you need:

  • scenario formats
  • RED/GREEN/REFACTOR mapping
  • rationalization capture
  • examples of pressure-driven noncompliance

If your first draft seems fine but adoption is weak, this file is usually the fastest route to improvement.

Use the example scenarios, but do not copy them blindly

examples/CLAUDE_MD_TESTING.md is useful because it shows what pressure scenarios actually look like: time pressure, sunk cost, authority, and familiarity bias. The mistake is copying those scenarios verbatim for unrelated skills.

Instead, adapt the pressure type to your own workflow. For example:

  • deployment skill → urgency and rollback fear
  • review skill → confidence and speed bias
  • security skill → “just this once” rationalization
  • style skill → low adoption cost, so lighter testing

How persuasion guidance fits the workflow

persuasion-principles.md is not filler; it is there because some skills fail even when the process is clear. If your skill must enforce behavior that agents commonly resist, stronger phrasing can help. The file suggests concrete patterns such as authority, commitment, and explicit announcements.

Use this carefully. The point is not to make the skill louder. It is to make required actions harder to rationalize away.

Concision rules that affect output quality

One of the most valuable parts of this repository is the reminder that skills share context budget. writing-skills is not telling you to write more; it is telling you to write only what changes behavior.

Good signs:

  • concrete triggers
  • crisp required actions
  • short examples tied to real failure
  • minimal background

Bad signs:

  • long motivational prose
  • repeated definitions
  • process history
  • generic “best practices” that Claude already knows

Optional graph tooling

The skill directory includes render-graphs.js, which extracts dot blocks from SKILL.md and renders SVG diagrams if graphviz is installed. This is optional and mainly useful when your workflow has branching states or review gates that humans need to inspect visually. It is not required to use the writing-skills skill, but it can help maintainers debug process complexity.
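
If you want to preview such a diagram without the script, plain graphviz does the same job. A minimal sketch, with an illustrative digraph (not taken from the repository's SKILL.md); the render step is skipped gracefully when the `dot` binary is not installed:

```shell
# Sketch: render a dot block by hand, as render-graphs.js would.
# The graph below is illustrative, not from the repository.
workdir="$(mktemp -d)"
cat > "$workdir/skill-flow.dot" <<'EOF'
digraph skill_flow {
  draft -> baseline_test -> write_skill -> retest;
  retest -> refactor [label="loopholes found"];
  refactor -> retest;
}
EOF

# Render only if graphviz is available; otherwise report and move on.
if command -v dot >/dev/null 2>&1; then
  dot -Tsvg "$workdir/skill-flow.dot" -o "$workdir/skill-flow.svg"
  echo "rendered: $workdir/skill-flow.svg"
else
  echo "graphviz not installed; wrote $workdir/skill-flow.dot only"
fi
```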

writing-skills skill FAQ

Is writing-skills worth installing if I can already write prompts?

Yes, if your problem is reliability rather than drafting speed. Ordinary prompts can generate a decent-looking skill quickly. writing-skills is useful when you need confidence that the final skill changes agent behavior under pressure.

Is writing-skills beginner-friendly?

Partly. The writing itself is accessible, but the method assumes comfort with TDD-style thinking. New Skill Authoring users may need to read the repository's TDD-related material first, or they may mistake the workflow for unnecessary ceremony.

When is writing-skills a bad fit?

Skip writing-skills for:

  • simple reference-only skills
  • one-off notes that are not meant to be reused
  • topics where there is no realistic temptation to violate the guidance
  • situations where you cannot run any before/after testing

In those cases, a lighter authoring workflow is often enough.

How is writing-skills different from Anthropic's skill best practices?

The included anthropic-best-practices.md focuses on concise, discoverable, context-efficient skill writing. writing-skills adds a stronger behavior-change lens: observe failures first, then write only what fixes them. They are complementary, not competing guides.

Does writing-skills require extra repository tooling?

No major tooling is required to benefit from the method. The testing guidance and examples are the key assets. Graph rendering is optional, and there are no required support scripts for the core authoring workflow.

Can I use writing-skills for editing an existing skill?

Yes. In fact, that is one of the better use cases. If a skill exists but agents still ignore or misuse it, writing-skills helps you identify the actual failure mode, trim useless content, and rewrite the instructions that matter most.

How to Improve writing-skills skill

Start from observed failures, not ideal documentation

The fastest way to improve writing-skills results is to bring failure evidence. If you only describe the ideal process, the output tends to become generic. If you provide actual shortcuts agents took, the resulting skill becomes sharper and shorter.

Provide stronger pressure scenarios

Good scenarios create real temptation to skip the skill. Include:

  • time pressure
  • confidence from prior experience
  • sunk cost
  • authority pressure from a human
  • “the fix is obvious” framing

Those conditions reveal where your instructions are too soft or too vague.

Capture the agent's exact rationalizations

Do not summarize the failure as “ignored the skill.” Record what the agent actually said or implied:

  • “This is a small change”
  • “CI will catch it”
  • “I already know this pattern”
  • “Reading the skill would take too long”

Those rationalizations tell you what your revised writing-skills usage prompt and final skill wording must directly address.
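
One lightweight way to keep these records is a plain text file of observed rationalizations that your retests grep against. A minimal sketch; the file name and the transcript string are illustrative, and the entries are the examples quoted above:

```shell
# Sketch: keep observed rationalizations in a file, then check a
# new transcript against each one. Names and contents are illustrative.
workdir="$(mktemp -d)"
cat > "$workdir/rationalizations.txt" <<'EOF'
This is a small change
CI will catch it
I already know this pattern
Reading the skill would take too long
EOF

transcript="Looks trivial. CI will catch it if anything breaks."
matches=0
while IFS= read -r line; do
  # Case-insensitive fixed-string match against the transcript.
  if printf '%s' "$transcript" | grep -qiF "$line"; then
    echo "matched: $line"
    matches=$((matches + 1))
  fi
done < "$workdir/rationalizations.txt"
echo "matches: $matches"
```

After a skill revision, the same check should report zero matches; any line that still matches points at wording the skill has not yet closed off.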

Tighten wording where compliance matters

If the skill is meant to enforce non-optional behavior, vague language hurts. Replace soft suggestions with explicit triggers and required actions. The persuasion guide is useful here, but the main improvement comes from specificity:

  • when to load the skill
  • what to do first
  • what cannot be skipped
  • what counts as success

Remove content that does not change behavior

A common failure mode in writing-skills skill output is overexplaining. If a paragraph did not help discovery, compliance, or testing, cut it. The repository's best-practices file makes this a central rule for a reason.

Iterate after the first passing result

A first “GREEN” run is not enough if the skill only works in easy conditions. Re-test with harsher prompts and alternate phrasings. Ask whether the skill still works when the agent is rushed, certain, or trying to preserve already-finished work.

Pair writing-skills with a repository-specific example

If your team has a recurring workflow, include one worked example in the target skill's domain. This often improves adoption more than adding more abstract rules. Keep the example short and pressure-tested rather than encyclopedic.

Improve prompts by asking for structure, not polish

When invoking writing-skills, ask for:

  • scenario list
  • failure analysis
  • concise skill outline
  • loophole-closing edits

Do not lead with “make it polished” or “make it comprehensive.” That usually increases length without improving compliance.

Check whether the skill should exist at all

One useful outcome of the writing-skills guide material is discovering that some topics do not need a skill at all. If the process is obvious, low-risk, or rarely repeated, a skill may add maintenance cost without enough behavioral gain. That is a valid conclusion, and it improves repository quality.

Use writing-skills to refactor, not just create

The highest-value use of writing-skills is often refactoring an existing skill after watching it fail. That turns the method from documentation drafting into behavior engineering, which is where this repository provides the most practical value.

Ratings & Reviews

No ratings yet