write-a-skill
by mattpocock

write-a-skill helps skill-authoring teams create new agent skills with a clear SKILL.md, a smart file layout, and better trigger wording for reliable agent routing.
This skill scores 78/100, making it a solid option for users who want help authoring new agent skills. It provides enough structure, triggers, and drafting guidance to be more useful than a generic prompt, though users should expect a documentation-oriented helper rather than a fully tooled skill-building system.
- Strong triggerability: the frontmatter clearly says to use it when a user wants to create, write, or build a new skill.
- Operationally clear workflow: it gives a 3-step process covering requirements gathering, drafting, and user review.
- Good leverage for skill authors: it includes a concrete folder structure and a SKILL.md template with progressive disclosure guidance.
- No bundled examples, scripts, or reference files, so agents must translate the guidance into a finished skill without reusable artifacts.
- The excerpt emphasizes structure and process more than validation criteria or edge-case handling, which may leave some guesswork when refining complex skills.
Overview of write-a-skill skill
What the write-a-skill skill does
write-a-skill is a meta-skill for Skill Authoring: it helps you create a new skill package with the right structure, a usable SKILL.md, and optional supporting files such as references, examples, or scripts. Its real value is not just “write documentation,” but turning a vague capability idea into something an agent can reliably discover and use.
Best fit for this skill
The write-a-skill skill is best for:
- people creating a new skill from scratch
- teams standardizing how skills are authored
- authors who need to decide whether instructions belong in SKILL.md, a reference file, or a script
- anyone who wants better agent routing, not just prettier docs
If you already know your exact folder structure and only need prose editing, a normal prompt may be enough.
The job to be done
Most skill authors get blocked on three things: scope, trigger wording, and file layout. write-a-skill addresses those directly by pushing you to gather requirements first, then draft the minimal working skill, then review it against real use cases before treating it as done.
What makes write-a-skill different
The main differentiator is its emphasis on agent usability:
- the skill description matters because it is what the agent sees when deciding whether to load the skill
- SKILL.md should stay concise and action-oriented
- larger detail should move into separate files instead of bloating the main entrypoint
- scripts are recommended only when deterministic operations are actually needed
That makes write-a-skill more useful than a generic “write me a skill” prompt for authors who care about invocation quality.
What you should know before installing
This skill is lightweight: the repository evidence shows only SKILL.md, with no extra scripts or bundled references. That means adoption is easy, but it also means you should expect guidance, templates, and process—not automation. If you want code generation, testing scaffolds, or validation tooling, you will need to add those yourself.
How to Use write-a-skill skill
write-a-skill install context
In a skills-enabled environment, install write-a-skill from the mattpocock/skills repository using your platform's normal skill installation flow. A commonly used command pattern is:
npx skills add mattpocock/skills --skill write-a-skill
If your runtime uses a different installer, adapt the repository and skill slug accordingly. The important part is that the source is mattpocock/skills and the skill path is write-a-skill.
Read this file first
Start with:
SKILL.md
There are no additional support files in this skill directory, so nearly all of the useful guidance lives there. This is good for quick evaluation: you can understand the approach fast without spelunking a large tree.
What input write-a-skill needs
To get good output from write-a-skill, bring the inputs it explicitly asks for:
- the task or domain the new skill should cover
- the use cases it must handle
- whether it needs executable scripts or only instructions
- any reference material to include
If you skip these, the generated skill will often be too broad, too generic, or structured around imagined needs instead of real ones.
Turn a rough idea into a strong request
Weak input:
I need a skill for writing release notes.
Stronger input:
Create a skill for generating software release notes from merged PRs and commit summaries. It should support weekly releases, patch releases, and urgent hotfixes. No scripts for now. Include a short Quick start, a review checklist, and examples for internal engineering teams.
The stronger version improves:
- scope boundaries
- target users
- workflow expectations
- file decisions
- trigger wording for the final description
A practical write-a-skill workflow
Use this sequence when authoring with write-a-skill:
- Define the capability in one sentence.
- List 3–5 real tasks the skill must support.
- Decide whether any step is deterministic enough for a script.
- Ask the skill to draft SKILL.md.
- Split large detail into REFERENCE.md or EXAMPLES.md if needed.
- Review whether the description would help an agent choose the skill correctly.
- Revise after testing with real prompts.
This matches the repository’s own process: gather requirements, draft, then review with the user.
How to shape the final skill structure
The source suggests a simple structure:
skill-name/
├── SKILL.md
├── REFERENCE.md
├── EXAMPLES.md
└── scripts/
Use it selectively:
- SKILL.md for core instructions and entry flow
- REFERENCE.md for detailed rules or long background
- EXAMPLES.md when examples materially improve execution
- scripts/ only for stable, repeatable operations
Do not add files just because the template shows them.
Why the description matters so much
A key point in the write-a-skill guide is that the description is the main routing signal. If the description is vague, your skill may not be loaded when it should be. If it is too broad, it may be loaded for the wrong tasks.
Good description pattern:
- what the skill does
- when to use it
- what kinds of requests should trigger it
Bad description pattern:
- buzzwords
- broad claims
- no trigger cues
- no domain or task specificity
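As an illustration, a description that follows the good pattern might look like this in SKILL.md frontmatter. The skill name and wording here are hypothetical, not taken from the repository:

```markdown
---
name: write-release-notes
description: >
  Generates software release notes from merged PRs and commit summaries.
  Use when the user asks to draft, summarize, or publish release notes
  for weekly, patch, or hotfix releases. Not for changelog automation
  or version bumping.
---
```

Note how the description names the task, includes "use when" trigger language, and states what the skill is not for, giving the agent both positive and negative routing cues.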
What a good SKILL.md should contain
For most new skills, aim for:
- a clear Quick start
- one or more workflows with decision points
- concise instructions that tell the agent what to do first
- links to separate files for long details
This is where write-a-skill is most helpful: it nudges you toward progressive disclosure instead of dumping everything into one giant markdown file.
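A minimal sketch of such a SKILL.md body, with purely hypothetical section content, could look like:

```markdown
# Triage Bug Reports

## Quick start
1. Classify the report by component and severity.
2. If repro steps are missing, ask the reporter for them before routing.

## Workflow
- Clear repro and known component: route directly to the owning team.
- Unclear report: request details, then re-triage.

For detailed severity rules, see REFERENCE.md.
For sample bug reports, see EXAMPLES.md.
```

The main file stays short and action-first, while the long material lives behind the two linked files.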
When to add scripts
Add scripts only if the task includes deterministic operations such as:
- formatting or transforming files in a repeatable way
- extracting structured data
- generating stable artifacts from known inputs
Do not add scripts for judgment-heavy tasks that are mostly instruction and reasoning. In those cases, clearer workflow writing is usually the better investment.
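To make "extracting structured data" concrete, here is a sketch of the kind of small deterministic helper that would justify a scripts/ directory. The function name, input format, and use case are illustrative assumptions, not part of the source skill:

```python
import json
import re

def extract_pr_refs(commit_messages):
    """Pull PR numbers like '(#123)' out of commit subject lines.

    Deterministic: the same input always yields the same sorted,
    de-duplicated list, which makes it a good fit for a script
    rather than freeform agent reasoning.
    """
    pr_pattern = re.compile(r"#(\d+)")
    refs = []
    for msg in commit_messages:
        refs.extend(int(n) for n in pr_pattern.findall(msg))
    return sorted(set(refs))

if __name__ == "__main__":
    commits = [
        "Fix login timeout (#101)",
        "Add retry logic (#101, #102)",
        "Docs cleanup",
    ]
    # Emit JSON so an agent or another tool can consume the result.
    print(json.dumps(extract_pr_refs(commits)))
```

A task like this passes the test above: it is repeatable, has known inputs, and produces a stable artifact, so scripting it is cheaper and more reliable than re-deriving the logic in prose each time.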
A high-signal prompt you can use
Try a prompt like this when invoking write-a-skill:
Use write-a-skill to draft a new skill called "triage-bug-reports".
Requirements:
- Domain: software support and bug intake
- Use cases: classify reports, request missing repro steps, suggest severity, route to correct team
- Scripts: none initially
- Reference material: include a short checklist and 3 example bug reports
- Constraints: keep SKILL.md concise and move detailed examples into EXAMPLES.md
- Success criteria: an agent should know exactly when to load this skill from the description alone
This works because it gives the skill enough information to make structure decisions instead of forcing generic output.
write-a-skill skill FAQ
Is write-a-skill worth using over a normal prompt?
Yes, if your problem is skill authoring quality rather than raw writing speed. The write-a-skill skill gives you a process: gather requirements, choose file boundaries, and optimize for agent discoverability. A normal prompt may produce a draft faster, but often misses routing cues and structure decisions.
Is write-a-skill beginner-friendly?
Yes. It is one of the more approachable skills because the repository is small and the workflow is explicit. Beginners can use it to avoid common first-time mistakes like stuffing all details into SKILL.md or writing a description that never triggers correctly.
When should I not use write-a-skill?
Skip write-a-skill if:
- you only need light editing on an existing mature skill
- your organization already has a strict authoring template and validation pipeline
- you need automated testing, packaging, or publishing support rather than writing guidance
In those cases, the skill may be too lightweight for your actual bottleneck.
Does it create the whole skill automatically?
Not really. It helps you design and draft the skill, but it does not ship with helper scripts, generators, or validators in this folder. Think of it as structured authoring guidance, not a full scaffolding tool.
How does it compare with copying another skill?
Copying can be faster, but it also imports irrelevant assumptions. Using write-a-skill is better when you want to derive the right shape from your use cases instead of retrofitting a borrowed structure.
What is the biggest adoption risk?
The biggest risk is giving weak requirements. Because the source skill is mostly process guidance, poor input leads directly to generic output. If you want a high-quality result, the burden is on you to specify tasks, triggers, boundaries, and whether scripts are justified.
How to Improve write-a-skill skill
Start with real triggers, not abstract capability labels
The fastest way to improve write-a-skill output is to describe the moments when an agent should load the new skill. For example, “when the user asks to summarize weekly product changes into stakeholder-ready release notes” is better than “release management.”
This directly improves the final description and routing quality.
Provide use cases with edge conditions
Do not stop at the happy path. Include:
- common requests
- difficult variants
- what the skill should refuse or defer
- whether outputs should be terse, formal, checklist-based, or example-driven
This helps the draft avoid being overgeneralized.
Keep SKILL.md short and move bulk elsewhere
One common failure mode is overloading the main file. If the first draft becomes long or repetitive, split it:
- core actions in SKILL.md
- deep explanation in REFERENCE.md
- demonstrations in EXAMPLES.md
This follows the skill’s own progressive disclosure advice and usually makes the skill easier for agents to execute.
Write a better description than your first instinct
Authors often write descriptions for humans, not for agent selection. Improve the description by checking:
- does it name the task plainly?
- does it include “use when” trigger language?
- does it distinguish this skill from adjacent ones?
- would an agent know when not to use it?
This is one of the highest-leverage improvements you can make.
Add scripts only after proving the need
Another common mistake is premature scripting. First test whether clear instructions are enough. Only add a scripts/ helper when you can point to a repeatable task that is better handled deterministically. This keeps the skill easier to maintain and less brittle.
Review the draft against three real prompts
After the first draft, test it with:
- an ideal request
- a messy but still valid request
- a borderline request that should not fully match
If the skill behaves the same for all three, the scope is probably too loose. Tighten the description and workflow.
Ask for revision with specific feedback
When iterating on write-a-skill, do not say “make it better.” Say things like:
- tighten the trigger conditions in the description
- move long examples out of SKILL.md
- add a review checklist for output quality
- clarify when to use a script versus instructions
- narrow the skill to internal support teams only
Specific revision requests produce much stronger second drafts than generic refinement asks.
Improve for maintainability, not just first-run output
A skill that works once but is hard to update will age poorly. Before finalizing, check:
- are file names obvious?
- are instructions duplicated unnecessarily?
- is the workflow stable if new examples are added later?
- can another author tell what belongs in the main file versus support files?
That is the practical standard to use when evaluating write-a-skill for Skill Authoring.
