sharp-edges
by trailofbits

The sharp-edges skill helps you find APIs, configs, and interfaces where the easy path leads to insecure use. Use it to review authentication flows, cryptographic wrappers, dangerous defaults, null or zero semantics, and misuse-prone design choices. It is a strong fit for Security Audit work when you need concrete footguns, not generic security guesses.
This skill scores 85/100, which means it is a solid listing candidate for directory users who need misuse-resistance analysis for APIs, configuration, and crypto-facing interfaces. The repository gives enough workflow structure and reference depth to justify installation, though it is more analysis-oriented than task-automating and lacks an explicit install command in SKILL.md.
- Strong triggerability: the description names clear use cases and triggers such as footgun, misuse-resistant, secure defaults, API usability, and dangerous configuration.
- Operationally useful workflow: the skill defines when to use it and not use it, and the agent section describes a four-phase analysis workflow with on-demand language-specific references.
- Good supporting evidence: 16 reference files cover authentication, configuration, cryptographic APIs, case studies, and multiple languages, giving agents concrete patterns to consult.
- No install command is provided in SKILL.md, so users may need to infer setup or usage from the repository structure.
- The skill is broad and analytical rather than narrowly scoped, so agents may still need judgment to map a specific codebase or interface to the right reference.
Overview of sharp-edges skill
What sharp-edges does
The sharp-edges skill helps you spot security footguns: APIs, config options, and interface choices that make insecure use the easiest use. It is most useful when you need a Security Audit lens on a library, service, or SDK and want to know whether the design itself encourages mistakes.
Who should install it
Use the sharp-edges skill if you review API design, authentication flows, cryptographic wrappers, or security-sensitive configuration. It is a strong fit for engineers, appsec reviewers, and AI agents that need to judge whether an interface is misuse-resistant rather than merely functional.
Why it differs from a normal prompt
A generic prompt often asks “is this secure?” and returns shallow findings. sharp-edges is aimed at the harder question: does the easy path lead to insecure outcomes? That makes it better for finding dangerous defaults, ambiguous zero values, algorithm selection problems, and APIs that invite silent misuse.
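To make the contrast concrete, here is a minimal sketch of what "the easy path leads to insecure outcomes" looks like in practice. All names (`connect_sharp`, `connect_safe`) are hypothetical illustrations, not code from the skill itself:

```python
# Hypothetical "sharp edge": the shortest call is the insecure one.
# A timeout of 0 silently means "wait forever" and TLS verification
# is off unless the caller remembers to turn it on.
def connect_sharp(host, timeout=0, verify_tls=False):
    return {"host": host, "timeout": timeout, "verify_tls": verify_tls}

# Misuse-resistant variant: timeout is a required keyword argument,
# TLS is on by default, and the ambiguous zero is rejected loudly.
def connect_safe(host, *, timeout, verify_tls=True):
    if timeout <= 0:
        raise ValueError("timeout must be a positive number of seconds")
    return {"host": host, "timeout": timeout, "verify_tls": verify_tls}

# The sharp version accepts the dangerous call without complaint:
assert connect_sharp("example.com") == {
    "host": "example.com", "timeout": 0, "verify_tls": False
}

# The safe version refuses the ambiguous zero:
try:
    connect_safe("example.com", timeout=0)
except ValueError as exc:
    print(exc)  # timeout must be a positive number of seconds
```

Both functions "work," which is exactly why a generic "is this secure?" prompt tends to miss the first one: the problem is the default, not a bug.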
How to Use sharp-edges skill
Install and load it correctly
Use the repository install flow, then run the skill in the context of the target codebase:
npx skills add trailofbits/skills --skill sharp-edges
For best results, pair the skill with the component you are evaluating, not the whole monorepo by default. The skill is strongest when you give it a specific API, config file, package, or authentication surface.
Give the skill the right input
A good sharp-edges usage prompt names the surface, the threat, and the decision you want made. For example:
- “Review this login API for misuse-resistant design and footguns.”
- “Assess whether this config schema has dangerous zero/null semantics.”
- “Evaluate this crypto wrapper for algorithm-selection and downgrade risks.”
- “Use sharp-edges to identify insecure defaults before we ship this SDK.”
Include the actual interface, sample configs, and any “safe by default” expectations. If you only say “analyze security,” the result will be less precise.
Read these files first
Start with SKILL.md, then inspect the references/ files that match your stack and surface area. The most useful first reads are usually:
- references/config-patterns.md
- references/crypto-apis.md
- references/auth-patterns.md
- one language-specific file such as references/lang-python.md or references/lang-javascript.md
- references/case-studies.md for real misuse patterns
These references help you translate a vague concern into concrete checks instead of guessing.
Workflow that produces better findings
A practical sharp-edges guide workflow is:
- Identify the security decision the interface exposes.
- Look for defaults, sentinel values, and “skip” behavior.
- Test whether untrusted input can choose algorithms, modes, or trust boundaries.
- Check whether the safe path is simpler than the unsafe one.
- Validate whether the design prevents misuse or merely documents it.
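Step 3 above, checking whether untrusted input can choose algorithms or modes, can be sketched as a small probe. The token-verification functions below are hypothetical stand-ins for the kind of wrapper the skill reviews, not the skill's own code:

```python
import hashlib
import hmac

SECRET = b"server-secret"

def verify_sharp(header: dict, payload: bytes, sig: bytes) -> bool:
    """Footgun: the attacker-controlled header picks the mode,
    including 'none', which skips verification entirely."""
    if header.get("alg") == "none":
        return True  # silent skip -- the sharp edge
    mac = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(mac, sig)

def verify_safe(payload: bytes, sig: bytes) -> bool:
    """Misuse-resistant: the server pins the algorithm,
    so input cannot downgrade or disable verification."""
    mac = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(mac, sig)

forged = b'{"admin": true}'
assert verify_sharp({"alg": "none"}, forged, b"") is True   # attacker wins
assert verify_safe(forged, b"") is False                    # downgrade impossible
```

The probe is the same question the workflow asks: who decides the algorithm, the server or the input?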
If you are using the sharp-edges skill on your own prompt, explicitly ask for footguns, unsafe defaults, and boundary cases. That nudges the analysis toward design-level risk instead of implementation bugs.
sharp-edges skill FAQ
Is sharp-edges only for code reviews?
No. It is also useful for reviewing API proposals, SDK ergonomics, config schemas, and security-sensitive product settings. The best fit is any place where a user can accidentally choose an insecure option.
When should I not use it?
Do not use sharp-edges for ordinary implementation bugs, general business logic mistakes, or performance tuning. Those need different review methods. This skill is about design-level misuse risk, not every security issue.
Is it beginner-friendly?
Yes, if you can describe the interface you are reviewing and what “safe” should mean. Beginners get the best value when they provide a concrete file, endpoint, or config block instead of a broad request.
Does it replace a security audit?
No. It supports a Security Audit by finding footguns and insecure defaults, but it does not replace threat modeling, code review, or exploit validation. Use it early to catch design mistakes before they spread.
How to Improve sharp-edges skill
Provide the interface, not just the goal
The sharp-edges skill improves when you include the exact surface under review. Better input looks like:
- “Review AuthConfig in config.ts for null/zero semantics and insecure defaults.”
- “Assess whether this JWT verification wrapper allows algorithm confusion.”
- “Check whether this password reset API is misuse-resistant for callers.”
That is better than “find problems,” because the analysis can focus on the sharp edges that matter.
Tell it what counts as unsafe
State your security expectations up front: no algorithm downgrades, no silent fallback, no zero meaning “disable protection,” no caller-controlled trust decisions. This makes the sharp-edges skill compare the design against a clear safety bar.
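One way to state that safety bar unambiguously is to encode it as explicit checks. This is an illustrative sketch over an assumed config shape (`rate_limit`, `session_ttl`, `allowed_origins` are hypothetical keys):

```python
def check_safety_bar(config: dict) -> list[str]:
    """Flag settings where a zero or empty value silently disables protection."""
    findings = []
    if config.get("rate_limit", 0) == 0:
        findings.append("rate_limit=0 disables rate limiting entirely")
    if config.get("session_ttl", 0) == 0:
        findings.append("session_ttl=0 means sessions never expire")
    if not config.get("allowed_origins"):
        findings.append("empty allowed_origins may mean 'allow all'")
    return findings

print(check_safety_bar({"rate_limit": 0, "session_ttl": 3600, "allowed_origins": []}))
```

Handing the skill a bar like this turns "find problems" into "tell me where the design violates these rules."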
Expect the first pass to surface design issues
The first output often identifies obvious footguns: ambiguous flags, dangerous defaults, missing validation, or API shapes that invite insecure use. Improve the next pass by asking for one of these:
- “List the highest-risk misuse paths first.”
- “Show which calls are safe by default and which require extra care.”
- “Map each finding to the exact input or option that makes it dangerous.”
Iterate with real examples
The fastest way to sharpen results is to provide real call sites, sample configs, or API docs. If a finding matters for a Security Audit, ask the model to test the risky path with concrete values like null, 0, empty strings, or user-selected algorithms. That usually reveals whether the edge is theoretical or actually exploitable.
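Probing with those concrete values can be as simple as the sketch below. `validate_password` is a hypothetical target chosen to show the pattern, not code from the skill:

```python
def validate_password(pw, min_length=8):
    """Hypothetical validator with a sharp edge: min_length=0 accepts anything."""
    if pw is None:
        return False
    return len(pw) >= min_length

# Probe the risky path with the concrete values the text suggests.
for probe in [None, "", "x", "correct horse battery staple"]:
    print(repr(probe), validate_password(probe, min_length=0))

# min_length=0 turns "" into a valid password -- this edge is
# exploitable in practice, not just theoretical.
```

A finding backed by a failing probe like this is far more actionable than a prose warning.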
