verification-before-completion
by obra
verification-before-completion is a final-check skill that blocks unsupported completion claims. Learn when to use it, how to install it from obra/superpowers, and how to match each status claim to fresh verification evidence.
This skill scores 78/100, which makes it a solid listing candidate for directory users. It is highly triggerable and easy to understand: the description and "Iron Law" clearly tell an agent when to invoke it and what behavior to enforce before claiming success. Its value is mainly behavioral rather than procedural, so it improves reliability, but users should expect to supply project-specific verification commands themselves.
- Very clear trigger: use it before claiming work is complete, fixed, or passing, especially before commits or PRs.
- Operational core is explicit: identify the proving command, run it fresh, read full output, verify the claim, then report with evidence.
- Strong anti-handwaving guidance with concrete failure examples like tests, lint, build, and bug-fix verification.
- No support files, scripts, or repo-specific command references, so agents still need to infer the right verification command from context.
- More policy than executable workflow; it enforces discipline well but offers limited practical help for choosing commands in unfamiliar projects.
Overview of verification-before-completion skill
What verification-before-completion is for
The verification-before-completion skill is a strict final-check workflow for moments when you are about to say work is done, fixed, passing, or ready for review. Its job is simple: stop unsupported success claims and force fresh evidence first.
This is most useful for AI-assisted coding, agent workflows, commit preparation, and PR handoff. If you have ever seen “tests should pass,” “the bug is fixed,” or “build looks good” stated without actually running the right command, this skill targets that exact failure mode.
Who should install this skill
Best-fit users include:
- developers using AI agents to edit code
- reviewers who want cleaner, evidence-backed status updates
- teams tired of optimistic but unverified completion messages
- anyone using Skill Validation patterns where proof matters more than confidence
If you mainly want code generation, planning, or broad debugging help, this is not a replacement for those skills. The verification-before-completion skill is a guardrail, not a full delivery workflow.
The real job-to-be-done
The real job is not “run tests.” It is:
- identify what evidence would prove the claim
- run that exact verification now
- read the actual output
- only make the claim the evidence supports
That sounds obvious, but it is exactly where many AI-assisted workflows break. This skill turns that expectation into a hard gate before completion language.
What makes this skill different
The key differentiator is its narrowness. verification-before-completion does not try to be smart about your whole repository. It enforces one high-value rule:
no completion claims without fresh verification evidence
That makes it especially good as a finishing-layer skill. Compared with a normal “be careful and verify” prompt, this one is easier to trigger consistently because it gives a repeatable decision function: identify, run, read, verify, then speak.
When this skill is a strong fit
Use the verification-before-completion skill when you are about to say things like:
- “tests pass”
- “the bug is fixed”
- “the build succeeds”
- “this is ready to merge”
- “the linter is clean”
Those claims require different evidence. The skill helps prevent one common error: proving something adjacent, then overstating the result.
How to Use verification-before-completion skill
Install context for verification-before-completion
Install it from the obra/superpowers repository:
npx skills add https://github.com/obra/superpowers --skill verification-before-completion
Because this skill is contained in a single SKILL.md, adoption is lightweight. There are no helper scripts or extra resource files to understand first.
Read this file first
Start with:
skills/verification-before-completion/SKILL.md
That file contains the entire behavioral contract. Since the repository support structure is minimal here, reading SKILL.md first is enough to understand whether the skill matches your workflow.
What input the skill needs from you
The verification-before-completion skill works best when you provide three things:
- the claim you are about to make
- the command that would actually prove it
- any environment limits blocking verification
Example inputs:
- “I want to say the fix works. The proving command is pytest tests/api/test_login.py -q.”
- “I want to say the build passes. The expected verification is npm run build.”
- “I think lint is clean, but I have not run ruff check . yet.”
Without a concrete claim and command, the skill can only give generic caution.
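If you drive the skill from your own tooling, it can help to structure those three inputs explicitly. Below is a minimal sketch that assumes nothing about the skill's internals; the class and field names are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class CompletionClaim:
    """Illustrative container for the three inputs the skill needs."""
    claim: str                    # the exact sentence you want to publish
    proving_command: str          # the command whose output would prove it
    environment_limits: list[str] = field(default_factory=list)  # known blockers

# Hypothetical example pairing a claim with its proof path and one limitation.
pending = CompletionClaim(
    claim="the login fix works",
    proving_command="pytest tests/api/test_login.py -q",
    environment_limits=["no database credentials in this sandbox"],
)
```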
Turn a rough goal into a usable prompt
Weak prompt:
- “Can I say this is done?”
Better prompt:
- “Before I claim this is complete, apply the verification-before-completion skill. The claim is: ‘the login bug is fixed.’ The best proving command is pytest tests/auth/test_login_bug.py -q. If that is not enough, tell me what additional verification is required.”
Why this is better:
- it names the claim
- it proposes a proof path
- it lets the skill challenge underpowered verification
How to call the skill in practice
Use verification-before-completion right before any completion-style message, commit summary, or PR note. A practical workflow is:
- finish the code change
- decide what exact claim you want to make
- identify the command that proves that claim
- run the full command fresh
- inspect output and exit status
- downgrade or qualify the claim if evidence is incomplete
This sequencing matters. The skill is most valuable at the point where people are tempted to summarize optimistically.
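As a rough illustration of that sequencing, here is a minimal Python sketch of a pre-completion gate. It assumes the proving command is shell-runnable and that exit code zero means success; verify_claim is a hypothetical helper, not part of the skill:

```python
import subprocess

def verify_claim(claim: str, proving_command: str) -> str:
    """Run the proving command fresh and return an evidence-backed status line."""
    result = subprocess.run(
        proving_command, shell=True, capture_output=True, text=True
    )
    # Read the actual output, not just the exit code, before speaking.
    summary = result.stdout.strip().splitlines()[-1] if result.stdout.strip() else "no output"
    if result.returncode == 0:
        return f"Verified: {claim} (ran `{proving_command}`: {summary})"
    # Evidence does not support the claim, so downgrade it.
    return f"NOT verified: {claim} (`{proving_command}` exited {result.returncode})"

print(verify_claim("tests pass", "pytest tests/auth/test_login_bug.py -q"))
```

The design point is that the claim and its proving command travel together, so the final message cannot outrun its evidence.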
Match claims to the right proof
A major practical benefit of the verification-before-completion skill is that it stops proof mismatch.
Examples:
- Claim: “Tests pass”. Proof: the relevant full test command with zero failures.
- Claim: “Linter is clean”. Proof: the linter command showing zero errors.
- Claim: “Build succeeds”. Proof: the build command exiting successfully.
- Claim: “Bug is fixed”. Proof: a verification step that reproduces the original failure path and now passes.
A passing linter does not prove a successful build. A changed file does not prove a bug fix. This distinction is where many weak agent outputs fail.
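If you maintain your own agent tooling, you can make this mapping explicit with a lookup table. The commands below are placeholders, assuming a project tested with pytest, linted with ruff, and built with npm; substitute whatever your repository actually uses:

```python
# Hypothetical claim-to-proof table; every command here is a placeholder.
CLAIM_PROOFS = {
    "tests pass": "pytest -q",          # full suite, zero failures
    "linter is clean": "ruff check .",  # zero reported errors
    "build succeeds": "npm run build",  # exits successfully
    "bug is fixed": "pytest tests/regressions/test_bug_1234.py -q",  # re-runs the original failure path
}

def proof_for(claim: str) -> str:
    """Refuse to certify any claim that has no declared proving command."""
    command = CLAIM_PROOFS.get(claim.lower())
    if command is None:
        raise ValueError(f"no proving command declared for claim: {claim!r}")
    return command
```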
What counts as fresh verification evidence
Fresh means the command was run for this exact state of the work, not remembered from an earlier attempt. In practice, that means:
- not relying on old terminal output
- not assuming unchanged files imply unchanged results
- not inferring success from nearby signals
- not using partial verification for a broader claim
If you changed code after the last run, the old run is stale for completion purposes.
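One crude staleness heuristic, assuming you record a timestamp whenever the proving command succeeds, is to compare it against the newest source edit. The file pattern and helper below are illustrative only:

```python
import pathlib

def verification_is_stale(src_dir: str, last_verified_at: float) -> bool:
    """True if any source file changed after the last successful verification.

    last_verified_at is a Unix timestamp you would record yourself when the
    proving command passes; this heuristic is a sketch, not part of the skill.
    """
    latest_edit = max(
        (path.stat().st_mtime for path in pathlib.Path(src_dir).rglob("*.py")),
        default=0.0,
    )
    return latest_edit > last_verified_at
```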
What to do when you cannot run verification
Sometimes the environment blocks execution: missing dependencies, no credentials, unavailable services, unsupported OS, or time constraints. In that case, do not make the stronger claim.
Use evidence-based language instead:
- “I made the change, but I could not run npm test in this environment.”
- “The fix is implemented, but build verification remains unconfirmed.”
- “I verified formatting only; full integration tests were not run.”
That still counts as a successful use of verification-before-completion, because the skill is about truthful status reporting, not forced certainty.
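If you want that downgrade wording to be consistent, a tiny helper can template it. The phrasing is just one possible pattern, and blocked_status is a hypothetical name:

```python
def blocked_status(change: str, proving_command: str, blocker: str) -> str:
    """Report implementation status without implying verified success."""
    return (
        f"{change} is implemented, but `{proving_command}` could not be run "
        f"({blocker}); the result remains unconfirmed."
    )

print(blocked_status("The login fix", "npm test", "no Node runtime in this environment"))
```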
Practical prompt pattern for agents
A strong reusable prompt:
“Use the verification-before-completion skill before any success claim. For each claim, identify the proving command, run it fresh if possible, read the full output, and only state what the evidence confirms. If verification is blocked, report the limitation instead of implying success.”
This works well in agent instructions, PR assistants, and commit-generation flows.
Best workflow for Skill Validation use cases
When applying verification-before-completion to Skill Validation, treat the skill as the final validator, not the main worker. A good sequence is:
- use another skill to implement or debug
- switch to verification-before-completion
- verify the narrow claim you want to publish
- produce a summary with command and result evidence
That separation improves trust because implementation and validation are not blurred together.
verification-before-completion skill FAQ
Is verification-before-completion just a reminder to run tests?
No. The verification-before-completion skill is stricter than a reminder. It enforces a claim-to-proof mapping. The point is not merely “run something,” but “run the command that proves the exact statement you are about to make.”
Is this skill useful for beginners?
Yes, especially for beginners who are still learning what different checks actually prove. It teaches a strong habit: do not generalize beyond your evidence. That habit improves both technical accuracy and reviewer trust.
When should I not use verification-before-completion?
Do not use it as your only coding or debugging skill. It will not design architecture, locate root causes, or write a full test plan for you. It is a finishing checkpoint, best paired with implementation-oriented skills.
How is this better than an ordinary prompt?
An ordinary prompt often says “verify before answering,” but agents still drift into soft claims. The verification-before-completion skill is better when you want a consistent pre-completion gate with explicit consequences for unsupported assertions.
Does it require a specific stack or toolchain?
No. The skill is stack-agnostic because its logic is about evidence, not language or framework. You supply the proving command for your environment, whether that is pytest, npm test, go test, cargo test, or another verifier.
Can I use it when full verification is too expensive?
Yes, but then your claim must narrow accordingly. If you only ran a targeted test, say that targeted test passed. Do not upgrade that into “everything passes” unless you ran the broader proof.
How to Improve verification-before-completion skill
Give the claim before asking for validation
The biggest output-quality improvement is to state the exact sentence you are tempted to write. For example:
- weak: “validate this”
- strong: “I want to say: ‘the payment bug is fixed and tests pass’”
That helps the skill separate compound claims and assign proof to each part.
Name the best proving command explicitly
Do not make the skill guess your repository conventions if you already know them. Stronger input:
- “The canonical project check is make test.”
- “The bug is proven by pytest tests/payments/test_refund.py -q plus npm run build.”
This reduces false confidence based on incomplete checks.
Separate implementation status from verified status
A common failure mode is mixing “I changed the code” with “the issue is resolved.” Improve verification-before-completion usage by asking for a two-part response:
- what changed
- what was actually verified
That keeps summaries honest even when execution is blocked.
Avoid broad claims from narrow checks
If you ran one focused test, do not ask the skill to certify the whole repo. Better phrasing:
- “Can I say the targeted regression test now passes?”
- not “Can I say the system is fully fixed?”
This is one of the highest-value habits the skill encourages.
Ask for downgrade language when evidence is partial
A practical way to get more out of verification-before-completion in real work is to request fallback wording:
- “If the claim is too strong for the evidence, rewrite it to the strongest accurate version.”
That turns the skill into a communication-quality tool, not just a blocker.
Re-run after material changes
If you edit code after a successful verification run, use the verification-before-completion skill again. Fresh evidence is tied to the current state, not the previous draft. This matters most in fast agent loops where a “small final tweak” can invalidate earlier checks.
Use evidence in the final summary
For better trust, include the proof directly in the completion note:
- command run
- pass/fail outcome
- any limits or omissions
Example:
- “Verified with pytest tests/auth/test_login_bug.py -q: passed, 1 test, 0 failures.”
- “Did not verify full integration suite in this environment.”
That style makes the skill more valuable to reviewers and future you.
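If you generate completion notes automatically, the same three bullets can be assembled from recorded evidence. A minimal sketch, with illustrative names only:

```python
def completion_note(command: str, outcome: str, not_verified: list[str]) -> str:
    """Assemble a completion note that carries its own evidence."""
    lines = [f"Verified with {command}: {outcome}."]
    lines += [f"Did not verify {item}." for item in not_verified]
    return "\n".join(lines)

print(completion_note(
    "pytest tests/auth/test_login_bug.py -q",
    "passed, 1 test, 0 failures",
    ["full integration suite in this environment"],
))
```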
Watch for the most common failure pattern
The most common misuse of verification-before-completion for Skill Validation is claiming success from intent, code changes, or expectation rather than output. If you want better results, ask one final question before every completion message:
“What command output would make this statement true?”
If you cannot answer that clearly, the claim is probably not ready.
