
verification-before-completion

by obra

Enforce a strict rule that no work is declared done, fixed, or passing until you run the actual verification command, inspect the output, and base your claim on fresh evidence.

Added: Mar 27, 2026
Category: Test Automation
Install Command
npx skills add https://github.com/obra/superpowers --skill verification-before-completion
Overview

What is the verification-before-completion skill?

The verification-before-completion skill defines a strict workflow rule for developers and agents: never say work is done, tests pass, or a bug is fixed unless you have just run the relevant verification command and checked its output. Its core principle is:

Evidence before claims, always.

In practice, this means that before you write "tests are passing", "the build is green", or "the bug is fixed", you must:

  • Identify the command that proves the claim
  • Run it freshly (not relying on old runs or assumptions)
  • Read the output and exit status
  • Only then state the result, backed by that evidence
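These steps can be sketched as a small shell helper. The function name and message wording here are illustrative, not part of the skill's definition:

```shell
# verify_then_claim: run the verification command fresh, inspect its exit
# status, and only state the claim when the evidence supports it.
# Usage: verify_then_claim "All tests pass" pytest
verify_then_claim() {
  claim="$1"; shift
  if "$@"; then                                   # run the full command, fresh
    echo "VERIFIED: $claim ($* exited 0)"
  else
    code=$?                                       # read the actual exit status
    echo "NOT VERIFIED: $claim ($* exited $code)"
    return "$code"
  fi
}
```

For example, `verify_then_claim "build succeeds" npm run build` would only print the VERIFIED line after a fresh, successful build.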

Who is this skill for?

Use the verification-before-completion skill if you:

  • Work on codebases where failing tests, broken builds, or unverified fixes frequently slip through
  • Rely on agents or automated tools that might otherwise claim success without actually running checks
  • Want a disciplined, repeatable test and verification practice as part of your development workflow

It is especially relevant for:

  • Test automation: ensuring test suites are actually executed and interpreted correctly
  • Workflow automation: enforcing that completion steps always include verification commands
  • Teams practicing code review, continuous integration, and continuous delivery who want fewer surprises after a "done" status

What problem does verification-before-completion solve?

Without a guardrail, it is easy to:

  • Assume tests will pass because the change is "small"
  • Claim a bug is fixed after editing code, without re-running the failing scenario
  • Rely on a previous successful build instead of re-building after changes

The verification-before-completion skill defines what the repository calls an Iron Law:

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

By adopting this skill, you turn that law into a concrete workflow rule for yourself, your team, or your agents. This reduces:

  • False "green" claims in pull requests
  • Hidden regressions that were never tested
  • Miscommunication between developers, reviewers, and automation

When is this skill a good fit?

Choose verification-before-completion when:

  • You already have test, lint, or build commands available and want to be sure they are always run
  • You use agents or scripts to help with development tasks and need them to be strict about verification
  • You care about reliable status reporting more than shaving a few seconds off your workflow

It may be less useful if:

  • Your project has no meaningful automated checks yet (no tests, no lints, no build commands)
  • You are doing exploratory work where you are not yet ready to make "passing" or "fixed" claims
  • You are only using the repository as a conceptual reference, not as an enforced workflow

In those cases, you can still use the skill as a guide for designing future tests and checks.

How to Use

Installation and setup

To install the verification-before-completion skill via npx:

npx skills add https://github.com/obra/superpowers --skill verification-before-completion

After installation:

  1. Open the skills/verification-before-completion directory in the obra/superpowers repository.
  2. Start with SKILL.md to see the full rule and its rationale.
  3. Integrate the rule into your own project’s documentation, agent configuration, or development guidelines.

You do not need to copy the repository structure exactly. Instead, use it as a reference for how to describe and enforce the rule in your environment.

Core workflow: the Gate Function

The skill defines a Gate Function that must run before any completion claim. In your day-to-day work, apply it like this:

BEFORE claiming any status or expressing satisfaction:

1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
   - If NO: State actual status with evidence
   - If YES: State claim WITH evidence
5. ONLY THEN: Make the claim

Skip any step and you are no longer following verification-before-completion.
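One way to mechanize the Gate Function is a small wrapper that runs the command, surfaces the output and exit code, and refuses to claim otherwise. This is a sketch under those assumptions, not the skill's canonical implementation:

```shell
# gate: RUN the full verification command fresh, READ its output and exit
# code, VERIFY the result, and ONLY THEN print the claim.
gate() {
  claim="$1"; shift
  output=$("$@" 2>&1); code=$?          # RUN: fresh, complete, output captured
  printf '%s\n' "$output" | tail -n 5   # READ: show the tail of the output
  echo "exit code: $code"
  if [ "$code" -eq 0 ]; then            # VERIFY: does the evidence confirm it?
    echo "CLAIM: $claim"                # ONLY THEN: make the claim
  else
    echo "NO CLAIM: actual status is failure (exit $code)"
    return "$code"
  fi
}
```

Calling `gate "tests pass" npm test` then either prints the claim with its evidence or reports the real failure status.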

Examples of typical commands:

  • Tests: npm test, pytest, go test ./..., mvn test
  • Lint: eslint ., flake8, golangci-lint run
  • Build: npm run build, make, cargo build --release
  • Targeted bug verification: the specific script, test, or manual check that reproduces the original issue

Example: using the skill in a development workflow

Scenario: You have updated code and want to claim "All tests pass".

Apply verification-before-completion:

  1. IDENTIFY the command: for example, pytest.
  2. RUN it after your changes:
    pytest
    
  3. READ the output and verify exit code 0.
  4. VERIFY:
    • If tests failed, do not claim success. Instead, report something like: "Tests are failing: 3 tests failing in test_user_flow.py. See pytest output."
    • If tests passed, you may claim: "All tests pass (pytest, exit code 0)."
  5. ONLY THEN mark the task as complete, push commits, or open a pull request.

You can apply this pattern to any status claim: builds, linters, formatting, or bug fixes.

Integrating with agents and automation

If you are using agents or scripts that assist with development:

  • Configure them so that any claim about tests, builds, or fixes is preceded by a concrete command run plus a summary of output.
  • Require the agent to reference the command it ran and the result, for example:
    • "Ran npm test: exit code 0, 0 failing tests."
    • "Ran npm run build: exit code 1, build failed. Not claiming completion."

In reviews or CI pipelines, you can treat any claim without evidence as incomplete according to verification-before-completion.
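An agent or script could generate those evidence lines with a helper like the following. The function name and exact phrasing are illustrative:

```shell
# evidence_line: run a command quietly and print the evidence sentence an
# agent should attach to any status claim about that command.
evidence_line() {
  "$@" >/dev/null 2>&1
  code=$?
  if [ "$code" -eq 0 ]; then
    echo "Ran $*: exit code 0. Claiming completion."
  else
    echo "Ran $*: exit code $code. Not claiming completion."
  fi
}
```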

Adapting to your tools and environment

The repository does not prescribe a specific language or framework. To adapt the skill:

  • Map each common claim to a single, unambiguous command that proves it.
  • Document those mappings in your own repo (for example in CONTRIBUTING.md or a WORKFLOW.md).
  • Encourage or require contributors and agents to always:
    • Run those commands before saying "done"
    • Paste or summarize relevant output when making claims

Examples of claim-to-command mappings:

  • "Backend tests pass" → pytest backend/tests
  • "Frontend builds successfully" → npm run build in frontend/
  • "Go module is clean" → go test ./... and golangci-lint run
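Such a mapping can live in a small script so there is exactly one, unambiguous command per claim. The claim names and commands below are examples; replace them with your own:

```shell
# prove: map a named claim to the single command that proves it, then run
# that command fresh. Unmapped claims are rejected rather than guessed at.
prove() {
  case "$1" in
    backend-tests)  set -- pytest backend/tests ;;
    frontend-build) set -- npm --prefix frontend run build ;;
    go-clean)       set -- sh -c 'go test ./... && golangci-lint run' ;;
    *) echo "no command mapped for claim: $1" >&2; return 2 ;;
  esac
  echo "running: $*"
  "$@"
}
```

Contributors then run, for example, `prove backend-tests` before claiming "Backend tests pass", and the script's exit code is the evidence.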

FAQ

What is the main rule of verification-before-completion?

The main rule is the "Iron Law":

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

If you have not just run the relevant verification command and inspected its output, you cannot honestly claim success.

What counts as "verification evidence"?

Verification evidence is the fresh output from a command that directly tests your claim, such as:

  • A test suite run that shows 0 failures and exit code 0
  • A linter run that reports no errors and a successful exit status
  • A build command that completes successfully (exit 0)
  • A reproduction script or test for a bug that now passes

Old results, assumptions, or "it should work" do not count as evidence under this skill.

Can I rely on previous test runs if nothing changed?

Under verification-before-completion, the default is no. The skill emphasizes fresh verification before each new claim of completion. If you want to rely on previous runs, you should be explicit and careful about the conditions under which that is acceptable, and recognize that it weakens the guarantee.

Does this skill require specific tools or languages?

No. The verification-before-completion skill is tool-agnostic. It works with any stack where you can:

  • Define commands that verify behavior (tests, linters, builds, scripts)
  • Run them on demand
  • Interpret their exit codes and outputs

You simply fill in the commands relevant to your project and follow the Gate Function steps.

How is this different from just "running tests" sometimes?

The difference is discipline and consistency:

  • You run the verification command every time before claiming completion.
  • You always read the output instead of assuming success.
  • You treat any claim without evidence as invalid.

This turns test and build runs into a formal gate, not an optional extra.

Is verification-before-completion suitable for manual testing?

Yes, as long as you can define a clear procedure that acts like a "command" for the claim. For example:

  • Document a step-by-step manual test that reproduces a bug
  • Run it after your change
  • Record the outcome as the evidence

However, the skill works best when verification is automated via scripts or test frameworks, so results are repeatable and easy to re-run.
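If you do rely on a manual procedure, you can still turn the outcome into recorded, timestamped evidence. A minimal sketch (the check name and log format are invented for illustration):

```shell
# record_manual_check: record the outcome of a documented manual procedure
# as a timestamped evidence line; fail unless the result was "pass".
record_manual_check() {
  name="$1"; result="$2"                        # result: pass or fail
  stamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  echo "$stamp manual-check '$name': $result"
  [ "$result" = "pass" ]
}
```

A call such as `record_manual_check "login flow" pass` emits the evidence line and exits 0; a "fail" result exits nonzero, blocking any completion claim.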

Where can I see the original skill definition?

The authoritative description for the verification-before-completion skill lives in the SKILL.md file in the obra/superpowers repository:

  • Repository: https://github.com/obra/superpowers
  • Skill file: skills/verification-before-completion/SKILL.md

Refer to that file for the exact wording of the principle, the Iron Law, the Gate Function, and examples of common failures to avoid.
