
requesting-code-review

by obra

requesting-code-review is a lightweight workflow for dispatching the superpowers:code-reviewer subagent with a clean git diff, requirements, and change summary so reviews happen at the right time and produce actionable, severity-ranked feedback before merge.

Stars: 121.8k
Added: Mar 29, 2026
Category: Code Review
Install Command
npx skills add obra/superpowers --skill requesting-code-review
Curation Score

This skill scores 78/100, which means it is a solid directory listing candidate for users who want a repeatable code-review handoff rather than an ad hoc prompt. The repository gives enough real workflow detail for an agent to trigger and use it with reasonable confidence, though adopters should expect some repo-specific assumptions and limited setup guidance.

Strengths
  • Strong triggerability: the description and "When to Request Review" section clearly define when agents should invoke it.
  • Operationally useful workflow: it tells the agent to gather SHAs, dispatch a reviewer with a template, and act on feedback by severity.
  • Good leverage over a generic prompt: `code-reviewer.md` provides a structured review checklist and output format tied to a git diff range.
Cautions
  • It depends on a separate `superpowers:code-reviewer` subagent workflow and assumes the Task tool exists, which may limit portability outside this repo's conventions.
  • Setup guidance is thin: there is no install command and the skill gives little help for cases like choosing the right base SHA or reviewing non-commit-based work.

Overview of requesting-code-review skill

The requesting-code-review skill is a lightweight workflow for triggering a focused code review at the right time, with the right diff, and with enough implementation context for a reviewer agent to give useful feedback. Instead of asking for a vague “please review my code,” it pushes you to pass a commit range, a summary of what changed, and the intended requirements.

What the requesting-code-review skill actually does

At its core, requesting-code-review tells you to dispatch the superpowers:code-reviewer subagent using a prepared template in code-reviewer.md. The differentiator is not fancy automation; it is review framing. The reviewer sees the work product and plan, not your whole session history, which keeps the review narrower and easier to act on.

Who should install requesting-code-review

This skill is best for developers and AI-agent users who:

  • work in commit-based workflows
  • ship features in steps
  • want a repeatable “review before proceeding” checkpoint
  • use subagents and need a cleaner handoff than a generic prompt

It is especially useful if you tend to ask for review too late, after multiple tasks have piled up into one large diff.

The real job-to-be-done

Users do not install requesting-code-review just to “get a review.” They install it to reduce avoidable rework:

  • catch issues before merge
  • validate against the original plan
  • get severity-ranked feedback
  • preserve main-task context while a reviewer agent inspects the code separately

Why this skill is more useful than a plain review prompt

The requesting-code-review skill adds practical structure that many ad hoc prompts miss:

  • review timing guidance: after each task, after major features, before merge
  • explicit BASE_SHA and HEAD_SHA inputs
  • a review template with code quality, architecture, testing, requirements, and production-readiness checks
  • severity buckets that make follow-up easier

That makes the output more actionable than “scan my latest changes.”

What matters most before adopting it

The biggest adoption question is fit: this skill works best when your work is represented as a clean git range and when you can briefly describe the intended behavior. If your branch is messy, your plan is unclear, or your changes are mixed with unrelated edits, the review quality will drop.

Important limitation to know up front

requesting-code-review is not a full review system by itself. It does not contain scripts, enforcement rules, or repository-specific checkers. It is a disciplined prompting and handoff pattern. That is valuable, but you should expect quality to depend heavily on the commit range and the clarity of your requirements.

How to Use requesting-code-review skill

Install requesting-code-review in your skills setup

If you are using the Skills CLI pattern used across the repository, install it with:

npx skills add https://github.com/obra/superpowers --skill requesting-code-review

If your environment already has the obra/superpowers collection available, just enable or reference the requesting-code-review skill from that pack.

Read these files first

For a fast evaluation, start with:

  1. skills/requesting-code-review/SKILL.md
  2. skills/requesting-code-review/code-reviewer.md

SKILL.md explains when to invoke review. code-reviewer.md is the more important file if you care about output quality, because it shows exactly what the reviewer is instructed to evaluate.

Understand the intended trigger points

The skill is designed to be used:

  • after each task in subagent-driven development
  • after a major feature
  • before merging to main

Optional but high-value moments include:

  • when you are stuck and want a fresh perspective
  • before a risky refactor
  • after fixing a complex bug

If you only use it at the very end of a large branch, you lose much of its benefit.

Gather the minimum inputs before calling it

The skill works best when you provide:

  • what was implemented
  • the plan or requirements
  • BASE_SHA
  • HEAD_SHA
  • a brief description of the change

Typical git commands:

BASE_SHA=$(git rev-parse HEAD~1)
HEAD_SHA=$(git rev-parse HEAD)

For feature branches, origin/main may be a better base than HEAD~1 if you want a fuller review window.
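For a feature branch, a base found with `git merge-base` usually matches the intended review window better than `HEAD~1`. A minimal sketch of that idea; it builds a throwaway repo so the commands are runnable anywhere, and the branch names and commit messages are illustrative:

```shell
# Throwaway repo so the commands run anywhere; names are illustrative.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
gc() { git -c user.email=a@b.c -c user.name=tester commit -q --allow-empty -m "$1"; }
gc "main: initial commit"
git checkout -q -b feature
gc "feature: step 1"
gc "feature: step 2"

# merge-base finds where the feature diverged from main, so the range
# covers the whole feature rather than only the last commit.
BASE_SHA=$(git merge-base main HEAD)
HEAD_SHA=$(git rev-parse HEAD)
echo "review range: $BASE_SHA..$HEAD_SHA"
```

In a real checkout the second branch would be `origin/main`, and `git merge-base origin/main HEAD` gives the fork point directly.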

Use a clean diff range, not a vague “latest work” request

This is the highest-leverage part of the requesting-code-review usage pattern. A review tied to BASE_SHA..HEAD_SHA is far better than asking an agent to infer what changed from your working tree or chat history.

Good:

  • “Review commits from feature start to current head against the signup flow requirements.”

Weak:

  • “Can you review my recent auth changes?”

The stronger version narrows scope and reduces reviewer guesswork.
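One way to keep the range honest is to inspect it before dispatching the reviewer. A hedged sketch, again using a throwaway repo so it runs as-is:

```shell
# Throwaway repo with two commits; only the last one lands in the range.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
gc() { git -c user.email=a@b.c -c user.name=tester commit -q -m "$1"; }
echo "v1" > app.txt; git add app.txt; gc "feat: add app"
echo "v2" >> app.txt; git add app.txt; gc "feat: extend app"

BASE_SHA=$(git rev-parse HEAD~1)
HEAD_SHA=$(git rev-parse HEAD)

# Show exactly what the reviewer will see; unrelated commits appearing
# here are a signal to narrow the range before dispatching.
git log --oneline "$BASE_SHA..$HEAD_SHA"
git diff --stat "$BASE_SHA" "$HEAD_SHA"
```

If `git log --oneline` over the range lists commits you did not mean to include, fix the base before asking for review.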

Turn a rough goal into a strong review request

A rough request like this is too thin:

Please review my new feature.

A stronger request based on the skill looks like this:

Review the password reset implementation for production readiness.

What was implemented:
- Added reset token generation and validation
- Added reset email endpoint
- Added UI flow for requesting and completing reset

Plan/requirements:
- Tokens expire after 30 minutes
- Single-use tokens only
- No user enumeration from the request endpoint
- Existing login flow must remain unchanged

Base SHA: abc1234
Head SHA: def5678
Description:
Task 2 of auth hardening. Main changes are in API handlers, email service, and reset form.

This gives the reviewer enough context to judge correctness, not just style.

Dispatch the reviewer subagent the way the skill expects

The repository guidance says to use the Task tool with the superpowers:code-reviewer type and fill the template in code-reviewer.md. That template asks the reviewer to:

  • compare implementation vs plan
  • inspect the git diff
  • check quality, architecture, testing, and production readiness
  • return findings by severity

If your agent platform supports subagents, keep the review isolated instead of mixing it into the same working conversation.

What the reviewer template is optimized to catch

The built-in checklist is strongest at surfacing:

  • missing requirements
  • obvious production-readiness gaps
  • test coverage problems
  • architectural concerns
  • dangerous edge cases
  • backward-compatibility or migration omissions

It is less specialized for domain-specific compliance, repo-specific conventions, or deep runtime verification unless you add those explicitly.

Suggested workflow for real projects

A practical requesting-code-review guide looks like this:

  1. finish one bounded task
  2. identify the exact diff range
  3. summarize intent and acceptance criteria
  4. dispatch the reviewer with the template
  5. fix critical and important issues
  6. re-run review if the fixes are substantial
  7. continue development or merge

This skill is most effective as a checkpoint between implementation steps, not just as a final gate.
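Steps 2 and 3 above can be folded into a small script that assembles the request before dispatch. This helper is a hypothetical convenience, not part of the skill; the variable names mirror the placeholders in code-reviewer.md:

```shell
# Hypothetical helper: variable names mirror the template placeholders
# in code-reviewer.md, but this script is not part of the skill itself.
set -eu
WHAT_WAS_IMPLEMENTED="Reset token generation, validation, and email endpoint"
PLAN_OR_REQUIREMENTS="Tokens expire after 30 minutes; single-use only; no user enumeration"
BASE_SHA="abc1234"
HEAD_SHA="def5678"
DESCRIPTION="Task 2 of auth hardening; changes in API handlers and email service"

# Assemble the review request the reviewer subagent will receive.
request=$(cat <<EOF
Review this implementation for production readiness.

What was implemented: $WHAT_WAS_IMPLEMENTED
Plan/requirements: $PLAN_OR_REQUIREMENTS
Base SHA: $BASE_SHA
Head SHA: $HEAD_SHA
Description: $DESCRIPTION
EOF
)
echo "$request"
```

In practice you would compute the two SHAs with `git rev-parse` rather than hard-coding them.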

Tips that materially improve output quality

To get better review output:

  • use a diff range that contains one logical change
  • include acceptance criteria, not just a feature name
  • mention risky areas like migrations, auth, concurrency, caching, or API contracts
  • note whether tests were added and what types
  • say if breaking changes are expected or forbidden

These details help the reviewer distinguish intentional tradeoffs from accidental omissions.

Common misuse that lowers value

Installing requesting-code-review makes less sense if your team routinely:

  • commits many unrelated changes into one range
  • lacks written requirements
  • uses no meaningful git boundaries
  • expects the skill to replace human approval or CI

In those cases, clean up the workflow first or expect noisier reviews.

requesting-code-review skill FAQ

Is requesting-code-review good for beginners?

Yes, if you already understand basic git concepts like commits and SHAs. The skill is simple, but it assumes you can define what changed and what it was supposed to do. Beginners who skip that context will still get feedback, just less reliable feedback.

Does this skill review uncommitted changes?

Not by design. The workflow is built around BASE_SHA and HEAD_SHA, so it is strongest on committed work. You can adapt it for unstaged or uncommitted changes, but that moves away from the skill’s core structure and usually makes the review less reproducible.

How is requesting-code-review different from asking an AI to review my code?

A normal prompt often produces a generic review because the model has to infer scope, intent, and acceptance criteria. requesting-code-review improves this by requiring:

  • an explicit diff
  • a clear implementation summary
  • the original plan or requirements
  • a severity-based output format

The result is usually easier to trust and easier to act on.

When should I not use requesting-code-review?

Skip it when:

  • your change is too incomplete to evaluate
  • the diff mixes several unrelated features
  • you do not yet know the expected behavior
  • you need repo-specific static checks more than judgment-based review

It is also a poor fit if your team never works from git commit ranges.

Does it replace human code review?

No. The best use is as a pre-review or between-step quality gate. It can catch issues early and make later human review smoother, but it does not replace domain expertise, team conventions, or organizational approval requirements.

Is requesting-code-review only for large features?

No. In fact, smaller diffs are where it shines. The skill explicitly encourages early and frequent review, which is often more effective than waiting for one large final pass.

What ecosystem fit should I expect?

This skill fits best inside the obra/superpowers workflow, especially if you already use subagents. It is lighter than a full review framework and easier to adopt than building custom review automation, but that also means fewer guardrails.

How to Improve requesting-code-review skill

Give the reviewer better requirements, not just better code

The most common failure mode is weak plan context. If you only say “implemented notifications,” the reviewer cannot tell whether a missing retry path is a bug or out of scope. Add concrete expectations:

  • trigger conditions
  • error behavior
  • backward-compatibility expectations
  • performance or security requirements

Better requirements produce better review judgments.

Use the smallest meaningful review slice

The requesting-code-review skill performs best on a single task or tightly related change set. If the diff includes schema work, API changes, UI updates, and unrelated cleanup, findings become broad and less actionable. Split work into reviewable units whenever possible.

Choose the right base commit

A bad BASE_SHA causes misleading feedback. If you use HEAD~1 but the feature spans six commits, the reviewer sees too little. If you use a very old base, the reviewer sees too much noise. Pick the base that matches the logical unit of work you want judged.
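A quick sanity check on a candidate base is to count the commits it would put in front of the reviewer. A sketch, assuming a three-commit feature on top of one setup commit (the repo is a throwaway so the commands run anywhere):

```shell
# Throwaway repo: one setup commit plus a three-commit feature.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
gc() { git -c user.email=a@b.c -c user.name=tester commit -q --allow-empty -m "$1"; }
gc "initial"
gc "feature: step 1"
gc "feature: step 2"
gc "feature: step 3"

BASE_SHA=$(git rev-parse HEAD~3)   # candidate base for the feature
count=$(git rev-list --count "$BASE_SHA..HEAD")
# A count of 1 for a multi-commit feature means the base is too recent;
# a very large count means it is too old.
echo "commits in review range: $count"
```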

Replace placeholders with specifics the reviewer can test mentally

The included template uses placeholders such as:

  • {WHAT_WAS_IMPLEMENTED}
  • {PLAN_OR_REQUIREMENTS}
  • {BASE_SHA}
  • {HEAD_SHA}
  • {DESCRIPTION}

Do not fill those with one-line summaries if the change has risk. State the actual behavior expected. For example, “prevents user enumeration and invalidates token after first successful reset” is much stronger than “added password reset.”

Tell the reviewer where the risk is

If you know the risky surfaces, say so:

  • “Please pay special attention to race conditions around token reuse.”
  • “Check backward compatibility for existing API consumers.”
  • “Focus on whether tests cover the error path and expiry boundary.”

This narrows attention and increases the odds of useful findings.

Strengthen the review after the first pass

After the initial output:

  1. fix the clearly correct critical issues
  2. challenge findings that seem wrong
  3. clarify missing requirements
  4. run a second review on the updated diff if changes are substantial

The skill itself encourages pushback when the reviewer is wrong. That is a good sign: it is meant to support judgment, not replace it.

Add repo-specific review criteria when needed

The stock code-reviewer.md covers common review dimensions well, but many teams need more. Improve requesting-code-review by adding project-specific checks such as:

  • migration rollout rules
  • observability requirements
  • accessibility expectations
  • security review points
  • language or framework conventions

This is the biggest upgrade if you want less generic output.
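If you maintain your own fork or copy of the skill, one low-effort way to add these checks is appending a repo-specific section to your local code-reviewer.md. A sketch using a stand-in stub file, since the real template's contents differ:

```shell
# Stand-in stub for code-reviewer.md; the real file's contents differ.
set -eu
workdir=$(mktemp -d)
reviewer="$workdir/code-reviewer.md"
printf '## Review checklist\n- Code quality\n- Testing\n' > "$reviewer"

# Append a repo-specific section to the local copy of the template.
cat >> "$reviewer" <<'EOF'

## Repo-specific checks
- Migrations ship with a tested rollback plan
- New endpoints emit request metrics and structured logs
- UI changes meet the project's accessibility requirements
EOF

cat "$reviewer"
```

Keeping the additions in a separate section makes it easy to diff your copy against upstream when the skill updates.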

Watch for these recurring failure modes

Common quality drops usually come from:

  • missing or vague requirements
  • noisy commit ranges
  • no mention of expected nonfunctional behavior
  • asking for review after too many tasks have accumulated
  • treating minor suggestions as mandatory while missing critical design issues

If the output feels shallow, check the inputs first.

Improve the output by asking for decisions, not only defects

A stronger requesting-code-review usage pattern is to ask the reviewer to judge tradeoffs too. Example:

  • “Flag any unnecessary complexity.”
  • “Call out if this should be split before merge.”
  • “Assess whether current tests justify production readiness.”

That pushes the review beyond lint-like comments toward release-quality evaluation.

Practical way to evolve the skill in your own setup

If you adopt this skill seriously, customize three things first:

  1. your preferred base-commit selection rule
  2. a standard format for requirements and acceptance criteria
  3. extra checklist items for your stack and release process

Those additions preserve the simplicity of requesting-code-review while making it much more useful in day-to-day delivery.
