# requesting-code-review
by obra

Use when completing tasks, implementing major features, or before merging to verify work meets requirements.
## Overview
### What this skill does
The requesting-code-review skill defines a clear, repeatable workflow for asking an AI code review subagent to inspect your changes before they land. It is designed for Git-based projects and helps you:
- Decide when to request a review (after tasks, features, and before merge)
- Package precise context for the reviewer using Git SHAs and requirements
- Keep the reviewer focused on the code diff, not your private session history
- Classify feedback by severity so you know what to fix now vs. later
At its core, `requesting-code-review` is about *review early, review often*, so issues are caught before they cascade through your codebase.
### Who it is for
This skill is a good fit if you:
- Work with Git and regularly create feature branches or PRs
- Want a structured, repeatable way to request AI-assisted code review
- Use subagent-driven development (e.g., a dedicated `code-reviewer` agent)
- Care about production readiness: correctness, architecture, testing, and spec alignment
It is particularly useful for:
- Individual developers who want a reliable safety net
- Small teams without dedicated code reviewers
- Projects where commit history and diffs are the main source of truth
### When it is not a good fit
`requesting-code-review` may not be ideal if:
- You are not using Git or do not have access to commit SHAs
- You want general code generation or refactoring help, not review of specific changes
- You cannot provide a clear plan, spec, or requirements for the changes being reviewed
In those cases, you may want a more general coding or planning skill rather than a review-focused workflow.
### Problems it solves
Without a consistent review process, developers often:
- Forget to request review at key milestones
- Share too little context (or way too much session history)
- Receive feedback that is unstructured and hard to act on
The requesting-code-review skill solves these problems by:
- Defining mandatory and optional review checkpoints
- Standardizing the inputs to review (Git range, requirements, summary)
- Pairing with a dedicated `code-reviewer` subagent that returns severity-ranked feedback
## How to Use
### Installation
To install the requesting-code-review skill from the obra/superpowers repository, use the Skills CLI:
```bash
npx skills add https://github.com/obra/superpowers --skill requesting-code-review
```
This pulls in the skill definition plus its supporting files, including the `code-reviewer` subagent template.
After installation, open the skill directory and skim these files first:
- `SKILL.md` – high-level description and workflow steps for requesting code review
- `code-reviewer.md` – the agent prompt/template that actually performs the code review
### Core workflow at a glance
The `requesting-code-review` workflow has three main phases:
1. **Decide when to review**
- After each task in a subagent-driven workflow
- After you complete a major feature
- Before merging to your main branch
2. **Prepare the review context**
- Capture the Git commit range with `BASE_SHA` and `HEAD_SHA`
- Summarize what was implemented and what it should do
- Fill the placeholders defined in `code-reviewer.md`
3. **Run the review and act on feedback**
- Dispatch the `superpowers:code-reviewer` subagent
- Fix Critical issues immediately, Important issues before proceeding, and log Minor issues
### Step 1: Identify the right review moment
According to `SKILL.md`, use `requesting-code-review`:
**Mandatory checkpoints:**
- **After each task** in subagent-driven development
- **After completing a major feature**
- **Before merging** to your main branch (e.g., `main`, `master`)
**Optional but valuable checkpoints:**
- When you are **stuck** and need a fresh perspective
- **Before refactoring**, as a baseline check of current behavior
- After fixing a **complex bug**, to ensure no regressions or new issues
Build the habit of triggering this skill at these moments so reviews become automatic instead of an afterthought.
### Step 2: Capture the Git commit range
The reviewer needs a clean, bounded diff. Use Git SHAs to specify the range:
```bash
BASE_SHA=$(git rev-parse HEAD~1) # or origin/main
HEAD_SHA=$(git rev-parse HEAD)
```

`BASE_SHA` should point to the commit that represents your starting point, such as the previous commit or the tip of `origin/main`. `HEAD_SHA` should be the commit that includes your latest work.

You can adjust `HEAD~1` or choose a different base to match your branch strategy, as long as the range accurately captures the changes you want reviewed.
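Choosing the base this way can be sketched end to end. The snippet below is purely illustrative: it builds a throwaway repo so it is self-contained, and `fork_point` stands in for `origin/main`; only the last three commands matter in a real project. Using `git merge-base` makes the range cover the whole branch rather than just `HEAD~1`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo purely for demonstration.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "mainline"
fork_point=$(git rev-parse HEAD)   # stand-in for origin/main
git checkout -q -b feature
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "task 1"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "task 2"

# merge-base finds the fork point, so the review range covers every
# commit on the branch, not only the most recent one.
BASE_SHA=$(git merge-base "$fork_point" HEAD)
HEAD_SHA=$(git rev-parse HEAD)
echo "review range: ${BASE_SHA}..${HEAD_SHA}"
```

On a real feature branch, `BASE_SHA=$(git merge-base origin/main HEAD)` plays the same role.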
### Step 3: Prepare the review request template
The `code-reviewer.md` file defines how to talk to the Code Review Agent. It uses placeholders that you must fill in before dispatching the subagent.
Key placeholders:
- `{WHAT_WAS_IMPLEMENTED}` – A concise description of what you just built or changed
- `{PLAN_OR_REQUIREMENTS}` – The spec, ticket, or business requirements the implementation should satisfy
- `{BASE_SHA}` – The starting commit SHA for the diff
- `{HEAD_SHA}` – The ending commit SHA for the diff
- `{DESCRIPTION}` – A short summary of the change set (e.g., "Add verification function and tests for user signups")
- `{PLAN_REFERENCE}` – A reference back to your plan or requirements document
In your own tooling, populate these placeholders before you dispatch the `superpowers:code-reviewer` subagent with the Task tool.
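One possible way to render the template is plain `sed` substitution. Everything below is a hypothetical stand-in: the one-line template and the example values are not the real `code-reviewer.md` content, they just show the mechanics:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical miniature template; the real one lives in code-reviewer.md.
template='Review {DESCRIPTION} between {BASE_SHA} and {HEAD_SHA}.'

# Example values you would normally capture from Git and your plan.
BASE_SHA=abc1234
HEAD_SHA=def5678
DESCRIPTION="Add verification function and tests"

# Replace each placeholder with its real value before dispatch.
rendered=$(printf '%s' "$template" \
  | sed -e "s/{BASE_SHA}/${BASE_SHA}/" \
        -e "s/{HEAD_SHA}/${HEAD_SHA}/" \
        -e "s/{DESCRIPTION}/${DESCRIPTION}/")
echo "$rendered"
```

The same substitution applied to the full `code-reviewer.md` file (e.g., `sed ... code-reviewer.md`) produces the prompt you hand to the subagent.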
### Step 4: Dispatch the code-reviewer subagent
With your Git SHAs and template filled out:
1. Use your orchestration environment's Task tool (as described in the superpowers framework) with type `superpowers:code-reviewer`.
2. Pass the rendered `code-reviewer.md` content, with placeholders replaced by your real values.
3. Ensure the agent can access the Git diff for the range:

   ```bash
   git diff --stat {BASE_SHA}..{HEAD_SHA}
   git diff {BASE_SHA}..{HEAD_SHA}
   ```

   The template specifies these commands so the reviewer focuses strictly on the changes between `BASE_SHA` and `HEAD_SHA`.
The design of this skill ensures the review agent sees only the work product (commits and diff), not your broader session history or unrelated context.
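A small sanity check before dispatching can confirm the range is bounded and nonempty. The check itself is not part of the skill; this sketch builds a throwaway repo so it is self-contained, and only the last three commands apply to a real project:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo purely for demonstration.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"
BASE_SHA=$(git rev-parse HEAD)
echo "verify email tokens" > notes.txt
git add notes.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "task work"
HEAD_SHA=$(git rev-parse HEAD)

# Sanity-check the range before dispatching: a bounded, nonempty diff is
# exactly what the reviewer should see, and nothing else.
commits=$(git rev-list --count "${BASE_SHA}..${HEAD_SHA}")
echo "commits in range: $commits"
git diff --stat "${BASE_SHA}..${HEAD_SHA}"
```

If `commits` is zero, the SHAs are wrong or the work has not been committed yet, and dispatching the reviewer would waste a round trip.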
### Step 5: Interpret and act on feedback
The `code-reviewer.md` template instructs the agent to:
- Assess code quality (separation of concerns, error handling, DRY, edge cases)
- Evaluate architecture (design, scalability, performance, security)
- Check testing (coverage, integration points, test quality)
- Verify requirements (spec alignment, no scope creep, documented breaking changes)
- Judge production readiness (migrations, backward compatibility, documentation)
Feedback is categorized into:
- **Critical (Must Fix)** – Bugs, security issues, data loss risks, broken functionality
- **Important (Should Fix)** – Architecture problems, missing features, poor error handling, test gaps
- **Minor (Nice to Have)** – Style, optimizations, documentation improvements
The corresponding workflow from `SKILL.md` is:
- Fix Critical issues immediately before continuing
- Fix Important issues before proceeding with further work or merging
- Note Minor issues for later cleanup or follow-up tasks
- Push back, with reasoning, if the reviewer is wrong or lacks context
By following this triage, you turn AI feedback into a concrete action list instead of a vague set of suggestions.
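Severity-prefixed feedback lines make this triage scriptable. The reviewer's actual output format may differ, so the feedback text below is only an illustrative sketch of splitting must-fix items from follow-up work:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical reviewer output; real output may be formatted differently.
feedback='Critical: token not invalidated after verification
Important: no test covers an expired link
Minor: rename sendMail to send_verification_mail'

# Must-fix items block further work; everything else becomes follow-up.
must_fix=$(grep '^Critical:' <<<"$feedback")
follow_up=$(grep -c -E '^(Important|Minor):' <<<"$feedback")

echo "Fix now: $must_fix"
echo "Queued for later: $follow_up item(s)"
```

Piping the `Important:`/`Minor:` lines into your issue tracker turns the "note Minor issues for later" step into a concrete backlog entry.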
### Example usage pattern
A typical use of requesting-code-review might look like this:
1. You complete "Task 2: Add verification function" in your feature branch.
2. You capture SHAs:

   ```bash
   BASE_SHA=$(git rev-parse origin/main)
   HEAD_SHA=$(git rev-parse HEAD)
   ```

3. You fill the `code-reviewer.md` placeholders with:
   - `{WHAT_WAS_IMPLEMENTED}` = "Verification function for user email flow"
   - `{PLAN_OR_REQUIREMENTS}` = Link or summary of the ticket/requirements
   - `{BASE_SHA}` and `{HEAD_SHA}` = The values from Git
   - `{DESCRIPTION}` = "Implement email verification and add tests for edge cases"
4. You dispatch the `superpowers:code-reviewer` subagent using your Task tool.
5. You receive structured feedback grouped into Critical, Important, and Minor.
6. You address Critical and Important issues, then optionally run the process again before merging.
## FAQ
### Is requesting-code-review only for GitHub repositories?
No. The `requesting-code-review` skill is Git-based, not GitHub-specific. It relies on commit SHAs and `git diff` commands, so it works with any Git remote (GitHub, GitLab, Bitbucket, or self-hosted) as long as you can provide `BASE_SHA` and `HEAD_SHA`.
### Do I need to share my entire development session history?
No. A key design principle of `requesting-code-review` is that the reviewer receives only tightly scoped context: what was implemented, the requirements, and the Git diff. Your general session history and internal thought process remain private, while the reviewer focuses on the actual code changes.
### When should I trigger requesting-code-review in my workflow?
Use `requesting-code-review`:
- After each task in subagent-driven development
- After completing a major feature
- Before merging to your main branch
You can also trigger it when you are stuck, before large refactors, or after complex bug fixes, to catch hidden risks early.
### How does this integrate with my existing pull request reviews?
`requesting-code-review` complements, rather than replaces, human PR reviews. You can:
- Run the skill before opening a PR to catch obvious issues
- Use it alongside human reviewers to improve coverage and consistency
- Apply its structured feedback categories (Critical/Important/Minor) to your PR comments
Because the skill is built around Git ranges, it fits naturally into PR-based workflows.
### Can I customize the code-reviewer agent behavior?
Yes. The `code-reviewer.md` file is a template you can adapt:
- Adjust the checklists for code quality, architecture, testing, or security
- Add project-specific concerns (e.g., domain rules, performance budgets)
- Refine the output format if you want different severity levels or additional sections
Just keep the core structure (clear task, Git range, and severity categories) so the review remains focused and actionable.
### What if the reviewer suggests something incorrect?
The skill explicitly encourages you to push back with reasoning when the reviewer is wrong or missing context. Treat the review as a strong suggestion, not an unquestionable verdict. Clarify constraints, explain trade-offs, or update your plan so future reviews are better aligned.
### Does requesting-code-review generate tests or code changes for me?
No. This skill is about review, not generation. It helps you:
- Request targeted review of specific changes
- Receive structured feedback on quality, architecture, and testing
You remain responsible for implementing fixes and writing tests, although you can combine this skill with other coding or test-generation skills in your toolchain.
### How do I get started quickly?
1. Install the skill:

   ```bash
   npx skills add https://github.com/obra/superpowers --skill requesting-code-review
   ```

2. Read `SKILL.md` to understand the review checkpoints.
3. Review `code-reviewer.md` to see the agent's checklist and output format.
4. Run your next task or feature, capture `BASE_SHA` and `HEAD_SHA`, and dispatch the `superpowers:code-reviewer` subagent.
From there, refine the template and workflow so `requesting-code-review` matches your team's habits and quality bar.
