receiving-code-review
by obra

A focused skill for handling GitHub code review feedback with technical rigor: read, verify against the codebase, clarify unclear requests, and respond without performative agreement or blind implementation.
Overview
What the receiving-code-review skill does
The receiving-code-review skill defines a clear, repeatable pattern for how an agent should respond when it receives code review feedback, especially on GitHub pull requests. It focuses on technical evaluation over social performance.
Instead of blindly agreeing with reviewers or immediately implementing suggestions, this skill trains the agent to:
- Read all feedback before reacting
- Restate or clarify the requested change
- Verify the feedback against the actual codebase
- Evaluate whether the suggestion is technically correct for this repository
- Respond with grounded technical reasoning, not flattery
- Implement changes one by one, only after understanding and verification
Who this skill is for
Use receiving-code-review if you:
- Work with an AI assistant on GitHub PRs or code review discussions
- Want the assistant to act like a thoughtful reviewer or reviewee, not a yes-person
- Need help interpreting review comments and deciding what to do next
- Care about the correctness and safety of changes more than fast but shallow agreement
It is particularly useful for:
- Developers collaborating on feature branches and pull requests
- Tech leads who want consistent handling of review feedback
- Teams experimenting with Claude or other LLMs as PR review partners
When this skill is not a good fit
This skill is not designed for:
- Generating new features from scratch
- Large-scale refactors without review context
- Social niceties, praise, or status updates
If you mainly want code generation, documentation drafting, or high-level design help, pair this with other skills. Use receiving-code-review specifically when the agent is in the loop of receiving and responding to code review feedback.
Key benefits
With receiving-code-review installed, your agent will:
- Avoid performative replies like "You're absolutely right!" or "Great point!"
- Base responses on the real code, not assumptions
- Ask for clarification instead of guessing when feedback is unclear
- Push back respectfully when a suggestion is technically incorrect
- Reduce the risk of implementing misunderstood or harmful changes
This makes it easier to trust the agent in your code review, git workflow, and PR review processes.
How to Use
1. Installation
To install the receiving-code-review skill from the obra/superpowers repository, run:
npx skills add https://github.com/obra/superpowers --skill receiving-code-review
This pulls the skill definition (including SKILL.md) into your agent skill environment. Installation assumes you already have the npx skills tooling available; if not, set that up first according to your platform or agent host instructions.
2. Files to review after install
After installation, inspect the core file for this skill:
skills/receiving-code-review/SKILL.md – the canonical description of the behavior pattern for receiving code review feedback.
In the broader obra/superpowers repo you may see shared patterns like:
README.md, AGENTS.md, or metadata.json at the root – general context for how skills are structured and used
These are helpful for understanding how receiving-code-review fits into a larger Claude/agent ruleset, but the operational heart of this skill is in SKILL.md.
3. Core response workflow
The skill defines a specific response pattern whenever the agent receives code review feedback (for example, on a GitHub PR comment thread):
1. READ: Consume all feedback before reacting
2. UNDERSTAND: Restate the requirement in its own words, or ask for clarification
3. VERIFY: Check the feedback against the real codebase
4. EVALUATE: Decide if it is technically sound for THIS repo
5. RESPOND: Give a technical acknowledgment or reasoned pushback
6. IMPLEMENT: Change one item at a time and test each
In practice, this means:
- The agent should not immediately say it will implement a suggestion.
- It first ensures it understands what the reviewer wants.
- It inspects the relevant files/lines or repository state.
- Only then does it decide whether to apply, modify, or reject the suggestion.
This pattern is especially useful for GitHub pull request review scenarios, where context and correctness matter more than speed.
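The six-step pattern above can be sketched as a small decision loop. This is an illustrative Python sketch, not the skill's actual implementation; the injected predicates (`is_clear`, `verify`, `evaluate`) are placeholders for the agent's real codebase checks.

```python
from dataclasses import dataclass

@dataclass
class ReviewItem:
    comment: str        # the reviewer's raw request
    verified: bool = False
    accepted: bool = False

def handle_feedback(items, is_clear, verify, evaluate):
    """Walk READ -> UNDERSTAND -> VERIFY -> EVALUATE -> RESPOND over a batch
    of review items; the three predicates stand in for real repository checks."""
    # READ + UNDERSTAND: look at every item before acting on any of them
    unclear = [i.comment for i in items if not is_clear(i)]
    if unclear:
        # STOP: ask about the unclear items instead of implementing the rest
        return ("clarify", unclear)
    responses = []
    for item in items:
        item.verified = verify(item)                       # VERIFY against the codebase
        item.accepted = item.verified and evaluate(item)   # EVALUATE for this repo
        # RESPOND: accept with a plan, or push back with technical reasoning;
        # IMPLEMENT would then proceed one accepted item at a time
        responses.append((item.comment, "implement" if item.accepted else "push back"))
    return ("respond", responses)
```

Here a "push back" outcome means the agent replies with repo-grounded reasoning rather than silently dropping the suggestion.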
4. Forbidden and discouraged responses
The skill explicitly forbids certain kinds of responses that are common in LLM output but harmful in serious code review:
Forbidden examples:
"You're absolutely right!"(explicitly identified as a violation of the broader CLAUDE rules)"Great point!"/"Excellent feedback!"and similar praise-only responses"Let me implement that now"when the agent has not yet verified the suggestion
Instead, when using receiving-code-review, the agent should:
- Restate the technical requirement: e.g. "You are asking to extract this logic into a separate function to avoid duplication."
- Ask targeted questions when something is unclear
- Provide technical reasoning when it believes the suggestion is incorrect or incomplete
- Move toward actual changes without over-explaining or praising
This keeps the conversation focused on code quality, not flattery.
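As a toy illustration of the rule, a draft reply can be linted for the forbidden openers before it is posted. The function name and phrase list below are illustrative, not part of the skill itself:

```python
# Phrases the skill calls out as performative; lowercase for matching.
FORBIDDEN_OPENERS = (
    "you're absolutely right",
    "great point",
    "excellent feedback",
    "let me implement that now",
)

def flag_performative(reply: str) -> list[str]:
    """Return any forbidden phrases found in a draft reply (case-insensitive)."""
    lowered = reply.lower()
    return [phrase for phrase in FORBIDDEN_OPENERS if phrase in lowered]
```

A reply that restates the requirement and references the code passes cleanly, while a praise-only opener gets flagged.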
5. Handling unclear or partial feedback
The skill defines a strict rule for ambiguous feedback:
IF any item in the feedback is unclear:
    STOP – do not implement anything yet
    ASK for clarification on the unclear items
The rationale: individual review items may be related, so implementing the ones you "think" you understand while others remain ambiguous can lead to:
- Conflicting changes
- Broken workflows
- Misaligned behavior relative to the reviewer’s intent
For example, if a reviewer says "Fix 1–6" and the agent only understands items 1, 2, 3, and 6, receiving-code-review guides it to:
- Pause implementation
- Ask specific clarifying questions about items 4 and 5
- Only implement once the full set of requirements is understood
This behavior is critical in automated or semi-automated git workflows, where partial understanding can quickly turn into broken branches.
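The "Fix 1–6" scenario can be sketched as a simple gate. The function name and item contents below are made up for illustration; only the rule (ask about everything unclear before implementing anything) comes from the skill:

```python
def clarification_gate(items, understood):
    """Return clarifying questions to ask, or None when every item is understood.
    `items` maps item numbers to requests; `understood` holds the numbers the
    agent is confident it has parsed correctly."""
    unclear = sorted(set(items) - set(understood))
    if unclear:
        # Do not implement anything yet: related items may change each other's
        # meaning, so partial implementation risks conflicting changes.
        return [f"Could you clarify item {n}: {items[n]!r}?" for n in unclear]
    return None

# A reviewer says "Fix 1-6" but only items 1, 2, 3, and 6 are understood:
feedback = {
    1: "dedupe parsing", 2: "rename flag", 3: "add test",
    4: "fix it", 5: "same here", 6: "update docs",
}
questions = clarification_gate(feedback, understood={1, 2, 3, 6})
```

The gate returns targeted questions for items 4 and 5, and only once every item is understood does it return `None`, clearing the way for implementation.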
6. Integrating with your GitHub / PR workflow
To make the most of receiving-code-review in a real project:
- Attach the skill to the agent you use for:
- Reviewing pull requests
- Drafting responses to reviewer comments
- Helping triage or summarize review feedback
- Ensure repository access for the agent so it can actually verify suggestions against:
- Current branch code
- Relevant files and modules
- Combine with complementary skills for best results, such as:
- A coding or refactoring skill for implementing the agreed changes
- Repository navigation or search skills to quickly locate affected code
- Educate your team that the agent will:
- Ask clarifying questions instead of guessing
- Sometimes push back on incorrect or risky suggestions
- Avoid generic praise in favor of specific technical responses
When integrated this way, receiving-code-review becomes a guardrail that keeps your AI collaborator disciplined and trustworthy in code review conversations.
7. When to activate this skill
Use receiving-code-review whenever your prompt or workflow indicates the agent is:
- Reading human or bot feedback on a pull request
- Going through inline comments on GitHub diff views
- Processing review notes in a code-review tool
You generally do not need it when:
- Generating the initial code or first draft of a feature
- Writing design documents or ADRs
- Performing non-review tasks like dependency upgrades
Activating the skill only in review contexts keeps your agent behavior predictable and focused.
FAQ
What problem does receiving-code-review solve?
The receiving-code-review skill solves the problem of shallow, performative AI responses to code review feedback. Instead of always agreeing and immediately changing code, the agent:
- Reads all feedback
- Verifies it against the existing codebase
- Clarifies ambiguous requests
- Pushes back with technical reasoning when necessary
This significantly reduces incorrect implementations and miscommunications in GitHub PRs and other code-review tools.
How do I install receiving-code-review?
Install the skill from the obra/superpowers repository using:
npx skills add https://github.com/obra/superpowers --skill receiving-code-review
After installation, review SKILL.md under the receiving-code-review skill directory to understand the exact behavior rules.
Does this skill change how code is written?
Indirectly. receiving-code-review does not generate code by itself, but it strongly influences how and when code changes are made by enforcing:
- Verification before implementation
- Item-by-item changes and testing
- Avoidance of partial, misunderstood fixes
Pair it with coding skills to handle the actual implementation once review feedback has been validated.
Can receiving-code-review push back on a human reviewer?
Yes. The skill explicitly allows and encourages reasoned, technical pushback when feedback is:
- Incorrect for the current codebase
- Based on outdated assumptions
- Likely to introduce bugs or regressions
The pushback must be grounded in concrete details from the repository, not opinions.
Is this skill only for GitHub?
The skill is written with GitHub-style PR review workflows in mind, but it applies to any environment where an agent receives structured code review feedback, including:
- Git-based code review tools
- Internal review dashboards
- Chat-based review sessions where comments reference specific files and lines
If your workflow resembles PR comments plus a git repository, receiving-code-review is a good fit.
How does this interact with CLAUDE or other agent rules?
In the obra/superpowers ecosystem, skills are layered with higher-level rules (often captured in files like CLAUDE.md). receiving-code-review references those expectations by forbidding responses like "You're absolutely right!" that violate the spirit of those rules.
Use it alongside your existing agent rules to:
- Enforce stricter review behavior
- Avoid social over-optimization
- Maintain consistency across different projects and repositories
What if my team prefers more polite responses?
You can still maintain professional tone, but this skill prioritizes clear technical communication over politeness formulas. If you need softer wording, you can:
- Add separate guidelines for tone in other skills
- Keep receiving-code-review as the backbone for verification and rigor
This separation lets you adjust style without weakening the core review discipline.
How do I know if this skill is working correctly?
Signs that receiving-code-review is active and effective include:
- The agent no longer replies with generic praise to review comments
- It restates requirements before acting
- It asks questions when feedback is incomplete or ambiguous
- It references specific files, functions, or lines when accepting or challenging suggestions
If you see immediate "I’ll implement that" answers with no verification, revisit your skill configuration and ensure this skill is enabled in review contexts.
