receiving-code-review
by obra
receiving-code-review helps you verify PR feedback before editing code. Use it to restate review comments, check them against the codebase, ask for clarification on unclear items, and push back when suggestions do not fit.
This skill scores 78/100, which makes it a solid directory listing: users can quickly tell when to invoke it and agents get a practical review-response workflow that is more disciplined than a generic 'handle code review' prompt. Directory users should still expect a text-only skill with limited implementation scaffolding and some opinionated communication norms.
- Very triggerable: the frontmatter clearly says to use it when receiving code review feedback, especially when comments are unclear or questionable.
- Operationally useful: it gives a concrete step sequence (read, understand, verify, evaluate, respond, implement) plus explicit 'never/instead' response rules.
- Good real-world leverage over a generic prompt: it emphasizes verification against the actual codebase, clarification before partial implementation, and reasoned technical pushback when feedback is wrong.
- The guidance is mostly prose with no support files, checklists, or automations, so execution still relies on the agent interpreting the document correctly.
- Some instructions are context-specific and opinionated (for example calling certain acknowledgments a 'CLAUDE.md violation'), which may not map cleanly to every team's review culture.
Overview of receiving-code-review skill
What the receiving-code-review skill is for
The receiving-code-review skill helps an AI respond to code review feedback with technical discipline instead of automatic agreement. Its job is simple but valuable: when a reviewer suggests changes, the agent should first understand the request, verify it against the actual codebase, and only then implement or push back.
This is most useful when review comments are ambiguous, partially wrong, conflicting, or risky to apply blindly. The receiving-code-review skill is not about sounding polite in a PR thread. It is about avoiding bad changes caused by social reflexes like “you’re right, I’ll fix that” before the issue is validated.
Best-fit users
This skill is a strong fit for:
- developers using AI to help process PR comments
- teams that want less performative agreement and more technical accuracy
- agents working in mature codebases where reviewer suggestions may not fit local constraints
- users who want a repeatable review-response workflow, not just a nicer prompt
If your main problem is “the model implements reviewer feedback too quickly and breaks things,” this skill directly targets that failure mode.
The real job to be done
Users usually do not need help reading a comment. They need help deciding:
- what the reviewer actually means
- whether the suggestion is correct for this repository
- whether clarification is required before changing anything
- how to respond if the reviewer is mistaken or incomplete
- how to implement safely if the feedback is valid
That is the practical value of the receiving-code-review skill: it turns review intake into a verification workflow.
What makes this skill different from a generic review prompt
A normal prompt often pushes the model toward compliance: summarize the feedback, thank the reviewer, and start editing. This skill instead enforces a response pattern:
- read the full feedback
- restate the requirement
- verify against codebase reality
- evaluate technical fit
- respond with acknowledgment, questions, or reasoned pushback
- implement one item at a time
It also explicitly forbids common low-value responses like exaggerated praise or immediate commitment before validation.
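The ordered response pattern above can be modeled as a guard that refuses to skip ahead. This is an illustrative sketch of the discipline the skill describes, not code that ships with it; the stage names mirror the six steps listed, and the class name is invented for the example:

```python
from enum import Enum, auto

class Stage(Enum):
    """The six steps of the response pattern, in order."""
    READ = auto()
    RESTATE = auto()
    VERIFY = auto()
    EVALUATE = auto()
    RESPOND = auto()
    IMPLEMENT = auto()

# Enum iteration follows definition order, so this is the required sequence.
ORDER = list(Stage)

class ReviewResponse:
    """Tracks completed stages and rejects any attempt to jump ahead."""

    def __init__(self):
        self.done = []

    def advance(self, stage):
        expected = ORDER[len(self.done)]
        if stage is not expected:
            # The failure the skill targets: implementing before verifying.
            raise RuntimeError(f"cannot {stage.name} before {expected.name}")
        self.done.append(stage)
```

Calling `advance(Stage.IMPLEMENT)` right after reading and restating raises an error, which is exactly the behavior the skill asks the agent to internalize.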
What to know before installing
This is a lightweight skill with a narrow purpose. It does not ship with helper scripts, references, or repository-specific rules. That is good if you want something easy to adopt, but it also means output quality depends heavily on the review comment and code context you provide.
Read this skill as a behavioral guardrail for Code Review workflows, not a full code review automation framework.
How to Use receiving-code-review skill
How to install the receiving-code-review skill
Install from the obra/superpowers repository:
npx skills add https://github.com/obra/superpowers --skill receiving-code-review
If your environment uses a different skill loader, add the skill from:
https://github.com/obra/superpowers/tree/main/skills/receiving-code-review
Because the repository only exposes SKILL.md for this skill, installation and inspection are straightforward.
What to read first in the repository
Before you decide whether to install receiving-code-review, start with:
skills/receiving-code-review/SKILL.md
That file contains nearly all of the useful behavior:
- the core principle
- the response pattern
- forbidden responses
- the rule for handling unclear feedback
There are no extra rules/, resources/, or support scripts to learn first, so time-to-understanding is low.
When to invoke receiving-code-review in practice
Use the receiving-code-review skill at the moment feedback arrives, before code changes begin.
Good trigger cases:
- “Please simplify this logic.”
- “This should use a transaction.”
- “Fix comments 1–6.”
- “Why not just cache this?”
- “Use pattern X instead.”
Especially invoke it when:
- comments are batched
- some items are unclear
- the reviewer may not know local architectural constraints
- the AI is tempted to start patching immediately
What input the skill needs
The skill performs best when you provide four things:
- the exact review feedback
- the affected file paths or diff
- relevant constraints in the codebase
- your desired next step
Useful inputs include:
- PR comments copied verbatim
- the current implementation
- test failures or performance constraints
- architecture rules that may conflict with the suggestion
- whether you want a response draft, an evaluation, or implementation help
Without codebase context, the skill can still help interpret comments, but it cannot verify technical fit well.
Turn a rough goal into a strong prompt
Weak prompt:
Handle this review feedback.
Stronger prompt:
Use the receiving-code-review skill.
Review feedback:
"Please combine these queries and move validation into the controller."
Context:
- Files: app/services/order_sync.rb, app/controllers/orders_controller.rb
- Current pattern: validation is intentionally kept out of controllers
- This code handles retry behavior and partial failures
- I want to know whether the suggestion is correct for this codebase
Please:
1. Restate what the reviewer is asking
2. Identify any ambiguity
3. Verify the suggestion against the current design
4. Recommend whether to implement, ask a question, or push back
5. If valid, propose the smallest safe change
This works better because it gives the skill enough material to perform the key step: verification, not just paraphrasing.
A practical receiving-code-review workflow
A reliable receiving-code-review usage flow looks like this:
- paste the full review thread or comment batch
- ask the agent to separate clear vs unclear items
- require restatement of each requested change
- have it check each item against the codebase
- ask for a recommendation: implement, clarify, or disagree
- only then move into edits, one issue at a time
This prevents the common mistake of partially implementing a batch of comments while still misunderstanding the rest.
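The triage step in this flow can be sketched as a small function. The rule it encodes comes from the skill: any unclear item blocks the whole batch, because comments are often interdependent. The `Comment` type and field names here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    clear: bool  # did we fully understand what the reviewer is asking?

def triage(comments):
    """Split a review batch and decide whether implementation may start.

    Returns (clear_items, unclear_items, may_implement). Implementation
    is blocked if even one item is unclear, since resolving it may change
    the meaning of the others.
    """
    clear = [c for c in comments if c.clear]
    unclear = [c for c in comments if not c.clear]
    may_implement = not unclear
    return clear, unclear, may_implement
```

With a batch like "fix points 1–6" where two points are ambiguous, this returns `may_implement = False` even though four points look actionable, which is the gating behavior the workflow above asks for.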
How to handle unclear or bundled comments
One of the strongest ideas in the skill is: if any item is unclear, stop before implementing.
That matters in real PRs because comments are often interdependent. If a reviewer says “Fix points 1–6” and points 4–5 are unclear, implementing 1–3 and 6 can lock you into the wrong direction.
A strong prompt here is:
Use receiving-code-review. Group this feedback into:
- clear and ready to verify
- unclear and needs clarification
Do not recommend implementation until all linked items are understood.
What good output from this skill should look like
A good result is not “Thanks, great catch.” It should include:
- a clean restatement of the reviewer’s technical request
- any assumptions or ambiguities
- verification against the actual code
- a decision with reasoning
- either a clarification question, a respectful pushback, or a safe implementation plan
If the output jumps directly from comment to code change, the skill is not being used correctly.
Suggested prompt patterns for Code Review work
Use these patterns depending on your goal.
For evaluation only:
Use the receiving-code-review skill to evaluate whether this review comment is technically correct for this repository. Do not implement yet.
For drafting a response:
Use the receiving-code-review skill to write a concise technical reply to this PR comment. Avoid praise language. Ask for clarification if needed.
For implementation after validation:
Use the receiving-code-review skill. First verify the reviewer's request against the codebase. If valid, propose the smallest testable change and list risks before editing.
Practical tips that improve output quality
Small input upgrades make a big difference:
- include the reviewer’s exact wording, not your paraphrase
- include neighboring code, not only the target function
- say whether the reviewer is asking for style, correctness, performance, or architecture
- tell the model what is non-negotiable in your codebase
- ask it to identify where the reviewer may be assuming facts not in evidence
This skill gets better when the agent can compare the requested change to repository reality.
receiving-code-review skill FAQ
Is receiving-code-review only for replying to humans in PR comments?
No. It is equally useful for internal evaluation before you respond at all. In many cases the best first use of receiving-code-review is private: determine whether the comment is correct, incomplete, or unsafe before drafting any public response.
Is this skill beginner-friendly?
Yes, with one caveat. The workflow is easy to understand, but beginners may still lack the technical context needed to judge whether feedback fits the codebase. The skill improves discipline; it does not replace engineering judgment.
How is this different from ordinary “analyze this PR feedback” prompts?
The key difference is the built-in refusal to perform instant agreement. The receiving-code-review guide in SKILL.md centers verification, clarification, and justified pushback. Generic prompts often optimize for smooth social language rather than correctness.
When should I not use the receiving-code-review skill?
Skip it when the task is already fully specified and mechanically safe, such as:
- applying an obvious typo fix
- renaming a symbol with no behavioral impact
- following a confirmed team convention already documented
It is also not the right tool for performing a full outbound code review of someone else’s code. This skill is specifically about receiving feedback well.
Can receiving-code-review help when the reviewer is wrong?
Yes. In fact, that is one of its strongest use cases. The skill encourages a reasoned technical response instead of defensive tone or automatic compliance. If the reviewer’s suggestion conflicts with the codebase, you should expect the agent to explain why and propose a clearer reply.
Does it support batched review feedback?
Yes, but only if you provide the full batch and let the skill separate clear from unclear items. The skill strongly implies that partial understanding should block implementation. That is especially useful for “fix all of these comments” situations.
How to Improve receiving-code-review skill
Give the skill repository reality, not just reviewer text
The fastest way to improve receiving-code-review output is to pair comments with:
- file paths
- current implementation snippets
- constraints
- tests
- relevant architecture patterns
A review suggestion cannot be evaluated in the abstract if it depends on local design decisions.
Ask for a decision, not just a summary
Many weak runs happen because the prompt only asks the agent to “process” feedback. Better prompts force a choice:
- implement
- ask a clarifying question
- push back with reasoning
- defer pending missing context
This makes the skill operational instead of descriptive.
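The four-way choice above can be written down as a decision function. This is a simplified sketch under assumed inputs (a tri-state verification result, an ambiguity flag, and a constraint-conflict flag); the real judgment is the agent's, but the branch order mirrors the skill's priorities:

```python
def decide(verified_correct, ambiguous, conflicts_constraint):
    """Map verification results to one of the four decisions.

    verified_correct: True, False, or None (not yet verifiable).
    Ambiguity is checked first: clarification precedes everything else.
    """
    if ambiguous:
        return "ask a clarifying question"
    if conflicts_constraint:
        return "push back with reasoning"
    if verified_correct is None:
        return "defer pending missing context"
    return "implement" if verified_correct else "push back with reasoning"
```

Note that "implement" is only reachable when the suggestion is unambiguous, conflict-free, and verified, which is precisely what makes the skill operational instead of descriptive.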
Prevent the most common failure mode
The biggest failure mode is premature implementation. The model sees a reviewer suggestion and starts editing before checking:
- whether the reviewer understood the current code
- whether constraints make the suggestion invalid
- whether other comments change the meaning
- whether the requested change has hidden side effects
Explicitly tell the agent: “Do not implement until verification is complete.”
Provide stronger inputs for ambiguous review threads
If the feedback is vague, add the missing frame yourself:
Use receiving-code-review.
The reviewer says: "This flow should be simplified."
Please identify at least 3 plausible interpretations, map each to the current code, and tell me what clarification question would unblock implementation safely.
That is much better than asking the model to guess one meaning and proceed.
Make pushback technical and concise
If the reviewer is wrong, the best output is short, specific, and evidence-based. Ask for:
- the assumption the reviewer appears to be making
- the codebase fact that conflicts with that assumption
- the smallest respectful response that explains the mismatch
This keeps the receiving-code-review for Code Review workflow useful instead of confrontational.
Iterate after the first answer
After the first run, improve the result by supplying whichever piece is still missing:
- unclear reviewer intent
- missing code snippet
- architectural constraint
- test evidence
- diff context
A good second-pass prompt is:
Re-run receiving-code-review with this added context. Update your recommendation and tell me whether the original review comment is now clear enough to implement.
Pair the skill with implementation only after validation
The best adoption pattern is two-stage:
- use receiving-code-review to evaluate the comment
- only then ask for code edits
This separation improves trust because you can inspect the reasoning before the model starts changing files.
Use the skill as a team standard
If your team wants consistent AI behavior in PR workflows, turn the skill into a default instruction for inbound review handling:
- no praise-first responses
- no blind implementation
- ask when unclear
- verify against local code
- push back when technically justified
That is where the receiving-code-review skill has the most lasting value: not as a one-off prompt, but as a repeatable operating habit.
