code-reviewer
by zhaono1
The code-reviewer skill guides structured PR and diff reviews for correctness, security, performance, testing, and maintainability, using repository references and a checklist script to make code review more consistent and actionable.
This skill scores 78/100, which makes it a solid directory listing candidate. Agents get clear activation cues, a concrete review workflow, and useful supporting references that make execution more reliable than a generic "review this code" prompt, though some adoption details are still thin.
- Strong triggerability: SKILL.md explicitly says to use it for code review, PR review, and "review this/check this code" requests.
- Operationally useful workflow: it gives phased steps for gathering changed files and diffs, then reviews across correctness, security, performance, testing, documentation, and maintainability.
- Good supporting substance: three reference docs and a review_checklist.py script add reusable checklists, patterns, and OWASP-oriented security guidance.
- Install/adoption clarity is limited: README only says it is part of the collection, and SKILL.md has no install command or standalone setup guidance.
- Some execution details remain generic: the review process references git diff against main...HEAD and broad checklists, but provides limited guidance for nonstandard base branches, large PRs, or repo-specific review output conventions.
Overview of code-reviewer skill
What the code-reviewer skill does
The code-reviewer skill is a structured PR and diff review workflow for Code Review tasks. Instead of relying on a one-line “review this code” prompt, it pushes the agent to gather the changed files, inspect the diff, understand local project patterns, and then review changes across concrete categories like correctness, security, performance, testing, and maintainability.
Who should install code-reviewer
This code-reviewer skill is best for developers, tech leads, and AI-assisted reviewers who want more consistency than a generic prompt usually gives. It is especially useful if you review pull requests regularly, want severity-based findings, or need a repeatable checklist that covers both logic issues and higher-risk security concerns.
The real job-to-be-done
Most users do not just want “feedback.” They want a review that can answer: what changed, what is risky, what should block merge, what can wait, and what evidence supports each point. The code-reviewer workflow is built around that job by separating context gathering from analysis, which reduces shallow comments based only on snippets.
What makes this skill different
The main differentiator is its review structure. The repository does not stop at a broad instruction to inspect code. It includes:
- a phased review process
- a severity-oriented output style
- focused references for checklist, coding patterns, and security review
- a helper script at scripts/review_checklist.py for generating a review checklist from Git changes
That makes code-reviewer more actionable than a plain review prompt and easier to adapt to team review norms.
When code-reviewer is a strong fit
Use code-reviewer when you have:
- a branch diff against main
- a PR with multiple files or cross-cutting changes
- a need to flag merge blockers versus optional improvements
- security-sensitive changes like auth, input handling, or data access
- a codebase where existing patterns matter as much as abstract best practices
When it is a weaker fit
This skill is less useful when:
- there is no diff or file set to inspect
- you only want style nitpicks
- the task is architecture design rather than code review
- the repo context is unavailable, so pattern comparison cannot happen
- the request is actually for debugging, rewriting, or feature planning
How to Use code-reviewer skill
Install context for code-reviewer skill
The upstream SKILL.md does not publish a direct install command, but the skill lives in zhaono1/agent-playbook under skills/code-reviewer. If your skills runtime supports GitHub skill installs from a repository path or collection, install from that repository and select the code-reviewer skill.
A common pattern is:
npx skills add https://github.com/zhaono1/agent-playbook --skill code-reviewer
If your environment uses a different installer, the key detail is the skill slug: code-reviewer.
Read these files first before relying on it
For the fastest evaluation path, read:
- skills/code-reviewer/SKILL.md
- skills/code-reviewer/README.md
- skills/code-reviewer/references/checklist.md
- skills/code-reviewer/references/security.md
- skills/code-reviewer/references/patterns.md
- skills/code-reviewer/scripts/review_checklist.py
This order matters. SKILL.md tells you how the workflow activates, the references show what standards it applies, and the script reveals how the workflow expects to gather repo evidence.
What input code-reviewer needs to work well
code-reviewer works best when you provide:
- the base branch, usually main
- the PR goal or linked ticket
- the changed files or full diff
- any risk areas you care about most
- framework or language context
- whether you want a quick pass or merge-blocking review
Without that, the review can still run, but it will lean generic.
How the skill gathers review context
The repository makes the expected review flow explicit:
- get changed files with git diff main...HEAD --name-only
- inspect commit history with git log main...HEAD --oneline
- inspect the actual diff with git diff main...HEAD
- read nearby docs and similar files for local conventions
That is important because many weak AI reviews skip context gathering and jump straight into abstract best practices. This skill is better when it first anchors on what actually changed.
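The context-gathering commands above can be wrapped in a small helper. This is a sketch, not code from the repository, and its only assumption is that your base branch may differ from main:

```python
import subprocess

def git_context_commands(base="main"):
    """Build the three context-gathering commands for a given base branch.

    The skill's docs hardcode main, but many repos use master or develop,
    so the base branch is taken as a parameter here."""
    rng = f"{base}...HEAD"
    return {
        "changed_files": ["git", "diff", rng, "--name-only"],
        "commits": ["git", "log", rng, "--oneline"],
        "full_diff": ["git", "diff", rng],
    }

def run(cmd):
    """Run one command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Example: gather context against a non-default base branch.
cmds = git_context_commands(base="develop")
# files = run(cmds["changed_files"]).splitlines()
```

Passing the base branch explicitly also addresses the nonstandard-base-branch gap noted earlier.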
A practical code-reviewer prompt template
Use a prompt closer to this:
Review this branch with the code-reviewer skill.
Base branch: main
Goal: add password reset flow for users
Priority areas: security, correctness, test gaps
Constraints: keep current API shape, do not request large refactors
Please classify findings by severity: critical, high, medium, low.
For each finding, cite the file, explain the risk, and suggest the smallest safe fix.
This is better than “review my code” because it gives the skill the branch target, business intent, review priorities, and feedback format.
Stronger inputs vs weaker inputs
Weak input:
Review this PR
Stronger input:
Use code-reviewer on the diff against main.
Focus on auth flows, input validation, and regression risk.
Check whether tests cover unhappy paths and whether any existing project patterns were broken.
Flag only issues that are actionable before merge unless clearly marked as low severity.
The stronger version materially improves output quality because it narrows the review scope, names risk areas, and tells the agent how opinionated to be.
Suggested workflow for real PR review
A practical code-reviewer guide looks like this:
- Gather changed files and diff.
- Read the PR description or ticket.
- Sample adjacent files to learn conventions.
- Run the review by category: correctness, security, performance, code quality, testing, documentation, maintainability.
- Group findings by severity.
- Ask for a second pass on the highest-risk files if the first review found serious issues.
This two-pass pattern works well because the first pass finds broad risks and the second pass improves precision.
Use the references to make reviews less generic
The support files are the biggest reason to choose this skill over an ordinary prompt:
- references/checklist.md keeps the review systematic
- references/security.md adds OWASP-oriented checks
- references/patterns.md gives concrete good/bad implementation examples
If a review feels vague, tell the agent to explicitly apply one or more of these references while analyzing the diff.
Use the helper script when you want a review scaffold
The repository includes:
python scripts/review_checklist.py
This is useful if you want a machine-generated checklist from current Git state before asking the agent for narrative findings. It is a practical bridge between raw diff inspection and a full written review.
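The repository's actual review_checklist.py is not reproduced here, but as a hedged illustration of the idea, a minimal generator might map changed file paths to checklist items like this (all names and rules below are hypothetical):

```python
def build_checklist(changed_files):
    """Derive review checklist items from a list of changed file paths.

    Hypothetical sketch only -- it illustrates the path-to-checklist idea,
    not the logic of the real review_checklist.py script."""
    items = ["Summarize what changed and why"]
    for path in changed_files:
        if path.endswith((".py", ".js", ".ts", ".go")):
            items.append(f"Check correctness and edge cases in {path}")
        if "auth" in path or "login" in path:
            items.append(f"Apply OWASP-oriented security checks to {path}")
        if path.startswith("tests/"):
            items.append(f"Verify {path} covers unhappy paths")
    items.append("Group findings by severity before writing the review")
    return items

for item in build_checklist(["src/auth/reset.py", "tests/test_reset.py"]):
    print(f"- [ ] {item}")
```

Feeding a scaffold like this to the agent before asking for narrative findings keeps the review anchored to the actual change set.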
Output shape that works best in practice
Ask the skill to return:
- a short summary of what changed
- merge blockers first
- findings grouped by severity
- file-level references
- rationale, not just verdicts
- a final “safe to merge?” assessment with caveats
That output style matches the repository’s severity model and makes the review easier to use in real team workflows.
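One way to keep that output shape consistent across reviews is to treat each finding as structured data and sort blockers first. This is an illustrative sketch; the Finding type and render_review function are inventions for this article, not part of the skill:

```python
from dataclasses import dataclass

# Sort order so merge blockers surface first, matching the severity model.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    severity: str   # one of: critical, high, medium, low
    file: str       # file-level reference
    risk: str       # rationale, not just a verdict
    fix: str        # smallest safe change that reduces the risk

def render_review(summary, findings, safe_to_merge):
    """Render findings severity-first, ending with a merge assessment."""
    ordered = sorted(findings, key=lambda f: SEVERITY_ORDER[f.severity])
    lines = [f"Summary: {summary}"]
    for f in ordered:
        lines.append(f"[{f.severity}] {f.file}: {f.risk} -> fix: {f.fix}")
    lines.append(f"Safe to merge: {safe_to_merge}")
    return "\n".join(lines)
```

Even if you never run code like this, asking the agent to emit findings in this field shape makes its output easy to paste into PR comments.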
code-reviewer skill FAQ
Is code-reviewer better than a normal review prompt
Usually yes, if you have real repo context. The value of code-reviewer is not magic analysis depth by itself. It is the combination of activation cues, a phased workflow, checklist coverage, and reference material that pushes the review toward completeness and consistency.
Is code-reviewer beginner-friendly
Yes, with one caveat: beginners still need to supply context. The skill gives a strong structure, but it cannot infer requirements, intended behavior, or team conventions from nothing. New users will get better results if they include the PR goal and base branch up front.
Does code-reviewer only work for pull requests
No. code-reviewer also works on local branch diffs, a set of changed files, or a folder-level review request like "review the code in src/auth/." It is just strongest when there is a clear diff against a known base branch.
What kinds of issues does the code-reviewer skill look for
The repository evidence shows coverage for:
- correctness and edge cases
- security issues, including OWASP-style concerns
- performance problems like unnecessary queries or calls
- code quality and maintainability
- test gaps
- documentation gaps
That breadth makes it suitable for general PR review rather than only security review or style review.
When should I not use code-reviewer
Skip code-reviewer when the task is primarily:
- generating new code
- debugging a runtime failure
- large-scale architecture planning
- formatting or lint cleanup only
- reviewing code without access to the changed context
In those cases, a more specialized skill or a direct task-focused prompt will be a better fit.
Does it enforce one coding style
No. The repository encourages checking existing patterns in similar files before judging the change. That is a good sign for adoption because it reduces generic advice that conflicts with local conventions.
How to Improve code-reviewer skill
Give code-reviewer the intent behind the change
The biggest quality upgrade is to explain what the code is supposed to do. Review quality drops fast when the agent only sees implementation. Add the ticket summary, acceptance criteria, or a one-paragraph intent note so the skill can judge correctness instead of only style and syntax.
Narrow the highest-risk review areas
If you care most about auth, billing, migrations, or concurrency, say so. The skill already covers multiple categories, but targeted priorities improve depth where it matters. This is especially important for larger PRs where a broad review can become shallow.
Provide enough repo context to compare patterns
This repository explicitly points the reviewer to existing conventions. Help it by naming comparable files or modules:
Compare the new handler to the existing patterns in src/api/users/ and src/api/sessions/.
Prefer consistency with those files unless there is a clear bug.
This reduces false positives and makes suggestions more adoptable.
Ask for evidence-based findings only
A common failure mode in AI review is speculative criticism. Improve code-reviewer output by setting a rule like:
Only report an issue if you can point to a specific file change, missing case, or concrete risk. Avoid hypothetical style advice unless it affects maintainability or correctness.
This keeps the review high-signal.
Split large reviews into passes
For large PRs, do not ask for everything at once. Use staged passes:
- correctness and security
- performance and maintainability
- testing and documentation
This mirrors the skill’s category structure and usually produces better findings than a single overloaded request.
Request smaller-fix recommendations
If the first output is too abstract, ask the skill to rewrite findings as minimal safe fixes:
Revise the review. For each high or critical issue, suggest the smallest code change or test addition that would reduce the risk before merge.
That makes the review more actionable for busy teams.
Watch for common failure modes
The most common ways code-reviewer skill output becomes weak are:
- no base branch specified
- no diff provided
- no statement of intended behavior
- huge PRs with no priorities
- no project pattern references
- asking for “everything” and getting generic advice back
Most of these are input problems, not skill problems.
Use the checklist and security references explicitly
If your first review is too broad, ask for a second pass using specific repo references:
- references/checklist.md for completeness
- references/security.md for sensitive changes
- references/patterns.md for consistency and anti-pattern detection
This is one of the easiest ways to improve code-reviewer's output in day-to-day use.
Iterate after the first review
A good second prompt is:
Now re-review only the files with high-severity findings.
Assume the author wants merge-blocking issues only.
Double-check whether each finding is a real defect, a security exposure, or a missing test that hides regression risk.
That follow-up often removes low-value comments and sharpens the final recommendation.
Customize code-reviewer to your team workflow
If you adopt code-reviewer regularly, align it to your merge culture:
- define what counts as blocker vs suggestion
- name your base branch convention
- include your test expectations
- add team-specific security checks
- point the skill to representative files for local style
That is how to turn installing code-reviewer into a workflow improvement rather than just another prompt shortcut.
