code-reviewer
by Shubhamsaboo
code-reviewer is an AI code review skill that follows a strict review order: security, performance, correctness, and maintainability. It uses rule files for SQL injection, XSS, N+1 queries, error handling, naming, and type hints, making PR reviews more consistent than a generic review prompt.
This skill scores 78/100: a solid pick for users who want a lightweight, rule-based code review aid. It is quick to trigger and understand, and the included examples give agents more concrete review behavior than a generic prompt, but users should expect limited rule coverage and a document-driven workflow rather than a fully operational review system.
- Triggering is clear: SKILL.md explicitly says to use it for PR review, security audits, performance checks, and pre-deployment review.
- Operational structure is easy to follow: AGENTS.md compiles all rules and SKILL.md gives a priority order of Security → Performance → Correctness → Maintainability.
- The rule files provide concrete, reusable review leverage with bad/good examples for SQL injection, XSS, N+1 queries, error handling, naming, and type hints.
- Coverage is narrow: only six review rules are included, so it is not a full general-purpose code review framework.
- No install command or executable workflow is provided, so agents still need to infer how to apply the guidance during a review.
Overview of code-reviewer skill
The code-reviewer skill is a focused framework for AI-assisted code review. Instead of relying on one broad prompt, it gives the agent a clear review order and concrete rules for high-value issues first: security → performance → correctness → maintainability. For most teams, that is the real job-to-be-done: catch risky defects early, not generate vague style comments.
Who should install code-reviewer
code-reviewer is best for developers, reviewers, and AI-agent users who want more consistent PR reviews without building a custom review checklist from scratch. It fits especially well if you review web apps, backend code, database access, or Python/JavaScript code where security and data-access mistakes are costly.
What makes code-reviewer different from a generic review prompt
The main differentiator is that the code-reviewer skill is backed by explicit rule files, not just a short instruction. The repository includes targeted guidance for:
- SQL injection prevention
- XSS prevention
- N+1 query detection
- error handling
- naming clarity
- type hints
That makes it more reliable for common high-impact review patterns than “please review this code” alone.
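To make the SQL injection rule concrete, here is a minimal sketch of the bad/good pattern that kind of rule typically targets. The function names and payload are illustrative, not taken from the repository's rule files; the actual examples live in rules/security-sql-injection.md.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # BAD: string interpolation lets attacker-controlled input
    # rewrite the query (e.g. username = "x' OR '1'='1").
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # GOOD: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 1 row leaks despite no matching name
print(len(find_user_safe(conn, payload)))    # 0 rows: the input stays data
```

A rule file that pairs a bad and a good version like this gives the agent a detection criterion, not just a warning label.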
What users usually care about first
Before installing, most users want to know:
- Will it find important issues or mostly nitpick?
- Is it useful on partial diffs, not just full repos?
- Does it help with security and performance, not only style?
- How much setup is needed?
For those questions, code-reviewer scores well on issue prioritization and low setup, but its coverage is intentionally narrow. It is strongest when your main goal is structured review against the included rules.
Best-fit and misfit cases
Best fit:
- PR review before merge
- quick security sanity checks
- review of DB-access code
- frontend rendering/output safety checks
- code quality pass on Python or JavaScript
Misfit:
- deep architecture review across many services
- framework-specific lint replacement
- language-specific static analysis at compiler depth
- compliance-heavy audits that need formal standards mapping
How to Use code-reviewer skill
code-reviewer install options
If your agent environment supports the skills CLI, install code-reviewer from the upstream repository with:
npx skills add Shubhamsaboo/awesome-llm-apps --skill code-reviewer
If your setup does not use that CLI, open the source at awesome_agent_skills/code-reviewer/ and load the skill files manually into your agent workflow.
Read these files first
To use code-reviewer well, read the files in this order:
- SKILL.md — what the skill is for and its review priority
- AGENTS.md — compiled review guidance with examples
- rules/security-sql-injection.md
- rules/security-xss-prevention.md
- rules/performance-n-plus-one.md
- rules/correctness-error-handling.md
- rules/maintainability-naming.md
- rules/maintainability-type-hints.md
This path gets you from decision-making to concrete examples quickly.
The review priority that matters in practice
A practical strength of code-reviewer is its built-in ordering:
- Security
- Performance
- Correctness
- Maintainability
Use that order in prompts too. It prevents the common failure mode where the agent spends half the review on naming and formatting while missing injection risk or database inefficiency.
What input code-reviewer needs
The skill works best when you provide:
- the diff or changed files
- the language/framework
- user-controlled inputs
- database/query layer details
- rendering/output context
- what kind of review you want: PR gate, security pass, or broader quality review
Minimal input can still work, but review quality rises sharply when the agent can see where data comes from and where it ends up.
Turn a rough request into a strong code-reviewer prompt
Weak prompt:
Review this code.
Stronger prompt:
Use the code-reviewer skill on this PR diff.
Prioritize findings in this order: security, performance, correctness, maintainability.
Focus especially on:
- SQL injection risk in database access
- XSS risk in rendered user content
- N+1 query patterns
- missing or weak error handling
For each finding, give:
1. severity
2. exact location
3. why it matters
4. a safer or faster alternative
5. whether it blocks merge
This structure aligns directly with the repository’s rule design, so the agent has less guesswork.
Best workflow for pull request review
A good code-reviewer workflow for pull requests is:
- Pass the PR diff first
- Ask for only blocking and high-severity issues
- Fix those
- Run a second pass for correctness and maintainability
- Ask for patch suggestions only after the findings are stable
This two-pass approach keeps the first review high-signal and avoids burying serious issues under medium-priority cleanup.
What the rules are actually good at finding
Based on the included files, code-reviewer is especially useful for finding:
- raw SQL built with string interpolation
- unsafe HTML rendering or dangerous DOM insertion
- ORM patterns that trigger N+1 queries
- broad except: blocks or swallowed errors
- unclear naming that hides intent
- missing type hints in codebases where they improve maintainability
Those are common, expensive mistakes, and the examples in the repo make the detection criteria clearer than a generic review prompt.
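Of those patterns, N+1 queries are the easiest to miss in review because each individual query looks fine. A small sketch of the pattern, using raw sqlite3 rather than an ORM (the repository's rules/performance-n-plus-one.md presumably uses ORM examples; this stdlib version just makes the round-trip count visible):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1,'ana'),(2,'bo'),(3,'cy');
    INSERT INTO posts VALUES (1,1,'a'),(2,2,'b'),(3,3,'c');
""")

query_count = 0
def run(sql, args=()):
    # Count every database round trip so the two patterns are comparable.
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# BAD: one query for the list, then one query per author -> 1 + N round trips.
query_count = 0
authors = run("SELECT id, name FROM authors")
for author_id, _name in authors:
    run("SELECT title FROM posts WHERE author_id = ?", (author_id,))
n_plus_one = query_count  # 4 queries for 3 authors

# GOOD: a single JOIN fetches everything in one round trip.
query_count = 0
run("""SELECT a.name, p.title FROM authors a
       JOIN posts p ON p.author_id = a.id""")
batched = query_count  # 1 query

print(n_plus_one, batched)  # 4 1
```

The cost grows linearly with the list size, which is why the rule treats loops over query results as a review trigger.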
Where the skill is intentionally limited
The current rule set is not broad enough to cover every review category. For example, there is no large built-in catalog for:
- authentication/authorization design
- concurrency hazards
- caching strategy
- API contract stability
- test quality
- infrastructure or deployment review
So install code-reviewer if its specific rule coverage matches your main risks, not because you expect a complete review system.
How to ask for better findings, not more findings
If you want useful output, ask the agent to avoid generic comments and to report only issues that meet a threshold. Example:
Use the code-reviewer skill.
Only report issues that are:
- exploitable security risks
- likely production performance problems
- correctness bugs with user or data impact
- maintainability problems that materially reduce readability or safety
Do not comment on formatting unless it affects correctness or security.
That keeps the review aligned with the skill’s strongest value.
How to use code-reviewer on partial context
You do not need the full repository for every run. The skill still works on:
- a single diff
- one controller and one template
- one ORM query path
- one function with its callers
But if you are reviewing security or N+1 patterns, include enough surrounding code to show:
- where user input enters
- how it is validated
- how the query is built
- how output is rendered
- whether loops trigger repeated queries
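When pasting partial context, it helps to show the whole source-to-sink path in one place. A hypothetical handler annotated with exactly those five points (the function and schema are invented for illustration, not taken from the skill's files):

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")
conn.execute(
    "INSERT INTO comments VALUES ('mallory', '<script>alert(1)</script>')"
)

def render_comments(author: str) -> str:
    # 1. Input enters here: `author` is user-controlled.
    # 2. Validation: reject obviously malformed input early.
    if not author.isidentifier():
        raise ValueError("invalid author")
    # 3. Query construction: parameterized, so no SQL injection.
    rows = conn.execute(
        "SELECT body FROM comments WHERE author = ?", (author,)
    ).fetchall()
    # 4. Output rendering: escape before inserting into HTML (no XSS).
    return "".join(f"<p>{html.escape(body)}</p>" for (body,) in rows)

print(render_comments("mallory"))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

A snippet shaped like this lets the agent apply the SQL injection and XSS rules directly instead of guessing where the data comes from.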
Suggested output format for teams
For team adoption, ask the agent to return findings like this:
Severity: Critical / High / Medium
Category: Security / Performance / Correctness / Maintainability
Rule: specific rule name
Location: file + line or function
Issue: one-sentence summary
Why it matters: concrete impact
Recommended fix: actionable change
Confidence: high / medium / low
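If your team collects findings programmatically, the same format can be pinned down as a record type. This schema is an assumption layered on the suggested output above, not something the skill defines; the merge-bar logic is one possible policy:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str    # "Critical" / "High" / "Medium"
    category: str    # "Security" / "Performance" / "Correctness" / "Maintainability"
    rule: str        # specific rule name, e.g. "security-sql-injection"
    location: str    # file + line or function
    issue: str       # one-sentence summary
    why: str         # concrete impact
    fix: str         # actionable change
    confidence: str  # "high" / "medium" / "low"

    def blocks_merge(self) -> bool:
        # Example merge-bar: block on Critical/High security or correctness.
        return self.severity in ("Critical", "High") and self.category in (
            "Security", "Correctness",
        )

f = Finding("High", "Security", "security-sql-injection",
            "app/db.py:42", "Query built with f-string interpolation",
            "User input can rewrite the query",
            "Use a parameterized query", "high")
print(f.blocks_merge())  # True
```

Fixing a schema like this makes it easy to diff review output between runs and enforce the merge bar in CI.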
This makes code-reviewer output easier to compare across PRs and reviewers.
code-reviewer skill FAQ
Is code-reviewer worth installing if I already write good review prompts?
Usually yes, if your current prompts are inconsistent. The biggest benefit is not “smarter AI,” but a repeatable review frame with explicit high-priority rules. If your current prompt already enforces security-first review with concrete examples, the gain will be smaller.
Is code-reviewer beginner-friendly?
Yes. The source files are easy to scan, and AGENTS.md gives examples that explain what bad and good code look like. Beginners can use it as both a review tool and a review checklist.
Does code-reviewer replace linters or static analyzers?
No. code-reviewer is a reasoning aid, not a deterministic analyzer. It complements linters, SAST tools, type checkers, and tests. Use it when you want contextual judgment on code changes, especially around common web and database risks.
Which languages and stacks fit best?
The examples clearly favor Python and JavaScript-style code, especially:
- SQL access layers
- web rendering flows
- ORM-backed applications
- frontend output handling
You can still adapt the skill elsewhere, but the strongest built-in value is around those patterns.
When should I not use code-reviewer?
Skip it if your main need is:
- formatting enforcement
- broad architecture assessment
- framework-specific compiler rules
- compliance evidence generation
- exhaustive language coverage
In those cases, the code-reviewer skill may feel too narrow.
Can code-reviewer review full repos, not just PRs?
Yes, but it is better suited to scoped review. Full-repo review often creates too many low-context findings. For best results, review changed files, risky modules, or a defined feature path.
How to Improve code-reviewer skill
Start with the highest-risk paths
To get more value from code-reviewer, point it at code where the included rules matter most:
- request handlers
- template rendering
- query builders
- ORM list endpoints
- error-prone integration boundaries
This produces better signal than running it blindly over utility code.
Provide data-flow context explicitly
A common failure mode is weak security review because the agent cannot trace input to sink. Improve results by stating:
- what input is user-controlled
- what fields hit the database
- what content is rendered into HTML
- what loop or resolver may cause repeated queries
That lets the skill apply its SQL injection, XSS, and N+1 rules with much higher confidence.
Ask for rule-based evidence
A strong way to improve code-reviewer output is to require rule linkage:
Use code-reviewer and tie each finding to the closest rule in AGENTS.md or rules/.
If no rule applies clearly, mark the finding as lower confidence.
This reduces hand-wavy comments and makes the review easier to trust.
Reduce false positives with merge-bar criteria
If the first run is too noisy, tighten the prompt:
- only include issues with production impact
- separate blockers from suggestions
- exclude pure style comments
- require a concrete fix path
This improves adoption because reviewers can act on the output quickly.
Iterate after the first review
The best second-pass prompt is usually not “review again,” but:
Re-run code-reviewer on the updated diff.
Check whether the previous high-severity findings are actually resolved.
Then look for any newly introduced correctness or maintainability issues caused by the fixes.
That catches cases where a fix resolves one finding but introduces a new problem.
Extend the skill carefully if your team adopts it
If code-reviewer becomes part of your workflow, the most useful improvement is to add more rule files in the same style:
- auth and authorization checks
- secrets handling
- CSRF/session safety
- caching misuse
- async/concurrency issues
- test coverage expectations
Keep the same pattern: why it matters, bad example, good example, and impact level. That preserves the skill’s clarity while broadening coverage.
