code-reviewer
by Shubhamsaboo
code-reviewer is a lightweight skill for Code Review that turns code or diffs into a structured report covering security, performance, best practices, severity, affected lines or sections, recommended fixes, and an overall quality score.
This skill scores 66/100: acceptable to list for directory users who want a lightweight code review prompt scaffold, though they should expect limited operational depth beyond the core checklist and report format.
- Trigger conditions are explicit: reviewing code, security audits, code quality checks, and pull requests.
- Provides a simple review framework across security, performance, and best practices.
- Defines a structured output format with severity, location, fix, and overall score, which helps agents respond consistently.
- No concrete workflow for pull requests, multi-file reviews, or how to inspect code beyond a generic checklist.
- Lacks examples, supporting files, and constraints, so agents may need extra prompting to apply findings consistently.
Overview of code-reviewer skill
The code-reviewer skill is a lightweight review prompt packaged as a reusable skill for Code Review tasks. Its job is simple: take a code snippet, pull request diff, or file, then return a structured review focused on security issues, performance problems, and general engineering best practices.
What code-reviewer is best for
code-reviewer is a good fit if you want a fast first-pass reviewer that consistently checks for:
- security flaws such as injection risks, XSS, hardcoded secrets, and unsafe data handling
- performance issues such as redundant loops, memory concerns, and missed caching opportunities
- maintainability problems such as unclear naming, weak error handling, poor documentation, and DRY violations
It is most useful for developers reviewing pull requests, auditing suspicious code, or adding a repeatable review checklist to an AI workflow.
The real job-to-be-done
Most users are not looking for a generic opinion on code. They want an actionable review that tells them:
- what is wrong
- how severe it is
- where it is located
- what to change next
That is the main value of the code-reviewer skill: it pushes the model toward a review report instead of an unstructured stream of comments.
Why choose this over a plain prompt
The main differentiator of the code-reviewer skill is not deep automation or repo-aware tooling. It is a stable review frame. The skill already defines:
- the review dimensions
- the expected output structure
- a severity model
- an overall quality score
That helps reduce prompt drift when you want repeated reviews across many files or PRs.
What this skill does not include
This repository entry is intentionally minimal. It only contains SKILL.md; there are no helper scripts, rule files, references, or language-specific checklists. That means code-reviewer is best treated as a reusable review template, not a full static-analysis replacement and not a framework-specific security auditor.
How to Use code-reviewer skill
Install code-reviewer in your skills environment
If you are using the Skills workflow from the repository ecosystem, install code-reviewer with:
npx skills add Shubhamsaboo/awesome-llm-apps --skill code-reviewer
After installation, the main file to inspect is:
SKILL.md
Because this skill has no extra support files, you can understand nearly all of its behavior by reading that one file.
Read SKILL.md before relying on it
SKILL.md tells you exactly what the model will optimize for:
- Security
- Performance
- Best Practices
- Output Format
This matters because the code-reviewer guide is only as strong as the review dimensions it names. If your team also cares about concurrency, API compatibility, test coverage, accessibility, or framework-specific risks, you will need to ask for those explicitly in your prompt.
What input code-reviewer needs
The quality of code-reviewer's output depends heavily on the input you provide. The best inputs are:
- a focused diff from a pull request
- a single file or small related file set
- enough surrounding context to understand data flow
- the language and framework
- the intended behavior
Weak input:
- “Review this code” followed by a large pasted file with no context
Stronger input:
- “Review this Python FastAPI diff for security and performance. Focus on authentication, SQL handling, and error paths. This endpoint should only return the current user's records.”
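If you run reviews from a script rather than pasting by hand, the "stronger input" pattern above can be captured as a small prompt builder. This is a minimal sketch; the function name and fields are illustrative, not part of the skill's format:

```python
# Hypothetical helper: bundle the context the skill cannot infer on its own
# (language, framework, intent, focus areas) in front of the diff.
def build_review_prompt(diff: str, language: str, framework: str,
                        intent: str, focus: list[str]) -> str:
    focus_line = " and ".join(focus)
    return (
        f"Review this {language} {framework} diff for {focus_line}. "
        f"Intended behavior: {intent}\n\n{diff}"
    )

prompt = build_review_prompt(
    diff="--- a/users.py\n+++ b/users.py\n(diff body here)",
    language="Python",
    framework="FastAPI",
    intent="this endpoint should only return the current user's records",
    focus=["security", "performance"],
)
```

Keeping this in one function makes it easy to enforce that every review request carries the same context fields.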
Turn a rough request into a strong review prompt
A rough goal usually sounds like:
- “Check whether this is safe to merge.”
A better prompt for code-reviewer for Code Review looks like:
- what the code is supposed to do
- what changed
- what risks matter most
- whether you want only findings or findings plus patch suggestions
Example prompt shape:
- “Use code-reviewer on this Node.js PR diff. Prioritize SQL injection, secret leakage, and expensive repeated queries. For each issue, give severity, affected line/section, and a concrete fix. If no issue is found in an area, say so briefly.”
That prompt works better because it aligns with the skill's built-in structure while narrowing the review to your real merge risks.
Best workflow for pull requests
A practical workflow is:
- Run code-reviewer on the diff, not the whole repository.
- Ask for only High and Critical findings first.
- Review the flagged locations manually.
- Run a second pass for maintainability and lower-severity cleanup.
- If needed, ask for patch-style fix suggestions for the top findings.
This staged approach avoids burying important issues under style comments.
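The skill itself has no multi-pass mode, so the staging lives in your tooling. One way to sketch the sequencing is as a list of pass-specific instructions prepended to the same diff (the wording below is an example, not mandated by the skill):

```python
# Staged review passes: triage first, cleanup second, fixes last.
PASSES = [
    "Review this diff. Report only High and Critical findings.",
    "Second pass: skip High and Critical (already handled); "
    "focus on maintainability and lower-severity cleanup.",
    "For the top findings above, give patch-style fix suggestions.",
]

def staged_prompts(diff: str) -> list[str]:
    """Return one prompt per review pass, each over the same diff."""
    return [f"{instruction}\n\n{diff}" for instruction in PASSES]
```

You then feed each prompt to the model in order, reviewing flagged locations manually between passes.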
Best workflow for file-level audits
For a single file or function:
- provide the file content
- explain inputs, outputs, and trust boundaries
- identify whether data comes from users, databases, or third-party APIs
- ask the skill to trace risky paths
This is especially important for security reviews, because the skill can only reason from the code you show it.
How to get better line-specific findings
The skill asks for “the specific line or section with the issue,” but models often need help with precise localization. To improve that:
- paste code with line numbers when possible
- keep snippets short enough to preserve structure
- include function names or file paths
- separate old and new code in diffs clearly
If you provide a massive unnumbered file, expect weaker location references.
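Adding line numbers before pasting is cheap to automate. A minimal sketch (the `N| ` prefix format is just one reasonable choice):

```python
def with_line_numbers(snippet: str, start: int = 1) -> str:
    """Prefix each line with its number so findings can cite exact locations."""
    return "\n".join(
        f"{start + i:4d}| {line}"
        for i, line in enumerate(snippet.splitlines())
    )

print(with_line_numbers("def f(x):\n    return x * 2"))
#    1| def f(x):
#    2|     return x * 2
```

Pass `start=` when pasting an excerpt from the middle of a file, so the numbers match the real file.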
When to use code-reviewer on a diff vs full file
Use a diff when:
- you want merge-oriented feedback
- you already trust unchanged code
- you need fast triage
Use a full file when:
- the change depends on surrounding helpers
- data validation happens elsewhere
- the review needs control-flow context
For most teams, the highest-signal pattern is to start with the diff and escalate to the full file only when needed.
What output to expect
The skill is designed to return:
- a severity rating for each finding
- the line or section involved
- a recommended fix
- an overall code quality score from 1 to 10
This makes it easier to plug the output into PR comments, internal checklists, or review summaries without reformatting everything manually.
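If you plan to plug the output into automation, it can help to parse findings into a small structure mirroring that format. The field names below are assumptions for illustration, not a spec the skill guarantees:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    severity: str   # e.g. "Critical", "High", "Medium", "Low"
    location: str   # line or section reference
    fix: str        # recommended change

@dataclass
class ReviewReport:
    quality_score: int              # overall score from 1 (worst) to 10 (best)
    findings: list[Finding] = field(default_factory=list)

    def blockers(self) -> list[Finding]:
        """Findings severe enough to block a merge under a typical policy."""
        return [f for f in self.findings if f.severity in ("Critical", "High")]
```

With this in place, a PR bot can comment only on `blockers()` and file the rest as follow-ups.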
Practical limits before you install
Before adopting code-reviewer, know its limits:
- it does not run code
- it does not parse dependencies automatically
- it has no language-specific rule packs in this repo folder
- it cannot validate whether a reported issue is reachable in production without context
That means you should use it as a reasoning-based reviewer, then confirm high-impact findings with tests, linters, or security tools.
code-reviewer skill FAQ
Is code-reviewer good enough for production security review
No. code-reviewer is useful for surfacing likely security issues early, but it should not replace SAST, dependency scanning, secret scanning, or human review on sensitive code. It is best as an upstream filter that catches obvious or plausible problems before formal review.
Is the code-reviewer skill beginner-friendly
Yes. The structure is simple, and there are no extra files or setup dependencies beyond your normal skills environment. The main beginner challenge is input quality: vague prompts create vague reviews. If you explain what the code should do and where the trust boundaries are, beginners can still get useful output quickly.
How is code-reviewer different from asking an LLM to review code
A plain prompt often produces inconsistent review criteria. The code-reviewer skill keeps the model anchored to a repeatable checklist and output format. You still need to provide context, but the skill reduces the chance of getting a rambling, non-prioritized answer.
When is code-reviewer a poor fit
Skip code-reviewer or supplement it heavily when you need:
- framework-specific compliance checks
- deep architectural review across many files
- exact runtime behavior validation
- strict language-idiom enforcement
- automated code modifications
This skill is deliberately broad and lightweight, so it is not the best fit for highly specialized audits.
Can code-reviewer review non-security code quality issues
Yes. It explicitly covers naming, error handling, documentation, and DRY concerns in addition to security and performance. If your main goal is maintainability rather than vulnerability finding, it can still be useful, but you should say that in the prompt so the balance of feedback shifts accordingly.
Do I need to read the repository before using code-reviewer
Not much. For this skill, reading SKILL.md is usually enough because there are no support folders, scripts, or metadata files that materially change behavior. That low overhead is a plus if you want quick adoption.
How to Improve code-reviewer skill
Give code-reviewer the risk model explicitly
The fastest way to improve code-reviewer output is to tell it what failure matters most:
- auth bypass
- injection
- unsafe file access
- expensive queries
- race conditions
- weak error handling
Without that, the skill may spread attention evenly across too many categories and miss what you care about most.
Add the missing context the skill cannot infer
Provide:
- language and framework
- whether the code is backend, frontend, or infra
- trusted vs untrusted inputs
- performance expectations
- whether this is new code or a regression check
This changes the quality of findings more than adding more code volume.
Narrow the review unit
A common failure mode is reviewing too much code at once. Smaller units improve accuracy:
- one diff
- one endpoint
- one service method
- one config block
If you paste an entire subsystem, findings often become generic and harder to verify.
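One practical way to narrow the unit is to split a unified diff into per-file chunks and review each separately. A minimal sketch, assuming standard `diff --git` headers:

```python
def split_diff_by_file(diff: str) -> list[str]:
    """Split a unified diff into one chunk per file, on 'diff --git' headers."""
    chunks: list[str] = []
    current: list[str] = []
    for line in diff.splitlines():
        if line.startswith("diff --git") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk becomes its own review request, which keeps findings specific and verifiable.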
Ask for evidence-backed findings only
To reduce hallucinated issues, instruct the model to:
- cite the exact code path or line range
- explain why the issue is plausible from the shown code
- separate confirmed observations from speculative concerns
This makes code-reviewer more trustworthy in real review workflows.
Request fixes in the right form
If you want output you can act on quickly, ask for one of these:
- minimal remediation steps
- patch-style suggestions
- safer alternative patterns
- merge-blocker vs follow-up classification
“Recommended fix” is built in, but specifying the form of the fix makes the result more usable.
Calibrate severity to your team
Severity labels are only useful if they match your merge standards. Calibrate code-reviewer for your workflow by telling it what counts as:
- Critical: immediate exploit or data loss risk
- High: likely real issue needing pre-merge fix
- Medium: important but not merge-blocking
- Low: cleanup or maintainability concern
Otherwise, severity may look plausible but not map to your actual review policy.
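Once the rubric is agreed, it is worth encoding the severity-to-policy mapping so automation and humans apply it the same way. This is one team's calibration as an example; adjust the thresholds to your own policy:

```python
# Example merge policy keyed by the four severity labels above.
MERGE_POLICY = {
    "Critical": "block merge; fix immediately",
    "High": "block merge; fix before approval",
    "Medium": "merge allowed; file follow-up ticket",
    "Low": "merge allowed; optional cleanup",
}

def is_merge_blocking(severity: str) -> bool:
    """True if a finding at this severity should block the merge."""
    return severity in ("Critical", "High")
```

Sharing this table with the skill in your prompt keeps its labels aligned with what your reviewers actually enforce.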
Run a second pass after the first review
After the first output, do not just ask “anything else?” Instead, iterate with targeted follow-ups:
- “Re-check only auth and session handling.”
- “Now ignore style and focus on expensive database access.”
- “Challenge your previous findings and remove weak ones.”
- “Suggest tests that would validate the top two issues.”
This produces a sharper second pass than repeating the original request.
Use code-reviewer with other quality gates
The best adoption pattern is to combine code-reviewer's prompt-based review with:
- linters
- test suites
- type checks
- dependency scanners
- human PR review
The skill adds reasoning and prioritization, but it works best when paired with tools that can verify facts automatically.
Improve the skill for your own team
Because this skill is minimal, it is easy to extend. If you fork or adapt it, the highest-value improvements are:
- add language-specific review criteria
- add framework-specific security checks
- define clearer severity rules
- include examples of good inputs
- add separate modes for PR review vs full-file audit
Those changes materially improve output quality more than cosmetic edits.
