
code-review-and-quality

by addyosmani

code-review-and-quality is a structured pre-merge review skill that checks correctness, readability, architecture, security, and performance. Install it from the parent repo, read skills/code-review-and-quality/SKILL.md, and use it with diffs, task context, and test results for stronger review decisions.

Stars: 18.7k
Favorites: 0
Comments: 0
Added: Apr 21, 2026
Category: Code Review
Install Command
npx skills add addyosmani/agent-skills --skill code-review-and-quality
Curation Score

This skill scores 78/100, a solid directory listing: agents get a clearly defined trigger and a substantive review framework that should reduce guesswork compared with a generic “review this code” prompt, though users should expect a document-driven skill rather than a tool-backed workflow.

Strengths
  • Strong triggerability: the frontmatter and 'When to Use' section clearly position it for pre-merge review, post-feature review, refactors, and bug-fix validation.
  • Good operational substance: the skill defines a five-axis review model (correctness, readability, architecture, security, performance) plus approval guidance focused on improving overall code health rather than demanding perfection.
  • High agent leverage from reusable review criteria: the long, structured SKILL.md with many headings and code fences gives agents a consistent checklist-style framework for reviewing code across multiple dimensions.
Cautions
  • No install command, scripts, or support files are provided, so execution depends on the agent interpreting prose instructions rather than invoking a concrete workflow.
  • Repository evidence shows limited practical artifacts beyond SKILL.md, which may leave output format, prioritization, and repo-specific adaptation somewhat open-ended.
Overview


What the code-review-and-quality skill does

The code-review-and-quality skill is a structured review workflow for checking code before merge. Instead of giving a generic “review this PR” prompt, it pushes the agent to assess a change across five concrete axes: correctness, readability, architecture, security, and performance. That makes it useful when you want a decision-ready review, not just scattered comments.
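The five axes can be sketched as a simple checklist structure. The guiding questions below are illustrative examples, not the skill's official wording from SKILL.md:

```python
# Illustrative five-axis checklist; the questions are examples,
# not quoted from the skill's SKILL.md.
REVIEW_AXES = {
    "correctness": "Does the change do what the task requires, including edge cases?",
    "readability": "Can another engineer follow the code without extra context?",
    "architecture": "Does the design fit the existing structure of the codebase?",
    "security": "Are inputs validated and privileges handled safely?",
    "performance": "Are there avoidable hot-path costs or N+1 patterns?",
}

for axis, question in REVIEW_AXES.items():
    print(f"{axis}: {question}")
```

Walking every axis, even briefly, is what separates a decision-ready review from scattered comments.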

Who should install it

Best fit: engineers, tech leads, and AI-assisted coding users who already ship code through PRs and want a repeatable quality gate. It is especially valuable when code was written by another agent, when a bug fix needs regression scrutiny, or when a refactor looks “clean” but may hide correctness or design regressions. If you mainly want style linting, this skill covers more ground than you need.

What users actually care about

Most users evaluating the code-review-and-quality skill care about three things: whether it catches real risks, whether it blocks too much, and whether it works with ordinary repositories. The strongest differentiator here is the approval standard: approve when the change clearly improves code health, even if it is not perfect. That makes it more practical than review prompts that over-index on personal preference.

What it does not replace

This skill is not a static analyzer, test runner, or policy engine by itself. It improves review quality, but it still depends on the code, diff, task context, tests, and conventions you provide. If you do not supply the intended behavior, affected files, or known constraints, the review will be less reliable than the workflow suggests.

How to Use code-review-and-quality skill

Install context and where to read first

For code-review-and-quality install, add the parent skill repo in your skills-enabled environment, then open skills/code-review-and-quality/SKILL.md first. This skill appears to be self-contained: there are no extra rules/, resources/, or helper scripts in the skill folder, so the main document is the implementation. Read the overview, when-to-use, and five-axis review sections before invoking it.

What input the skill needs to review well

Review quality from code-review-and-quality depends heavily on inputs. Provide:

  • the diff or changed files
  • the original task, issue, or acceptance criteria
  • the language/framework
  • test status or failing cases
  • any non-obvious constraints such as backwards compatibility, latency budgets, or security requirements
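As a sketch, the inputs above can be bundled into one structure before invoking the skill. The field names and `is_complete` check here are hypothetical, just to illustrate what a complete input set looks like:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewInput:
    """Context bundle for the review skill; field names are illustrative."""
    diff: str                      # the diff or changed files
    task: str                      # original task, issue, or acceptance criteria
    language: str                  # language/framework, e.g. "Python/Django"
    test_status: str = "unknown"   # passing, failing cases, or "unknown"
    constraints: list = field(default_factory=list)  # e.g. latency budgets

    def is_complete(self) -> bool:
        # A review is only as reliable as its inputs.
        return bool(self.diff and self.task and self.test_status != "unknown")

inputs = ReviewInput(
    diff="--- a/auth.py\n+++ b/auth.py\n...",
    task="Fix login lockout after 5 failed attempts",
    language="Python",
    test_status="2 failing",
    constraints=["backwards compatibility"],
)
print(inputs.is_complete())  # True
```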

A weak prompt is: “Review this code.”
A stronger prompt is: “Use the code-review-and-quality skill to review this auth PR. Focus on correctness, security, and regression risk. Here is the diff, expected login behavior, known edge cases, and current test output. Separate must-fix issues from non-blocking suggestions.”

Turn a rough goal into a complete prompt

A good code-review-and-quality guide prompt should ask for both findings and a merge recommendation. Useful template:

  • what changed
  • why it changed
  • what “correct” behavior looks like
  • what to prioritize among the five axes
  • output format: blockers, warnings, suggestions, approval recommendation

Example:
“Use code-review-and-quality for Code Review on this payment retry change. Review across correctness, readability, architecture, security, and performance. Prioritize correctness and idempotency. Check whether tests cover retry limits and duplicate charge prevention. Return: 1) blockers, 2) non-blocking improvements, 3) approve / approve with changes / do not approve.”
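The template and example above can be turned into a small prompt builder. The function and output format are a sketch, not part of the skill itself:

```python
def build_review_prompt(what, why, correct_behavior, priorities):
    """Assemble a review prompt from the template pieces; illustrative only."""
    return (
        "Use the code-review-and-quality skill.\n"
        f"What changed: {what}\n"
        f"Why: {why}\n"
        f"Correct behavior: {correct_behavior}\n"
        f"Prioritize: {', '.join(priorities)}\n"
        "Output format: blockers, warnings, suggestions, approval recommendation."
    )

prompt = build_review_prompt(
    what="payment retry logic",
    why="duplicate charges under network timeouts",
    correct_behavior="at most one charge per order; retries are idempotent",
    priorities=["correctness", "security"],
)
print(prompt)
```

Keeping the prompt structure fixed across PRs is what makes the review depth repeatable.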

Practical workflow and output tips

Use this skill after implementation and before merge, not only after problems appear. A practical workflow is:

  1. Gather diff, task spec, and test results.
  2. Invoke the skill with axis priorities.
  3. Ask follow-up questions on any blocker.
  4. Revise code.
  5. Re-run the same review prompt on the updated diff.
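The steps above form a loop, which can be sketched as follows. `run_review` is a placeholder for whatever agent invocation your environment provides, and `apply_fixes` stands in for your revision step:

```python
def run_review(diff, context, priorities):
    """Placeholder for invoking the skill in your agent environment."""
    # A real implementation would call your agent with the skill enabled;
    # this stub flags a blocker until the diff mentions a fix.
    return {"blockers": [] if "fix" in diff else ["missing retry limit"]}

def review_loop(diff, context, priorities, apply_fixes, max_rounds=3):
    """Re-run the same review prompt on each revised diff until clean."""
    for round_no in range(1, max_rounds + 1):
        findings = run_review(diff, context, priorities)
        if not findings["blockers"]:
            return round_no, diff   # clean review: safe to merge
        diff = apply_fixes(diff, findings["blockers"])
    return max_rounds, diff         # blockers remain after max_rounds

rounds, final_diff = review_loop(
    diff="add retry",
    context="payment retry task",
    priorities=["correctness"],
    apply_fixes=lambda d, blockers: d + " + fix retry limit",
)
print(rounds)  # 2
```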

Quality improves when you ask the agent to cite file paths, functions, edge cases, and missing tests for each finding. That prevents vague reviews and makes comments actionable in real PRs.

code-review-and-quality skill FAQ

Is code-review-and-quality better than a normal review prompt?

Usually yes, if your problem is inconsistent review depth. The value is not magic analysis; it is the forced coverage model. Generic prompts often over-focus on style or whatever looks easiest to critique. The code-review-and-quality skill is stronger when you need balanced review across correctness, maintainability, security, and performance.

Is it suitable for beginners?

Yes, but with one condition: beginners need to provide more context than they expect. Without acceptance criteria or expected behavior, the review may sound confident while missing domain-specific issues. For junior teams, this skill is most useful as a checklist-backed reviewer, not as the sole merge authority.

When is this skill a poor fit?

Skip code-review-and-quality when you only need formatter-level feedback, a single-axis audit, or an automated policy check. It is also a weaker fit for huge changes with no clear spec, because review quality falls when the task itself is ambiguous. In that case, first break the change into smaller reviewable units.

Does it work across languages and repos?

Yes, because the framework is conceptual rather than language-specific. But ecosystem fit still matters: architecture expectations in a React app, a Go service, and a Python data pipeline are different. The more repo conventions you provide, the better the review will align with local standards instead of generic best practices.

How to Improve code-review-and-quality skill

Give the skill stronger evidence, not more adjectives

The biggest upgrade is better inputs. For code-review-and-quality, “be thorough” helps less than supplying:

  • exact files changed
  • expected outputs
  • known edge cases
  • tests added or missing
  • project conventions that matter

If you want fewer false positives, tell the agent what is intentionally out of scope. If you want deeper review, point it to risky areas like concurrency, authorization, migrations, caching, or external API handling.

Prevent common failure modes

Typical failure modes are predictable: overemphasis on style, missing domain constraints, shallow security comments, and recommendations that ignore the project’s existing patterns. Counter this by asking the skill to distinguish:

  • objective defects vs preference
  • merge blockers vs cleanup ideas
  • local code smell vs system-level architecture risk

That framing matches the skill’s “improve the codebase, don’t chase perfection” philosophy.
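One way to enforce that separation is to tag each finding with a severity tier. The taxonomy below is a sketch mirroring the three distinctions above; the example findings are hypothetical:

```python
from enum import Enum

class Severity(Enum):
    BLOCKER = "objective defect; must fix before merge"
    WARNING = "likely problem; fix or justify"
    SUGGESTION = "preference or cleanup; non-blocking"

# Hypothetical findings from a review pass.
findings = [
    ("auth.py:42 missing None check on token", Severity.BLOCKER),
    ("cache key ignores tenant id", Severity.WARNING),
    ("rename helper for clarity", Severity.SUGGESTION),
]

blockers = [text for text, sev in findings if sev is Severity.BLOCKER]
print(len(blockers))  # 1
```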

Iterate after the first review

Do not stop at the first pass. If the initial review output is generic, ask follow-ups such as:

  • “Which findings are most likely to cause production bugs?”
  • “Which concerns are unsupported by the diff?”
  • “What test cases would prove or disprove your top two blockers?”
  • “Re-review after these fixes and tell me what risk remains.”

This turns the skill from a checklist into a review loop.

Calibrate approvals to your team

To improve the code-review-and-quality skill, align the approval threshold with your team’s real merge policy. The skill’s core judgment is sensible: approve code that materially improves health, even if imperfect. Reinforce that by requesting a final decision in three tiers: safe to merge, merge after fixes, or needs redesign. That keeps review output useful for actual shipping decisions instead of endless commentary.
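The three-tier decision can be sketched as a final mapping from findings to a merge call. The thresholds here are illustrative, not the skill's:

```python
def merge_decision(blockers: int, design_flaws: int) -> str:
    """Map review findings to a three-tier shipping decision; illustrative."""
    if design_flaws > 0:
        return "needs redesign"
    if blockers > 0:
        return "merge after fixes"
    return "safe to merge"

print(merge_decision(blockers=0, design_flaws=0))  # safe to merge
print(merge_decision(blockers=2, design_flaws=0))  # merge after fixes
print(merge_decision(blockers=0, design_flaws=1))  # needs redesign
```

Requesting exactly one of these three outcomes keeps the review pointed at a shipping decision rather than open-ended commentary.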

Ratings & Reviews

No ratings yet