design-review
by garrytan
design-review is a UX-minded design QA skill for auditing live interfaces: it spots spacing, hierarchy, visual-consistency, and interaction issues, then fixes them iteratively with verification. It supports plan-mode review before implementation, and it is useful when you want a design-review guide that produces concrete source changes instead of vague advice.
This skill scores 84/100, making it a solid directory-listing candidate for users who want a specialized design QA workflow rather than a generic prompt. The repository gives enough trigger guidance and execution detail that an agent can use it with little guesswork, though note that it is oriented toward live-site visual fixes rather than broad design planning.
- Explicit trigger language for "visual design audit," "design qa," and "fix design issues" makes it easy to invoke correctly.
- Strong operational scope in the description: it targets visual inconsistency, spacing, hierarchy, AI-slop patterns, and slow interactions, then iteratively fixes issues with re-verification.
- The SKILL.md body is substantial and structured, with many workflow and constraint signals plus repo/file references, suggesting real execution guidance rather than a placeholder.
- No install command, scripts, references, or support files are present, so adoption depends almost entirely on the SKILL.md content.
- The skill is specialized for design review and live implementation; plan-mode review is explicitly split out into /plan-design-review, so it is not a one-size-fits-all design skill.
Overview of design-review skill
What design-review does
design-review is a design QA skill for catching visual inconsistency, spacing problems, hierarchy issues, AI-slop patterns, and sluggish interactions in a real codebase, then fixing them iteratively with verification. It is best for teams that want to move from “this feels off” to concrete source changes with before/after validation.
Who should use it
Use the design-review skill when you need a UX-minded agent to audit a live interface, polish an implementation, or review a UI that already exists in code. It is a strong fit for product designers, front-end engineers, and agents working on apps where visual consistency matters more than generating net-new layouts.
What makes it different
Unlike a generic UX-audit prompt, this design-review guide is workflow-aware: it is built to identify issues, make atomic fixes, and re-check the result. It also supports plan-mode separation, which matters when you want review before implementation rather than immediate edits.
How to Use design-review skill
Install and route the task
Install the skill through your skills manager (the repository itself ships no install command), then point the agent at the repo containing the UI you want reviewed. If you are working in plan mode, route to /plan-design-review; if you want code changes, use the active review path described in the skill.
Give the skill the right input
A good design-review usage prompt names the screen, the user flow, and the failure you care about. Weak: “Make this look better.” Strong: “Review the checkout modal for spacing, hierarchy, and button clarity on mobile, then fix the worst issues.” The second gives the skill a target, a constraint, and a success condition.
Read these files first
For a design-review guide that actually helps you decide fit, read SKILL.md first, then any generated template file such as SKILL.md.tmpl if present. Also inspect the repo tree for prompt-routing or helper conventions, because this repository is centered on the skill body rather than on supporting scripts or docs.
Run it like a review loop
Use the skill in short cycles: inspect, patch, verify, repeat. Ask it to show what it changed and why, and prefer one issue class per pass, such as typography, spacing, or interaction latency. That keeps the review focused and makes regressions easier to spot.
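Ordinary git hygiene keeps this loop honest: one commit per issue class, inspected before it lands. A minimal, self-contained sketch of that discipline (repo path, file name, CSS change, and commit messages are all illustrative, and the agent invocation appears only as a comment since the skill runs inside your agent):

```shell
set -e
repo=$(mktemp -d)                               # throwaway repo for the sketch
cd "$repo"
git init -q
git config user.email "reviewer@example.com"    # illustrative identity
git config user.name  "reviewer"
git commit -q --allow-empty -m "baseline before review"

# (agent) "design-review: fix spacing issues only in card.css"
echo ".card { margin: 8px; }" > card.css        # stand-in for the agent's edit
git add card.css
git diff --stat --cached                        # inspect what this pass changed
git commit -q -m "design-review pass 1: spacing only"
git log --oneline                               # one atomic, labeled pass per issue class
```

Keeping each pass in its own commit means a regression can be bisected to a single issue class instead of an undifferentiated pile of edits.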
design-review skill FAQ
Is design-review only for final polish?
No. The design-review skill works for live-site polish, codebase cleanup, and UX audit-style checks where visual quality already exists but needs tightening. It is less useful for early concept ideation than for fixing and validating an implemented interface.
Do I need to be a designer to use it?
No, but you do need to describe the screen and the problem clearly. Non-designers get better results when they specify what feels wrong, who the user is, and what success looks like, instead of asking for a vague redesign.
How is this different from a normal prompt?
A normal prompt may produce suggestions; design-review is oriented toward finding issues in code, changing them, and checking the outcome. If you only want advice, the skill may be more than you need; if you want measurable UI repair, it is a better fit.
When should I not use design-review?
Skip it when the task is purely strategic, brand-level, or content-only with no interface to inspect. It is also a weaker fit if you cannot access the codebase or cannot verify screenshots after changes, because the skill’s value comes from iterative validation.
How to Improve design-review skill
Start with a specific review target
The best design-review results come from a narrow target: one page, one component, or one user flow. Give the skill the viewport, device context, and primary concern, such as “desktop settings page, focus on alignment and scannability” or “mobile pricing card, focus on tap clarity.”
Tell it what to optimize for
If you care most about consistency, accessibility, conversion clarity, or interaction speed, say so up front. That helps the design-review skill choose between competing fixes, especially when a change improves one area but weakens another.
Watch for common failure modes
Common weak inputs are “make it cleaner,” “improve the design,” and “audit everything.” Those prompts invite broad, low-signal output. Stronger inputs name the defect class, the component, and the tolerance for change, which reduces unnecessary edits and improves the design-review usage loop.
Iterate with evidence
After the first pass, ask for the remaining top issues, the exact files changed, and any screenshot-based regressions still visible. If the result is close but not enough, refine the brief with a sharper constraint: “keep layout unchanged,” “do not alter color palette,” or “only fix hierarchy and spacing.”
