critique
by pbakaus

The critique skill helps teams run structured UX reviews on pages, features, and components. It evaluates hierarchy, cognitive load, heuristics, and persona-based risks, then turns findings into actionable fixes. Best used after /frontend-design with clear screenshots, goals, and user context.
This skill scores 78/100, which means it is a solid directory listing candidate for agents that need structured UX critique rather than a generic feedback prompt. The repository gives clear trigger language, a substantial critique framework, and supporting references for scoring, cognitive load, and persona testing, though adoption still depends on another prerequisite skill and some operational interpretation.
- Strong triggerability: the frontmatter explicitly says to use it when asked to review, critique, evaluate, or give feedback on a design or component.
- Material agent leverage: it defines a multi-dimensional UX critique workflow with quantitative scoring, persona-based testing, and actionable feedback expectations.
- Good supporting evidence: bundled references for cognitive load, heuristics scoring, and personas make the critique more repeatable than a generic prompt.
- Requires dependency chaining: SKILL.md mandates invoking /frontend-design and possibly /teach-impeccable before proceeding.
- Execution is text-heavy and policy-like; there are no scripts, examples, or quick-start output templates to reduce agent guesswork further.
Overview of critique skill
What the critique skill does
The critique skill is a structured UX review workflow for evaluating a page, feature, or component as a designed experience, not just as working UI. It pushes the model to inspect visual hierarchy, information architecture, emotional tone, cognitive load, and usability heuristics, then turn that into concrete feedback instead of vague opinions.
Who should install critique
This critique skill is best for designers, frontend engineers, product teams, and AI builders who regularly want fast UX audit-style feedback on interfaces. It is especially useful when you have a screenshot, a live page, or a built component and want a sharper review than a generic “what do you think of this design?” prompt.
Best job-to-be-done
Use critique when the real task is: “Tell me why this interface works or fails, what users will struggle with, and what I should change first.” It is a good fit for design reviews, pre-launch checks, AI-generated UI cleanup, and critique for UX Audit workflows where prioritization matters more than aesthetics alone.
What makes this skill different
The strongest differentiator is that critique is opinionated. It does not stop at broad design commentary. It explicitly checks for “AI slop” patterns, uses heuristic scoring, and recommends persona-based testing. That makes the output more diagnostic and more repeatable than ordinary critique prompts.
Important dependency before use
This skill is not standalone in practice. Its own instructions require the /frontend-design skill first, and that skill’s context-gathering protocol must be followed. If no design context exists yet, the repository says to run /teach-impeccable before critique. That dependency is the main adoption blocker to understand up front.
How to Use critique skill
Install context and repo path
The critique skill lives in .agents/skills/critique inside pbakaus/impeccable. If you use a skill loader, install from that repository and select the critique skill. If your environment supports direct repo-based skill loading, point it at:
pbakaus/impeccable (skill: critique)
If you manually inspect before installing, start here:
- .agents/skills/critique/SKILL.md
- .agents/skills/critique/reference/cognitive-load.md
- .agents/skills/critique/reference/heuristics-scoring.md
- .agents/skills/critique/reference/personas.md
Read this before your first critique install
Do not treat this as a drop-in prompt snippet. The skill assumes prior design context. The repository makes /frontend-design mandatory and says to follow its context-gathering protocol before running critique. If you skip that, output quality will drop because the model lacks goals, audience, and interface intent.
What input the critique skill needs
For strong critique usage, provide:
- the interface area being reviewed
- screenshots or a clear visual description
- the product goal
- target users
- primary task the user is trying to complete
- constraints such as platform, brand, accessibility, or conversion goals
Minimal input works, but the critique gets much better when the model knows what success looks like.
The best invocation pattern
The skill’s argument hint is [area (feature, page, component...)]. In practice, invoke it with a specific scope such as:
- critique checkout page
- critique onboarding modal
- critique dashboard sidebar
- critique pricing page for UX Audit
Specific scopes produce more actionable feedback than “critique my app”.
Turn a rough request into a strong critique prompt
Weak request:
- “Critique this UI.”
Better request:
- “Critique this settings page for UX Audit. The goal is to help first-time users enable notifications without confusion. Audience is non-technical SMB owners. Prioritize visual hierarchy, cognitive load, and whether the main action is obvious.”
Why this works:
- it names the user
- it names the task
- it names the success criterion
- it tells the skill what to prioritize
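A reusable version of that pattern can be sketched as a fill-in template. The bracketed fields are placeholders to replace with your own context, not part of the skill's syntax:

```
critique [area]. The goal is [what success looks like].
Audience: [who the users are].
Primary task: [what the user is trying to complete].
Prioritize: [e.g. visual hierarchy, cognitive load, main-action visibility].
```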
Suggested workflow in practice
A practical critique guide flow is:
- Gather context with /frontend-design.
- State the product goal and user task.
- Pass the exact screen, feature, or component to critique.
- Ask for findings grouped by severity.
- After the first review, ask for revised recommendations constrained by your engineering or brand limits.
This sequence is more reliable than asking for critique and redesign in one shot.
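As a rough session sketch (exact invocation syntax depends on your agent environment, so treat the commands and wording as illustrative):

```
/frontend-design                  (follow its context-gathering protocol)
"Goal: first-run users enable notifications. Audience: SMB owners."
critique onboarding modal
"Group findings by severity."
"Revise recommendations: mobile-only, no new components this sprint."
```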
What the skill evaluates well
Based on the repository, the critique skill is strongest at:
- spotting generic AI-generated UI patterns
- assessing hierarchy and clarity
- identifying cognitive overload
- applying heuristic scoring
- pressure-testing flows through relevant personas
That makes it useful for triage: what looks polished but still fails users.
How to use the reference files well
The reference files matter more than their size suggests.
reference/cognitive-load.md helps the model distinguish task complexity from bad design complexity, which leads to better recommendations.
reference/heuristics-scoring.md adds a concrete 0–4 scoring frame across Nielsen heuristics, useful when you want a comparable review across multiple screens.
reference/personas.md is best used selectively. Pick 2–3 personas that match the actual audience rather than forcing all five every time.
Good prompts for critique for UX Audit
If your goal is critique for UX Audit, ask for a structured output such as:
- top 5 usability risks
- heuristic scores with brief evidence
- likely failure points for chosen personas
- highest-priority fixes first
- what to keep unchanged
That format produces a review you can hand to a team without rewriting it.
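One way to request that structure in a single prompt (the screen name and personas are placeholders for your own context):

```
Critique the [screen] for UX Audit. Return:
1. Top 5 usability risks
2. Heuristic scores (0–4) with brief evidence
3. Likely failure points for [chosen personas]
4. Highest-priority fixes first
5. What to keep unchanged
```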
Common misuse that lowers output quality
The biggest misuse is asking for design feedback with no interface, no screenshot, and no task context. Another common mistake is using the critique skill to generate new UI from scratch. This skill is better at evaluating and prioritizing issues than inventing full design systems.
critique skill FAQ
Is critique beginner-friendly?
Yes, but only if you provide basic context. A beginner can get value quickly by sharing one screen and one user goal. Without that, the critique skill may sound authoritative while missing the real product problem.
Is this better than a normal critique prompt?
Usually yes. The value is not just wording; it is the built-in review frame: AI slop detection, cognitive load analysis, heuristic scoring, and persona testing. That gives critique usage more consistency than a generic prompt.
Do I need the frontend-design skill first?
Effectively yes. The repository marks it as mandatory. If you want the critique install to be useful on day one, plan to use it with /frontend-design rather than in isolation.
What kind of artifacts work best?
Best inputs are screenshots, rendered pages, prototypes, or detailed interface descriptions with clear task context. Code alone is less useful unless the UI behavior is described or visible.
When should I not use critique?
Do not use critique when you need:
- deep code-level implementation review
- accessibility compliance auditing by itself
- analytics-based conversion diagnosis
- a full redesign with no existing interface to inspect
It is a UX-focused evaluator, not a replacement for specialized audits.
Can critique compare multiple design options?
Yes. It should work well for side-by-side review if you ask for comparative scoring and tradeoffs. Give the same task and audience context for each option so the comparison stays fair.
How to Improve critique skill
Give the model the interface goal, not just the screen
The single best way to improve critique results is to explain what the interface is trying to accomplish. The repository explicitly asks for this. A beautiful screen can still fail if the main task is unclear, and the skill is designed to catch that.
Ask for severity, evidence, and fixes
If you want output that leads to action, ask the critique skill to format findings as:
- issue
- why it matters
- evidence in the UI
- severity
- recommended fix
This prevents fluffy commentary and makes reviews easier to prioritize.
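A single finding in that format might look like this (the issue itself is invented for illustration):

```
Issue: Primary CTA competes with two equally weighted secondary buttons
Why it matters: First-time users cannot identify the main action
Evidence: Three same-size, same-color buttons above the fold
Severity: High
Recommended fix: Demote the secondary actions to text links
```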
Choose personas that match the real audience
Persona testing becomes much stronger when you select only the relevant archetypes. For example:
- first-time user for onboarding
- impatient power user for dense dashboards
- anxious user for financial or destructive flows
Running every persona on every review can dilute the critique.
Improve weak prompts with concrete constraints
A stronger critique guide prompt includes constraints such as:
- mobile-only
- brand cannot change colors
- must keep current information architecture
- engineering team can only make low-effort fixes this sprint
Constraints force more realistic recommendations.
Watch for the main failure mode
The main failure mode is broad, stylish feedback that does not connect to actual user tasks. If the first output sounds generic, ask follow-ups like:
- “Which issue most likely blocks task completion?”
- “What would confuse a first-time user in the first 10 seconds?”
- “Which recommendation has the highest impact with lowest implementation effort?”
Use heuristic scoring carefully
Scores are useful for comparison and prioritization, but they can create false precision. Ask for short evidence under each score. That keeps the critique skill anchored in visible UI problems instead of arbitrary numbers.
Run critique in two passes
A high-quality workflow is:
- first pass: diagnose issues
- second pass: refine solutions under real constraints
Separating diagnosis from redesign improves clarity and usually produces more trustworthy recommendations.
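A two-pass prompt pair could look like this (the wording is illustrative, not prescribed by the skill):

```
Pass 1: critique checkout page. Diagnose issues only; do not propose
        redesigns yet.
Pass 2: Now propose fixes for the top issues. Constraints: mobile-only,
        brand colors fixed, low-effort changes this sprint.
```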
Improve outputs after the first critique
After the first run, feed back:
- corrected assumptions about users
- screenshots of revised states
- constraints the model ignored
- which findings your team agrees or disagrees with
The critique skill gets better when treated as an iterative reviewer, not a one-shot judge.
Use it where it has the strongest edge
This critique skill is most valuable on interfaces that look polished but may hide UX problems: AI-generated landing pages, dashboards, onboarding flows, settings panels, and dense feature surfaces. That is where its anti-pattern detection and cognitive load framing add the most information gain.
Know the tradeoff before adopting
The tradeoff is simple: critique gives more rigorous UX feedback than ordinary prompting, but only if you supply context and accept its opinionated framework. If you want a lightweight, ad hoc opinion, a normal prompt may be faster. If you want a repeatable critique for UX Audit, this skill is the better fit.
