
critique

by pbakaus

The critique skill helps review pages, flows, and components through a structured UX audit workflow. It checks AI-slop signals, hierarchy, information architecture, cognitive load, heuristics, and persona-based friction, then turns findings into actionable feedback. Best used with frontend-design and teach-impeccable context.

Stars: 14.6k
Favorites: 0
Comments: 0
Added: Mar 30, 2026
Category: UX Audit
Install command: npx skills add pbakaus/impeccable --skill critique
Curation Score

This skill scores 78/100, which means it is a solid directory listing candidate for agents that need structured UX critique rather than a generic 'review this design' prompt. Repository evidence shows a substantive workflow with explicit trigger language, quantitative scoring references, persona-based testing, and concrete critique dimensions, though setup depends on other skills and the install/run path is not fully self-contained.

Strengths
  • Strong triggerability: frontmatter clearly says to use it when asked to review, critique, evaluate, or give feedback on a design or component.
  • Meaningful agent leverage: SKILL.md defines a multi-phase UX critique workflow with specific dimensions, scoring, and persona-based evaluation instead of a vague critique prompt.
  • Good supporting evidence: reference files for cognitive load, heuristics scoring, and personas make the critique more repeatable and actionable.
Cautions
  • Operational dependency risk: the skill requires invoking /frontend-design and possibly /teach-impeccable first, so it is not fully standalone.
  • Adoption clarity is only moderate: no install command and limited practical execution scaffolding may leave some users guessing about setup in a new environment.
Overview

Overview of critique skill

What the critique skill does

The critique skill is a structured UX and product design review workflow for evaluating a page, feature, or component as an actual user experience, not just as a visual mockup. It pushes the model to assess visual hierarchy, information architecture, emotional tone, cognitive load, usability heuristics, and persona-specific friction, and then turn those findings into actionable feedback.

Who should install critique

This critique skill is best for product designers, frontend engineers, founders, and AI builders who already have a screen, prototype, or shipped interface and want a sharper review than a generic “give feedback on this UI” prompt. It is especially useful when you need critique for UX Audit work, design QA before launch, or a second opinion on whether an interface feels generic, confusing, or heavy.

The real job to be done

Most users do not just want opinions. They want to know:

  • what is hurting the experience first
  • whether the UI feels derivative or “AI-generated”
  • which issues are cosmetic versus conversion-blocking
  • how to prioritize fixes without a full design team

This skill is built around that job. It frames critique as a decision tool, not a style commentary.

What makes this critique different

The strongest differentiator is its insistence on context and its “AI slop detection” step. Instead of jumping straight into surface-level feedback, it expects design context first and explicitly checks whether the interface shows common 2024–2025 AI-product patterns like generic card grids, glow-heavy dark themes, weak hierarchy, and template-looking composition.

It also goes beyond a single reviewer voice by combining:

  • design-director style critique
  • cognitive load analysis
  • heuristic scoring
  • persona-based testing

Important adoption caveat

critique is not fully standalone. The repository makes frontend-design a mandatory dependency for principles and context gathering, and says to run teach-impeccable first if design context does not already exist. If you want a zero-setup critique install, this dependency chain is the main thing to know before adopting.

How to Use critique skill

Install context before you rely on critique

This repository places critique under .claude/skills/critique, and the skill text explicitly depends on /frontend-design. In practice, critique usage works best when the broader impeccable skill set is installed, not when this folder is treated as an isolated prompt file.

If your skill runner supports GitHub skill installs, install from the repository and then confirm that critique, frontend-design, and teach-impeccable are all available.
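One quick way to confirm availability is to check that the expected skill folders exist. The `.claude/skills/critique` path comes from the repository; the sibling folder names for frontend-design and teach-impeccable are assumptions about the layout, so adjust them to match your checkout:

```python
import os

# Hypothetical availability check. The critique path is documented in the
# repo; the other two folder names are assumed to mirror the skill names.
REQUIRED_SKILLS = ["critique", "frontend-design", "teach-impeccable"]

def missing_skills(repo_root="."):
    """Return the required skills not found under .claude/skills/."""
    skills_dir = os.path.join(repo_root, ".claude", "skills")
    return [s for s in REQUIRED_SKILLS
            if not os.path.isdir(os.path.join(skills_dir, s))]
```

If the returned list is non-empty, install the full impeccable set before relying on critique.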

Read these files first

For a fast install decision, read:

  • SKILL.md
  • reference/cognitive-load.md
  • reference/heuristics-scoring.md
  • reference/personas.md

That path tells you almost everything important: prerequisites, review workflow, scoring model, and the lens used for user testing.

What input the critique skill needs

The critique skill performs much better when you provide:

  • the interface artifact: screenshot, mockup, URL, or component description
  • the area under review: page, flow, modal, dashboard, onboarding, settings, etc.
  • the primary user goal
  • the product context and audience
  • constraints: mobile/desktop, B2B/B2C, accessibility, conversion, technical limits

Without that context, the model can still comment on layout and aesthetics, but it cannot judge whether the interface is appropriate for the task.
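The input checklist above can be enforced before invoking the skill. This is a sketch with illustrative field names (they are not part of the skill's actual API); it simply refuses to build a prompt when required context is missing:

```python
# Illustrative field names; the skill itself does not define this schema.
REQUIRED_CONTEXT = ["artifact", "area", "user_goal", "product_context", "constraints"]

def build_critique_prompt(ctx: dict) -> str:
    """Assemble a context-rich critique request, failing fast on gaps."""
    missing = [f for f in REQUIRED_CONTEXT if not ctx.get(f)]
    if missing:
        raise ValueError(f"missing context: {', '.join(missing)}")
    return (
        f"Use the critique skill on {ctx['area']}.\n"
        f"Artifact: {ctx['artifact']}\n"
        f"Primary user goal: {ctx['user_goal']}\n"
        f"Product context: {ctx['product_context']}\n"
        f"Constraints: {ctx['constraints']}"
    )
```

Failing fast here is deliberate: a prompt with a hole in it produces exactly the generic commentary this skill is meant to avoid.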

Turn a rough ask into a strong critique prompt

Weak prompt:

  • “Critique this UI.”

Stronger critique usage:

  • “Use the critique skill on this onboarding flow. The product helps finance teams close books faster. Primary goal: get a first report generated in under 5 minutes. Audience: mid-market accounting teams. Constraint: desktop web app, dense data is acceptable but first-time clarity matters. Please evaluate AI-slop signals, hierarchy, cognitive load, heuristic score, and test it as a first-timer and power user.”

The stronger version works better because it gives the skill something to optimize for, not just something to react to.

Follow the repository's required preparation

The skill is explicit: run /frontend-design first and follow its context gathering protocol. If no design context exists, run /teach-impeccable before critique. That means the intended workflow is:

  1. gather design context
  2. understand what the interface is trying to accomplish
  3. run critique against that goal
  4. return prioritized feedback

If you skip step 2, the output often becomes generic because the model cannot distinguish intentional density from poor design.

Use critique for UX Audit work

For UX Audit use cases, do not ask only for “feedback.” Ask for:

  • top issues by severity
  • heuristic scoring summary
  • likely user drop-off points
  • persona-specific failure modes
  • concrete redesign recommendations

This pushes the result from commentary into audit-grade output that stakeholders can act on.
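Audit-grade output has enough structure to sort and act on. A minimal sketch of that shape, assuming a four-level severity scale (the skill's own scoring model may differ):

```python
from dataclasses import dataclass

# Assumed severity scale, most severe first; not the skill's own scale.
SEVERITY_ORDER = {"blocker": 0, "major": 1, "minor": 2, "cosmetic": 3}

@dataclass
class Finding:
    issue: str
    severity: str        # one of SEVERITY_ORDER
    persona: str         # e.g. "first-timer", "power user"
    recommendation: str  # concrete redesign suggestion

def prioritize(findings):
    """Order findings so the conversion-blocking issues surface first."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f.severity])
```

Capturing findings in a structure like this also makes before/after comparisons possible when you re-run critique on a revised design.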

What the workflow is really checking

Based on the repository, the critique skill is strongest when used to inspect:

  • AI-generated design sameness
  • visual hierarchy problems
  • cognitive overload
  • weak information architecture
  • unclear interaction patterns
  • usability heuristic gaps
  • mismatch between interface tone and user needs

That makes it a better fit for evaluating shipped or realistic UI than for brainstorming greenfield concepts.

Suggested critique usage workflow

A practical workflow:

  1. collect the screen and goal
  2. state user type and success criteria
  3. invoke critique on a specific area, not the whole product at once
  4. review the top 3 severe issues first
  5. ask for revised recommendations after clarifying constraints
  6. repeat for the next flow

Using the critique skill page-by-page or flow-by-flow usually gives higher signal than asking for a giant whole-product review in one pass.
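The flow-by-flow loop above can be sketched as a small driver. Here `run_critique` is a purely hypothetical stand-in for an actual skill invocation that returns findings with numeric severities (lower meaning more severe):

```python
# Sketch of scoped, iterative review; `run_critique` is hypothetical.
def review_flows(flows, run_critique, top_n=3):
    """Critique one scoped flow at a time, keeping the top issues per flow."""
    results = {}
    for flow in flows:
        findings = run_critique(flow)               # one narrow pass per flow
        findings.sort(key=lambda f: f["severity"])  # lower = more severe
        results[flow] = findings[:top_n]            # act on the top issues first
    return results
```

The point of the structure is the scoping: each pass sees one flow, so the output stays specific instead of degrading into a whole-product summary.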

How to scope the request well

Good scopes:

  • signup flow
  • pricing page
  • analytics dashboard
  • settings panel
  • empty state
  • mobile checkout

Poor scopes:

  • “the whole app”
  • “our design system”
  • “everything on the website”

The skill is detailed enough that oversized scope causes shallow output. Narrowing the review area improves specificity and prioritization.

Practical tips that improve output quality

To get better critique output, include:

  • one sentence on business goal
  • one sentence on user urgency
  • one sentence on what success looks like
  • any known constraints you do not want the model to “fix away”

Example:

  • “This page exists to get a team admin to invite coworkers immediately after signup. Speed matters more than education. We cannot remove required compliance messaging.”

That kind of input helps the skill separate real flaws from necessary complexity.

critique skill FAQ

Is critique better than a normal UX prompt?

Usually yes, if you want a repeatable review method. The value of the critique skill is not magic design taste; it is the built-in structure: prerequisite context, anti-pattern detection, heuristic scoring, cognitive load framing, and persona testing. A normal prompt may give decent opinions, but it is less consistent and easier to steer into generic praise.

Is critique beginner-friendly?

Mostly yes, but with one catch: the dependency on frontend-design and sometimes teach-impeccable. Beginners can still use critique, but they should expect to spend a few minutes understanding the intended workflow instead of dropping in a single prompt with no setup.

When is critique a bad fit?

Skip this critique skill when:

  • you need code generation more than design review
  • you only have a vague product idea, not an interface
  • you want brand strategy or copywriting first
  • you cannot provide any user or product context

It can still comment on visuals alone, but that is not where the skill is most differentiated.

Does critique only work for polished UI?

No. It can be useful on wireframes, rough mocks, and early components, especially for hierarchy and cognitive load. But persona testing and heuristic scoring become more credible when the interaction model is visible enough to assess.

Can I use critique for a single component?

Yes, if the component has a real job in context. A filter panel, modal, table, or form can all benefit from critique usage. Just explain where it appears, who uses it, and what they are trying to complete.

What should I expect as output?

A good critique output should give:

  • the main UX risks
  • severity or prioritization
  • specific reasons those issues matter
  • examples of what to change
  • a clear distinction between superficial polish and structural UX problems

If the result is mostly adjectives, the prompt likely lacked context.

How to Improve critique skill

Give critique a success metric, not just a screen

The fastest way to improve critique output is to state the intended user outcome. “Review this dashboard” is weaker than “Review this dashboard for whether a new manager can spot blockers in under 30 seconds.” Success metrics sharpen every later judgment.

Provide the audience and product maturity

The same interface may be:

  • right for expert operators in a dense B2B tool
  • wrong for first-time consumers
  • acceptable in internal tooling
  • weak for a premium customer-facing product

If you name the audience and maturity level, the critique skill can judge tradeoffs instead of defaulting to mainstream UX advice.

Ask for persona selection explicitly

The repository includes several personas, but not every persona is useful every time. Improve UX Audit results by saying which user types matter most, such as:

  • first-timer
  • power user
  • cautious admin

This prevents the output from spreading attention across irrelevant failure modes.

Force prioritization after the first pass

A common failure mode is a long list of observations with no decision value. After the first critique, ask:

  • “Which 3 issues most threaten task completion?”
  • “Which issue is most likely to reduce trust?”
  • “What should be fixed before launch versus later?”

This turns analysis into an action plan.

Supply constraints the model should respect

If the interface must remain:

  • data-dense
  • enterprise-looking
  • compliant
  • on-brand
  • mobile-first
  • low-engineering-effort

say so directly. Otherwise, critique may suggest cleaner but unrealistic redesigns.

Watch for generic “AI slop” over-correction

One strength of this critique skill is detecting generic AI-made patterns. But users should not overreact and remove every modern convention just to feel unique. The better question is whether the design is distinctive and appropriate, not merely different. Use the AI-slop detection step to identify lazy sameness, then validate fixes against usability.

Improve inputs with before-and-after iteration

Best practice:

  1. run critique on the current design
  2. apply or simulate 2–3 major changes
  3. run critique again on the revised version
  4. compare whether the main risks actually dropped

The skill becomes much more useful when used as an iterative design loop rather than a one-time verdict.
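The comparison in step 4 can be made concrete by diffing heuristic scores between passes. The dimension names below are illustrative; use whichever dimensions the skill's heuristics-scoring reference defines:

```python
# Sketch of the before/after comparison; dimension names are assumptions.
def score_delta(before: dict, after: dict) -> dict:
    """Per-dimension change; positive values mean the revision improved."""
    return {k: after[k] - before[k] for k in before}

def risks_dropped(before: dict, after: dict) -> bool:
    """True if no dimension regressed and at least one improved."""
    deltas = score_delta(before, after)
    return all(d >= 0 for d in deltas.values()) and any(d > 0 for d in deltas.values())
```

A regression on any dimension is a signal to revisit the change rather than declare the redesign a win.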

Common reasons critique output feels weak

Usually one of these is missing:

  • no user goal
  • no product context
  • too much scope
  • no constraints
  • no artifact quality good enough to inspect
  • asking for “thoughts” instead of a defined review structure

When those are fixed, the critique output becomes much more actionable.

A strong prompt template for better critique usage

Use a prompt like this:

  “Use the critique skill on [area].
  Product: [what the product does]
  Audience: [who this is for]
  Primary task: [what the user needs to do]
  Success metric: [what success looks like]
  Constraints: [platform, compliance, technical, brand]
  Review for: AI-slop signals, hierarchy, cognitive load, heuristics, and 2 relevant personas.
  Output: top issues, severity, why they matter, and concrete fixes.”

That template aligns closely with how the repository wants critique to operate and usually produces better signal than an open-ended request.

Ratings & Reviews

No ratings yet