audit
by pbakaus
The audit skill runs structured technical UI reviews across accessibility, performance, theming, responsive behavior, and anti-patterns. It returns scored findings, P0-P3 severity rankings, and an action plan for a specific page, feature, or component. Best used after design context is gathered.
This skill scores 68/100, which means it is acceptable to list for directory users who want a reusable technical audit workflow, but they should expect some setup dependency and execution guesswork. The repository gives a real multi-step audit rubric with scoring, severity levels, and an actionable report format, yet it relies on other skills and provides little concrete operational scaffolding beyond the written checklist.
- Strong triggerability: the frontmatter clearly says to use it for accessibility checks, performance audits, or technical quality reviews.
- Substantive workflow content: the skill defines a systematic audit across five dimensions and produces scored findings with P0-P3 severity ratings and an action plan.
- Useful agent leverage over a generic prompt: it constrains the task to measurable implementation issues and explicitly says to audit rather than fix.
- Adoption depends on other skills: it mandates invoking /frontend-design and possibly /teach-impeccable before proceeding.
- Limited operational evidence: there are no support files, examples, commands, or repo-specific references to reduce execution ambiguity.
Overview of audit skill
What audit does
The audit skill runs a structured technical UI review for a page, feature, or component and returns a scored report instead of loose observations. It focuses on measurable implementation quality across accessibility, performance, theming, responsive behavior, and frontend anti-patterns, then ranks findings by P0 to P3 severity with an action plan.
Who should install this audit skill
This audit skill is best for frontend teams, design engineers, UX engineers, and product builders who want a repeatable UX Audit workflow without manually inventing criteria every time. It is especially useful when you need a code-aware review, not a subjective design critique.
The real job-to-be-done
Most users do not just want “feedback.” They want to answer questions like: Is this page shippable? What is broken first? Which issues are accessibility blockers versus cleanup? What should another agent or engineer fix next? audit is built for that triage job.
Why this skill is different from a generic prompt
A normal prompt might produce broad advice. audit is more decision-friendly because it:
- enforces a multi-area diagnostic scan
- uses explicit scoring across five dimensions
- separates issue discovery from issue fixing
- outputs prioritization with P0-P3 severity
- expects implementation evidence rather than taste-based critique
Important dependency before adoption
The biggest adoption blocker is context: this skill requires design context gathering first. Its own instructions say to invoke /frontend-design, and if no design context exists yet, to run /teach-impeccable before the audit. If you skip that, output quality and consistency will drop.
How to Use audit skill
Install context for audit
The repository does not expose a dedicated install command inside SKILL.md, so use your normal skill installation flow for GitHub-hosted Claude skills. For example:
npx skills add https://github.com/pbakaus/impeccable --skill audit
After install, verify the skill is available as audit and note that it is marked user-invocable: true, so you can call it directly.
Read this file first
Start with .claude/skills/audit/SKILL.md. In this repository, that file contains nearly all of the usable logic: prerequisites, scope, dimensions, scoring model, and output expectations. There are no supporting rules/, resources/, or helper scripts to lean on, so your success depends on reading the skill file carefully.
Understand the prerequisite workflow
Before using the audit skill, do this in order:
- Gather design and product context with /frontend-design.
- If that context does not exist yet, run /teach-impeccable.
- Only then run audit on the target page, feature, or component.
This matters because the audit is technical but still needs context to judge anti-patterns, theming consistency, and implementation quality accurately.
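In a Claude session, that sequence looks roughly like the sketch below. The slash commands come from the skill's own instructions; the target argument "checkout page" is purely an illustrative example.

```text
/frontend-design        # gather design and product context first
/teach-impeccable       # only if no design context exists yet
/audit checkout page    # then run the audit on one specific target
```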
Know what to pass as input
The skill exposes an argument hint of:
[area (feature, page, component...)]
Good inputs are specific audit targets such as:
- checkout page
- mobile navigation drawer
- pricing cards component
- settings form validation flow
Weak inputs like "the app" or "the UI" usually create shallow output because the audit scope becomes too broad.
What the audit skill checks
The audit workflow scans five dimensions:
- accessibility
- performance
- theming
- responsive design
- anti-patterns
It then scores each dimension from 0-4 and compiles a report. If you are doing an audit for UX Audit purposes, this structure is helpful because it converts broad UX quality concerns into implementation-backed findings.
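SKILL.md does not document how the per-dimension scores roll up, but a simple way to picture the 0-4 model is the sketch below. The five dimension names come from the skill; the plain-average weighting and the 0-100 scaling are assumptions for illustration only, not the skill's actual formula.

```python
# Hypothetical sketch: combine per-dimension 0-4 audit scores into one
# overall percentage. The averaging here is an assumption, not the
# skill's documented math.

DIMENSIONS = ["accessibility", "performance", "theming",
              "responsive design", "anti-patterns"]

def overall_score(scores: dict[str, int]) -> float:
    """Average the 0-4 dimension scores and scale to 0-100."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    total = sum(scores[d] for d in DIMENSIONS)
    return round(total / (4 * len(DIMENSIONS)) * 100, 1)

scores = {"accessibility": 2, "performance": 3, "theming": 4,
          "responsive design": 3, "anti-patterns": 2}
print(overall_score(scores))  # 14 of 20 points -> 70.0
```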
What this skill does not do
audit is for diagnosis, not remediation. It is explicitly designed to document issues rather than fix them. Install it if you want a repeatable quality review. Do not install it expecting automatic code changes, refactors, or visual redesign proposals in the same step.
Turn a rough request into a strong audit prompt
A weak prompt:
Run audit on my homepage
A stronger prompt:
Run audit on the homepage hero and signup flow. Focus on keyboard access, semantic structure, responsive layout between 320px and 1440px, theme token consistency, and obvious performance risks. Return scores by dimension plus P0-P3 findings and a fix order.
Why this is better:
- defines scope
- names the user journey
- highlights likely risk areas
- asks for the skill’s native output format
Best workflow for audit usage
A practical audit usage flow is:
- choose one page or component
- provide product and design context first
- run audit
- review scores and severity
- convert P0/P1 findings into implementation tasks
- rerun audit after fixes
This makes the skill useful as a gate in QA, release review, or design system cleanup.
What good output should look like
A useful audit result should include:
- per-dimension scores
- concrete implementation findings
- severity ranking from P0 to P3
- actionable next steps
- evidence tied to code or UI behavior
If the output is mostly generic best practices with little prioritization, the problem is usually weak context or too-large scope.
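As a rough illustration of those ingredients (not the skill's exact template, which SKILL.md defines), a healthy report might be shaped like this; every finding below is invented for the example:

```text
Audit: checkout page

Scores (0-4)
  accessibility      2
  performance        3
  theming            4
  responsive design  3
  anti-patterns      2

Findings
  P0  Submit button unreachable by keyboard (no focus styles)
  P1  Payment form overflows the viewport below 375px
  P2  Hard-coded hex colors bypass theme tokens in the summary card
  P3  Redundant wrapper divs around each form field

Action plan
  1. Restore keyboard access to submit (P0)
  2. Fix narrow-viewport overflow in the payment form (P1)
```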
Repository-reading path for adopters
If you are evaluating whether to install this audit skill, the fastest reading path is:
- frontmatter in SKILL.md for invocation and purpose
- the MANDATORY PREPARATION section
- the Diagnostic Scan section
- each scoring section
- final reporting structure
That path tells you quickly whether the skill fits your workflow better than a generic audit prompt.
Practical tips that improve audit quality
- audit one area at a time
- name the device ranges or layout states that matter
- mention whether the UI uses a design system or theme tokens
- specify critical flows such as sign-in, checkout, or forms
- ask for evidence-backed findings only
- request no fixes if you want pure triage, or ask for a separate remediation step afterward
audit skill FAQ
Is audit a good fit for a UX Audit?
Yes, if your UX Audit needs implementation-level evidence. audit for UX Audit is strongest when you care about accessibility gaps, responsive breakage, theme inconsistency, and frontend quality issues that affect user experience. It is weaker for brand strategy, information architecture, or qualitative usability research.
How is this different from asking an AI to review a page?
A generic review may mix taste, product advice, and code guesses. The audit skill is narrower and more reliable for technical quality review because it uses defined dimensions, scoring, and severity. That structure makes the output easier to hand off to engineering.
Is this audit skill beginner-friendly?
Moderately. The workflow is simple, but the prerequisite context step is easy to miss. Beginners can use it, but they will get better results if they understand basic frontend concepts like WCAG issues, semantic HTML, responsive behavior, and design tokens.
When should I not use audit?
Do not use audit when you need:
- user research synthesis
- visual brand critique
- conversion-copy review
- direct code fixes in the same step
- a full-app audit with no clear target
In those cases, another skill or a narrower prompt is usually better.
Does audit require access to code?
It is best when the agent can inspect implementation, because the skill is framed as a code-level audit. It can still reason from rendered UI descriptions, but confidence and specificity will be lower.
Is audit enough by itself for release sign-off?
Usually not. It is a strong technical review layer, but not a substitute for runtime testing, browser/device checks, analytics review, or human QA. Treat it as a structured audit pass, not the only quality gate.
How to Improve audit skill
Give narrower scope for better audit results
The most common failure mode is over-broad scope. Asking for an audit of an entire product tends to flatten priority and reduce evidence quality. Better: audit one flow, one page, or one component family at a time.
Provide context before running audit
Because the skill requires /frontend-design and sometimes /teach-impeccable, the easiest way to improve results is to satisfy that dependency fully. Share:
- target users
- primary task on the page
- expected responsive breakpoints
- design system rules
- known constraints or intentional tradeoffs
Ask for evidence, not opinions
If the first output feels vague, tighten the next prompt:
- Cite the element or pattern causing each issue
- Separate verified implementation issues from inferred risks
- Do not include subjective visual preferences
This keeps the audit grounded and easier to trust.
Improve the severity ranking
Not all findings deserve equal attention. To make P0-P3 more useful, tell the skill what counts as severe in your context, such as:
- legal or WCAG exposure
- task completion blockers
- mobile-only breakage
- regressions in shared components
- issues affecting checkout or auth flows
Use a two-pass audit workflow
A high-quality pattern is:
- first pass: broad diagnostic scan
- second pass: deep dive into the lowest-scoring dimension
For example, if accessibility scores worst, rerun the audit focused only on keyboard flow, semantics, forms, and contrast. This usually gives more actionable remediation planning than one giant pass.
Pair audit with a follow-up fixing step
Since audit does not fix problems, improvement often comes from chaining workflows:
- run audit
- extract P0/P1 issues
- assign each to a repair prompt, engineer, or code-editing agent
- rerun audit after changes
- rerun audit after changes
This turns the audit skill from a reporting tool into a quality loop.
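To sketch the triage step of that loop, suppose you transcribe the report's findings into simple records (the data shape below is an assumption; the skill emits prose, not structured output):

```python
# Hypothetical sketch: pull P0/P1 findings out of a transcribed audit
# report so they can be assigned as implementation tasks. The record
# shape and the example findings are assumed for illustration.

findings = [
    {"severity": "P0", "title": "Submit button unreachable by keyboard"},
    {"severity": "P2", "title": "Hard-coded colors in summary card"},
    {"severity": "P1", "title": "Payment form overflows below 375px"},
    {"severity": "P3", "title": "Redundant wrapper divs"},
]

def triage(findings, max_severity="P1"):
    """Keep findings at or above the cutoff (P0 is most severe).

    P0 < P1 < P2 < P3 compares correctly as plain strings.
    """
    return sorted(
        (f for f in findings if f["severity"] <= max_severity),
        key=lambda f: f["severity"],
    )

for f in triage(findings):
    print(f["severity"], f["title"])
```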
Strengthen inputs for responsive and theming checks
If responsive or theming quality matters, say so explicitly. Good additions include:
- Check behavior at 320px, 768px, and 1440px
- Check dark mode and token consistency
- Flag hard-coded colors, spacing drift, and component state inconsistencies
Without that specificity, the audit may mention these areas but not examine them deeply.
Calibrate audit output for handoff
If the report will be used by engineers, ask for:
- issue title
- severity
- affected area
- why it matters
- suggested fix direction
- validation method after fix
That format improves adoption because the audit output becomes backlog-ready instead of just informative.
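One hypothetical shape for a backlog-ready finding, with all field names and values invented for illustration rather than mandated by the skill:

```yaml
title: Submit button unreachable by keyboard
severity: P0
area: checkout form
why_it_matters: Blocks task completion for keyboard and assistive-tech users
fix_direction: Use a real button element and restore visible focus styles
validation: Tab through the full checkout flow and submit without a mouse
```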
Common signs the first audit run was weak
Rerun the audit if you see:
- high-level advice without examples
- no scoring by dimension
- no P0-P3 prioritization
- findings that read like design critique rather than technical review
- no mention of the target area you provided
Those are usually prompt or context problems, not proof that the skill is bad.
Best way to iterate after the first report
After the first audit, do not simply ask "anything else?" Instead, choose one of these:
- Expand only the P0 and P1 issues
- Re-audit the form flow for accessibility only
- Convert findings into an engineering checklist
- Challenge the performance score with stronger evidence
- Rerun audit after fixes and compare score changes
That kind of iteration gets much more value from the audit skill than repeating the same broad request.
