audit
by pbakaus

The audit skill runs a technical UX Audit on frontend work, checking accessibility, performance, theming, responsive behavior, and anti-patterns. It produces scored findings, P0-P3 severity labels, and an action plan, and it requires setup through related impeccable skills before it runs.
This skill scores 76/100, which makes it a solid directory listing candidate for users who want a structured frontend quality audit rather than a generic review prompt. The repository gives clear intent, scope, and output expectations around accessibility, performance, theming, responsive design, and anti-pattern scoring, but adoption still requires some guesswork because execution depends on other skills and there are no concrete examples or supporting resources.
- Strong triggerability: the description clearly targets accessibility checks, performance audits, and technical quality reviews.
- Useful operational structure: it defines a 5-dimension diagnostic scan with 0-4 scoring and P0-P3 severity reporting.
- Good agent leverage: it explicitly tells the agent to audit rather than fix, which makes the workflow reusable as a handoff step.
- Dependency risk: it requires invoking $frontend-design and possibly $teach-impeccable before proceeding.
- Limited concrete execution help: no install command, examples, scripts, or referenced files reduce confidence for first-time adopters.
Overview of audit skill
What the audit skill does
The audit skill runs a technical UX Audit on implemented frontend work and turns findings into a structured report. It checks measurable quality across accessibility, performance, theming, responsive behavior, and implementation anti-patterns, then scores each area and labels issues by severity from P0 to P3.
Who should install audit
This audit skill is best for frontend engineers, design engineers, UX engineers, and AI agents reviewing a page, component, or feature before release. It is especially useful when you want a repeatable audit instead of a vague “review this UI” prompt.
The real job-to-be-done
Most users do not need general feedback. They need an audit that:
- focuses on code-backed issues
- separates critical defects from polish
- avoids fixing things prematurely
- leaves a handoff-ready report for later implementation work
That is the core value here: a technical quality review you can run before asking another skill or agent to make changes.
What makes this audit different from a generic prompt
The main differentiator is scope discipline. The skill explicitly treats audit as a technical review, not a visual taste critique. It asks for a diagnostic scan across five dimensions, uses a consistent scoring model, and expects severity-based reporting with an action plan. That makes outputs easier to compare across pages and easier to turn into follow-up tasks.
Key adoption caveat
This skill depends on prior context. Its own instructions require invoking $frontend-design first and, if design context is still missing, running $teach-impeccable before the audit. If you skip that preparation, output quality will drop because the audit relies on shared design principles and context-gathering rules.
How to Use audit skill
audit install and setup context
Install the audit skill from the pbakaus/impeccable repository in your skills environment:
npx skills add pbakaus/impeccable --skill audit
Because this skill lives under .codex/skills/audit, the practical install decision is less about dependencies and more about workflow fit. You should expect to use it inside an environment that supports skill invocation and related skills from the same repository.
Read this file first
Start with:
SKILL.md
That file contains nearly all of the behavior that matters: prerequisites, audit scope, scoring, and expected reporting style. There are no visible helper scripts or reference files in this skill folder, so most of the implementation guidance is in the main skill document itself.
Mandatory prerequisite before running audit
Do not call audit cold. The skill says to invoke $frontend-design first because it contains the design principles, anti-patterns, and the context-gathering protocol used by this audit. If no design context exists yet, run $teach-impeccable before the audit.
In practice, the sequence is:
- gather design and product context
- establish what page or component is being reviewed
- run audit
- use the report to drive fixes with another task or skill
What input the audit skill needs
The audit skill works best when you give it a concrete target plus review context. Strong inputs usually include:
- the exact page, route, component, or flow
- the code location or files involved
- intended device targets
- framework or stack details if relevant
- known constraints such as legacy CSS, design system limits, or performance budgets
- whether the review is pre-release, regression-focused, or exploratory
A weak request is “audit my app.” A strong request is “run an audit for the checkout page on mobile and desktop, focusing on accessibility, loading behavior, and responsive breakpoints.”
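If it helps to treat the request as structured data, the sketch below captures the fields a strong request covers. The shape and field names are hypothetical, for your own prompt preparation; the skill itself does not define an input format.

```typescript
// Hypothetical shape of a well-scoped audit request; field names and the
// example path are illustrative, not defined by the audit skill.
interface AuditRequestBrief {
  target: string;                 // exact page, route, component, or flow
  codeLocation: string[];         // files or directories involved
  deviceTargets: string[];        // e.g. ["mobile Safari", "desktop Chrome"]
  stack?: string;                 // framework or stack details, if relevant
  constraints?: string[];         // legacy CSS, design system limits, perf budgets
  reviewMode: "pre-release" | "regression" | "exploratory";
}

// The "strong request" from the paragraph above, expressed as a brief.
const checkoutAudit: AuditRequestBrief = {
  target: "checkout page",
  codeLocation: ["src/pages/checkout"],
  deviceTargets: ["mobile", "desktop"],
  reviewMode: "pre-release",
  constraints: ["focus on accessibility, loading behavior, responsive breakpoints"],
};
```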
Turn a rough goal into a usable audit prompt
A good audit usage prompt should name the target, define the boundary, and ask for structured output. For example:
- “Run the audit skill on the pricing page. Review accessibility, performance, theming consistency, responsive behavior, and implementation anti-patterns. Score each dimension 0-4, list P0-P3 issues, and end with an action plan. Do not fix anything yet.”
- “Use audit for the settings modal component. Check keyboard support, semantic structure, focus handling, contrast, theme token usage, and mobile layout failure points.”
This works better than a generic review prompt because it matches the skill’s reporting model.
What the audit actually checks
Based on the skill instructions, the audit covers five dimensions:
- accessibility
- performance
- theming
- responsive design
- anti-patterns
The accessibility section is the most explicit in the source and includes contrast, ARIA, keyboard navigation, semantic HTML, alt text, and form issues. That tells you the skill is implementation-minded and likely to produce concrete defects rather than abstract advice.
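To make “concrete defects” tangible, here is the kind of issue such a scan typically surfaces. The component is an invented example, not code from the skill or repository.

```tsx
import * as React from "react";

// Illustrative anti-pattern the audit would flag: a clickable div with no
// semantics, so it is not keyboard-reachable and not exposed as a control.
export const DeleteBad = ({ onDelete }: { onDelete: () => void }) => (
  <div className="btn" onClick={onDelete}>Delete</div>
);

// The direction a finding would point toward: a real button, which gets
// keyboard support, focus handling, and a role for free.
export const DeleteGood = ({ onDelete }: { onDelete: () => void }) => (
  <button type="button" onClick={onDelete}>Delete</button>
);
```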
Expected output format and why it matters
The value of this audit skill is not just the checklist. It is the output shape:
- dimension-by-dimension review
- 0-4 scoring per dimension
- P0-P3 severity labels
- actionable plan
That structure helps with triage. Teams can separate release blockers from backlog improvements without rereading the whole report.
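The skill does not publish a machine-readable schema, but if you want to normalize reports across audits, a shape like the following sketch (assumed, not taken from SKILL.md) mirrors the structure described above.

```typescript
// Assumed report shape: five dimensions, 0-4 scores, P0-P3 findings,
// and an action plan. Adapt to the prose report your audit run produces.
type Dimension =
  | "accessibility"
  | "performance"
  | "theming"
  | "responsive"
  | "anti-patterns";

type Severity = "P0" | "P1" | "P2" | "P3";

interface Finding {
  dimension: Dimension;
  severity: Severity;
  summary: string;
  evidence: string; // element, pattern, state, or behavior observed
}

interface AuditReport {
  target: string;
  scores: Record<Dimension, 0 | 1 | 2 | 3 | 4>;
  findings: Finding[];
  actionPlan: string[]; // ordered follow-up steps
}
```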
Best workflow for audit usage
A practical workflow looks like this:
- prepare design context with the required prerequisite skills
- choose one page, feature, or component
- provide implementation scope and constraints
- run the audit skill
- review scores and severities
- convert the action plan into tickets or a follow-up fixing prompt
This skill is most effective when run on bounded surfaces. If you try to audit an entire product in one pass, findings become shallow and prioritization degrades.
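As a sketch of the hand-off step, the function below sorts findings by severity so P0 and P1 items surface first when you cut tickets. It assumes the report shape sketched earlier; this is your own post-processing, not something the skill emits.

```typescript
// Assumes the Finding/Severity shapes sketched above; adjust field names
// to match the actual report text your audit run produces.
type Severity = "P0" | "P1" | "P2" | "P3";

interface Finding {
  severity: Severity;
  summary: string;
  evidence: string;
}

interface Ticket {
  title: string;
  body: string;
  priority: Severity;
}

const severityOrder: Severity[] = ["P0", "P1", "P2", "P3"];

export function findingsToTickets(findings: Finding[]): Ticket[] {
  return [...findings]
    .sort(
      (a, b) =>
        severityOrder.indexOf(a.severity) - severityOrder.indexOf(b.severity),
    )
    .map((f) => ({
      title: `[${f.severity}] ${f.summary}`,
      body: `Evidence: ${f.evidence}`,
      priority: f.severity,
    }));
}
```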
When to use audit for UX Audit work
Use audit for UX Audit when you need implementation evidence for UX quality problems. It is a strong fit for:
- release readiness reviews
- regression checks after a redesign
- comparing technical quality across pages
- identifying accessibility and responsive defects before user testing
- generating a defect list for another agent to fix
It is less suited to pure research questions like information architecture, messaging clarity, or visual brand exploration.
Boundaries and misfit cases
This is not a design critique skill and not a fixing skill. It documents issues rather than resolving them. If your real goal is “make this page look better,” install it only if you also want a technical defect inventory. If your goal is “rewrite the component now,” this audit step may be unnecessary unless quality risk is high.
audit skill FAQ
Is this audit skill beginner-friendly?
Yes, if you already know what surface you want reviewed. The skill gives a clear audit frame, but beginners may miss the prerequisite context step. If you ignore $frontend-design and $teach-impeccable when needed, the audit can become generic or inconsistent.
Do I need the whole impeccable repository?
For this skill, the main dependency is conceptual rather than file-heavy. The visible audit folder only exposes SKILL.md, but the instructions explicitly rely on other skills in the same repository. So you likely want repository-level access, not just this one file in isolation.
How is audit better than asking an AI to review my UI?
A normal prompt often mixes subjective design taste with technical defects. This audit skill enforces narrower scope, consistent dimensions, and scored output. That usually produces better triage, better comparability across audits, and less wasted time debating vague comments.
Can audit fix problems automatically?
No. The skill is designed to diagnose and report. That is a feature, not a limitation, if you want a clean handoff between review and implementation. Use the report to drive a separate fixing task.
What should I audit first?
Start with one high-impact surface:
- homepage hero and nav
- signup or checkout flow
- dashboard entry screen
- shared components like modals, forms, and tables
These areas expose accessibility, responsive, and performance issues quickly, making the first audit more useful.
When should I not use this audit skill?
Skip this audit if:
- you only want subjective design ideas
- you have no concrete implementation to inspect
- you need full product research rather than technical review
- you plan to ship a fast prototype and do not need scored reporting
How to Improve audit skill
Give the audit a tighter target
The fastest way to improve audit output is to narrow scope. Ask for one route, one flow, or one component family. “Audit the account deletion flow” will produce stronger findings than “audit the whole app.”
Provide the context the skill expects
Because this audit depends on frontend design context, feed it the missing background up front:
- user goal of the screen
- expected interaction model
- device priorities
- theme or design system rules
- business constraints
This reduces false positives and helps the audit judge anti-patterns against actual intent.
Ask for evidence-backed findings only
If you want a stronger audit guide in practice, explicitly request observable evidence. For example, ask the agent to cite the element, pattern, state, or behavior behind each finding. That keeps the report closer to implementation reality and easier to verify.
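For example, an evidence-backed finding reads like the object below; the selector, route, and color values are invented to show the level of specificity worth asking for.

```typescript
// Invented example of the specificity to request: each finding names the
// element, the observed state, and the measurable gap behind the claim.
const exampleFinding = {
  dimension: "accessibility",
  severity: "P1",
  summary: "Primary CTA fails contrast in dark mode",
  evidence:
    "button.cta on /pricing renders #6b6b6b text on a #1f1f1f background, " +
    "about 3.1:1 contrast, below the 4.5:1 minimum for normal text",
};
```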
Improve severity quality with release context
Severity labels get better when you define impact. Tell the audit whether the target is:
- public marketing page
- authenticated product UI
- checkout or conversion flow
- internal tool
- mobile-first experience
A keyboard trap in checkout should rank differently from a cosmetic spacing inconsistency in an admin screen.
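One way to encode that reasoning, as a rough sketch rather than anything the skill prescribes, is to bump or relax severity based on how critical the surface is before triage.

```typescript
// Rough sketch: adjust reported severity by surface criticality.
// The mapping is illustrative; the skill itself does not define one.
type Severity = "P0" | "P1" | "P2" | "P3";
type Surface = "checkout" | "marketing" | "product-ui" | "internal-tool";

const order: Severity[] = ["P0", "P1", "P2", "P3"];

function shift(severity: Severity, delta: number): Severity {
  const i = Math.min(order.length - 1, Math.max(0, order.indexOf(severity) + delta));
  return order[i];
}

export function weightSeverity(severity: Severity, surface: Surface): Severity {
  switch (surface) {
    case "checkout":
      return shift(severity, -1); // conversion flows: escalate one level
    case "internal-tool":
      return shift(severity, +1); // internal screens: relax one level
    default:
      return severity;
  }
}
```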
Common failure modes in audit usage
The most common problems are:
- skipping the mandatory prerequisite skills
- auditing too much surface area at once
- asking for fixes instead of diagnosis
- providing no device or viewport context
- treating subjective design preferences as technical defects
These issues usually lead to noisier reports, weaker prioritization, or mixed scope.
Stronger inputs that improve output quality
Better prompts include specifics like:
- “focus on keyboard navigation and forms”
- “treat mobile Safari as a priority”
- “check theme token consistency in dark mode”
- “flag only measurable anti-patterns”
- “score each dimension and end with top 5 fixes by impact”
These details improve the audit by guiding where depth matters most.
How to iterate after the first audit
After the first pass, do not rerun the exact same broad prompt. Instead:
- fix or shortlist the highest-severity issues
- rerun audit on the same bounded surface
- request deeper checks on the weakest-scoring dimension
- compare score changes and unresolved P0-P1 findings
This turns the audit skill into a repeatable quality gate rather than a one-off report.
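A small score diff like the one below makes that loop concrete. It assumes you record the 0-4 dimension scores from each run yourself; the skill reports them in prose rather than emitting this structure.

```typescript
// Assumes you keep the 0-4 dimension scores from each audit run yourself.
type Scores = Record<string, number>; // e.g. { accessibility: 2, performance: 3 }

export function compareRuns(previous: Scores, current: Scores): string[] {
  return Object.keys(current).map((dimension) => {
    const before = previous[dimension] ?? 0;
    const after = current[dimension];
    const delta = after - before;
    const trend = delta > 0 ? "improved" : delta < 0 ? "regressed" : "unchanged";
    return `${dimension}: ${before} -> ${after} (${trend})`;
  });
}

// compareRuns({ accessibility: 1, performance: 3 },
//             { accessibility: 3, performance: 2 })
// -> ["accessibility: 1 -> 3 (improved)", "performance: 3 -> 2 (regressed)"]
```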
Pair audit with follow-up implementation work
The audit skill is strongest when used as the diagnosis stage in a two-step workflow. First, generate the report. Then use that report as structured input for a separate implementation pass. This preserves the audit’s objectivity and prevents “review while editing” from hiding important defects.
