wcag-audit-patterns
by wshobson
wcag-audit-patterns is a structured WCAG 2.2 audit skill for accessibility reviews. Use it to combine automated findings with manual checks, prioritize issues by severity and conformance level, and generate actionable remediation guidance for pages, flows, and components.
This skill scores 72/100, which means it is list-worthy for directory users who want a substantial WCAG 2.2 audit reference, but they should expect a documentation-heavy guide rather than a tightly operationalized skill with tooling or executable workflow support. The repository gives enough clarity on scope and likely use cases to justify installation, yet users will still need to supply their own audit process, tools, and evidence format.
- Clear triggerability: the description and 'When to Use This Skill' section explicitly cover audits, WCAG fixes, accessible components, and compliance prep.
- Substantial real content: the long SKILL.md includes WCAG 2.2 concepts, violation categories, code fences, and remediation-oriented guidance rather than a thin placeholder.
- Good agent leverage as a reference: it appears to consolidate automated testing, manual verification, and remediation patterns in one place, reducing generic-prompt guesswork during accessibility reviews.
- Operational clarity is limited by packaging: there are no support files, scripts, references, or install instructions, so execution depends on the agent/user already knowing what tools and workflow to use.
- Trust and adoption signals are moderate: the structural scan flags a placeholder marker, and the excerpt shows guidance-heavy prose without linked standards references or reusable artifacts.
Overview of wcag-audit-patterns skill
What the wcag-audit-patterns skill does
The wcag-audit-patterns skill is a structured prompt framework for running WCAG 2.2 accessibility audits and turning findings into remediation guidance. It is best for people doing UX audits, QA, design reviews, engineering handoffs, or compliance preparation who want more than a generic “check accessibility” prompt.
Who should use it
Use wcag-audit-patterns if you need to:
- review a page, flow, or component against WCAG 2.2
- combine automated findings with manual verification
- prioritize issues by impact and conformance level
- produce fix guidance developers and designers can actually act on
It fits accessibility specialists, UX auditors, product designers, frontend engineers, and teams preparing for ADA, Section 508, or VPAT-related work.
The real job-to-be-done
Most users do not need a theory lesson on WCAG. They need a repeatable way to answer:
- what is likely broken
- what must be manually checked
- what severity to assign
- how to fix it without vague advice
That is where the wcag-audit-patterns skill is useful: it gives an audit-oriented structure centered on WCAG levels, POUR principles, common violation patterns, and remediation framing.
What makes it different from an ordinary prompt
A generic accessibility prompt usually produces broad advice. wcag-audit-patterns is more useful when you want the model to:
- inspect a page or feature through a known audit lens
- separate blocker issues from lower-severity issues
- map findings to recognizable WCAG problem categories
- suggest concrete remediation patterns, not just “improve accessibility”
What is in scope and what is not
This skill is strong on audit reasoning and issue pattern coverage. It is weaker as a turnkey toolchain because the repository provides only SKILL.md and no helper scripts, rules, or reference files.
That means wcag-audit-patterns for UX Audit is best used as a guided audit framework, not as a complete automated scanner or legal certification workflow.
How to Use wcag-audit-patterns skill
Install context for wcag-audit-patterns
The upstream skill does not publish its own install command inside SKILL.md, so use your normal skill-loading method for the wshobson/agents repository. If your environment supports Skills CLI, the common pattern is:
npx skills add https://github.com/wshobson/agents --skill wcag-audit-patterns
If your setup loads skills from a local clone, point it to:
plugins/accessibility-compliance/skills/wcag-audit-patterns
Read this file first
Start with:
plugins/accessibility-compliance/skills/wcag-audit-patterns/SKILL.md
There are no support folders or reference documents in this skill path, so nearly all usable guidance is in that single file. This matters for adoption: you can evaluate it quickly, but you should not expect hidden implementation detail elsewhere.
What input the skill needs to work well
Output quality from wcag-audit-patterns depends heavily on input quality. Give it:
- the URL, screen, or component being audited
- page purpose and main user tasks
- target conformance level, usually WCAG 2.2 AA
- device or viewport context
- known stack details if relevant, such as React, custom widgets, modal system, or design system components
- evidence sources, such as screenshots, HTML snippets, audit logs, axe output, or user flow steps
Without these, the model will default to common patterns and may miss context-specific failures.
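The inputs above can be bundled into a single context object before prompting. This is a hypothetical shape for illustration; none of these field names come from the skill itself:

```javascript
// Hypothetical audit-context bundle; field names and values are
// illustrative, not mandated by wcag-audit-patterns.
const auditContext = {
  target: "https://example.com/checkout",
  purpose: "Complete payment for items in cart",
  conformance: "WCAG 2.2 AA",
  viewport: "desktop 1280px and mobile 390px",
  stack: ["React", "custom modal system"],
  evidence: [
    "screenshot: payment-form.png",
    "axe output: axe-checkout.json",
  ],
};
```

Pasting a structured bundle like this into the prompt makes it harder for the model to silently skip a dimension such as viewport or conformance level.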
Best prompt shape for a real audit
A strong invocation is not “audit this site for accessibility.” Better is:
- identify the page or feature
- state the standard and level
- ask for automated-check candidates and manual checks separately
- request issue prioritization
- ask for remediation steps tied to each finding
Example prompt structure:
“Use wcag-audit-patterns to audit our checkout page against WCAG 2.2 AA. Focus on keyboard access, form labels, error handling, focus order, and color contrast. I’ve attached screenshots plus the HTML for the payment section. Separate likely issues from items requiring manual verification. For each issue, provide severity, likely WCAG area, user impact, and a concrete fix.”
Turn a rough goal into a complete prompt
If your rough goal is “check our modal,” expand it into:
- what the modal is for
- how it opens and closes
- whether focus is trapped
- whether it contains forms, tables, or custom controls
- whether mobile and desktop behavior differs
This improves output because many serious WCAG issues depend on interaction flow, not just static markup.
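To make the focus-trap question above concrete, the tab-wrapping behavior a modal needs can be reduced to a pure function. This is a hypothetical sketch for reasoning about the interaction, not code from the skill or from any library:

```javascript
// Hypothetical sketch of modal focus-trap logic, reduced to a pure
// function so it can be reasoned about (and tested) without a DOM.
// Given the number of focusable elements in the modal, the currently
// focused index, and whether Shift was held with Tab, return the index
// focus should move to next.
function nextFocusIndex(focusableCount, currentIndex, shiftKey) {
  if (focusableCount === 0) return -1; // nothing focusable: a finding in itself
  if (shiftKey) {
    // Shift+Tab from the first element wraps to the last
    return currentIndex <= 0 ? focusableCount - 1 : currentIndex - 1;
  }
  // Tab from the last element wraps back to the first
  return currentIndex >= focusableCount - 1 ? 0 : currentIndex + 1;
}
```

An audit prompt can then ask the model to verify exactly this wrapping behavior in both directions, rather than a vague “check the modal is keyboard accessible.”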
Suggested workflow for wcag-audit-patterns for UX Audit
A practical workflow is:
- Ask for a pre-audit checklist based on the page type.
- Run your automated scanner separately if available.
- Feed the output, screenshots, and code snippets into the skill.
- Ask for manual verification steps for items automation cannot confirm.
- Ask for a remediation plan grouped by blocker, serious, and moderate issues.
- Re-run the skill on revised code or updated screenshots.
This workflow gets more value from the skill than a single-pass prompt.
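As a sketch of steps 3 and 5 above, scanner output can be pre-grouped into the blocker/serious/moderate buckets before handing it to the skill. The input shape (a violations array with an `impact` field) follows axe-core's JSON output; the bucket names and the mapping are this article's triage labels, not part of the skill:

```javascript
// Group axe-core style violations into triage buckets before prompting.
// `impact` values ("critical", "serious", "moderate", "minor") follow
// axe-core's output; the bucket mapping here is an assumption.
function triageViolations(violations) {
  const buckets = { blocker: [], serious: [], moderate: [] };
  const impactToBucket = {
    critical: "blocker",
    serious: "serious",
    moderate: "moderate",
    minor: "moderate", // fold minor issues into moderate for triage
  };
  for (const v of violations) {
    const bucket = impactToBucket[v.impact] ?? "moderate";
    buckets[bucket].push(v.id);
  }
  return buckets;
}
```

Feeding the skill pre-bucketed findings keeps its remediation plan aligned with the severity groups your team actually triages by.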
What the skill is especially good at
The source content strongly emphasizes:
- WCAG conformance levels
- the POUR framework
- common violations by impact
- remediation-oriented audit output
That makes it particularly helpful for:
- first-pass audit structuring
- prioritizing common accessibility defects
- generating developer-ready fix guidance
- reviewing interactive UI patterns like forms and custom widgets
What it will not do for you automatically
The skill does not include:
- browser automation
- code analyzers
- reusable checklists in separate files
- legal sign-off logic
- product-specific decision rules
So installing wcag-audit-patterns is simple, but you still need your own scanner, testing process, or human review if you want high-confidence compliance work.
Common high-value inputs
The most useful artifacts to provide are:
- axe, Lighthouse, or similar scan output
- DOM snippets for problematic controls
- screenshots with focus state visible
- keyboard interaction steps
- form validation behavior
- video or notes for dynamic UI states like menus, dialogs, and carousels
These inputs help the skill distinguish between likely violations and verification-only concerns.
Practical prompt patterns that improve output
Ask for one of these formats:
- “audit findings table with severity, impact, fix”
- “manual verification checklist by component”
- “top 10 blockers before release”
- “developer remediation tasks with acceptance criteria”
- “design review notes for WCAG 2.2 AA”
These output shapes are more actionable than open-ended summaries.
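The first format above — a findings table with severity, impact, and fix — can also be generated locally once the model returns structured findings. This is a hypothetical helper, assuming a findings array you define yourself:

```javascript
// Hypothetical helper: render audit findings as the "severity, impact,
// fix" table suggested above, ready to paste into a handoff document.
// The field names (finding, severity, impact, fix) are assumptions.
function findingsTable(findings) {
  const header = "| Finding | Severity | User impact | Fix |\n|---|---|---|---|";
  const rows = findings.map(
    (f) => `| ${f.finding} | ${f.severity} | ${f.impact} | ${f.fix} |`
  );
  return [header, ...rows].join("\n");
}
```

Asking the model to emit findings as JSON and rendering the table yourself keeps the format stable across audit runs.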
wcag-audit-patterns skill FAQ
Is wcag-audit-patterns good for beginners
Yes, if you already know the page or product you are reviewing. The skill gives useful structure around WCAG 2.2, conformance levels, and common issue categories. It is not a full accessibility course, so beginners may still need outside references for edge cases and formal interpretations.
Is this better than a normal accessibility prompt
Usually yes for audit work. The main value of the wcag-audit-patterns guide is not hidden data; it is the audit framing. It helps the model produce findings in a more systematic way, especially when you ask for severity, manual checks, and remediation.
Does it replace automated scanners
No. It complements them. Automated tools catch only part of WCAG coverage, while this skill is better at structuring the broader review and suggesting manual checks for keyboard use, semantics, labels, focus management, and interaction quality.
Is it suitable for legal or procurement compliance work
It can support preparation, especially for ADA, Section 508, or VPAT-related review, but it should not be treated as legal certification. Use it to organize evidence and remediation, not as the sole basis for compliance claims.
When should I not use wcag-audit-patterns
Skip it if you need:
- a code-level linter or CI integration
- formal legal interpretation
- a complete accessibility knowledge base in the repo
- specialized native app guidance outside web WCAG audit patterns
It is most effective for web-focused audit reasoning, not end-to-end compliance automation.
Does it work for components, not just full pages
Yes. It is often more useful on components because you can provide tighter evidence: markup, interaction sequence, screenshots, and expected behavior. Good component candidates include modals, tabs, menus, forms, tables, and custom controls.
How to Improve wcag-audit-patterns skill
Give narrower audit targets
The biggest improvement lever is scope control. Instead of “audit our dashboard,” ask for:
- one page template
- one journey such as sign-up or checkout
- one component family such as date pickers or dialogs
Narrower prompts produce more reliable findings and remediation.
Provide evidence, not just descriptions
wcag-audit-patterns performs better when you attach evidence. Strong inputs include:
- HTML for the affected region
- screenshots showing visible labels and focus states
- scanner output with rule names
- notes on keyboard behavior
- screen reader observations if available
Evidence reduces guesswork and improves fix specificity.
Ask for manual checks explicitly
A common failure mode is treating the first output as complete. Many important WCAG issues require human verification. Ask the skill to separate:
- likely detectable issues
- assumptions
- manual checks still required
This makes the result more trustworthy.
Request remediation with acceptance criteria
Do not stop at “how to fix.” Ask for:
- the implementation change
- why it matters to users
- acceptance criteria for QA
- any regressions to watch for
This turns the output into something a designer, engineer, and QA reviewer can all use.
Improve prioritization quality
If everything comes back as equally important, ask the skill to re-rank by:
- user impact
- task blockage
- legal/compliance risk
- ease of remediation
- frequency across templates
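The re-ranking dimensions above can be sketched as a simple weighted score. The weights here are illustrative assumptions, not values from the skill; tune them to your team's triage policy:

```javascript
// Hypothetical re-ranking sketch: score each issue on the dimensions
// listed above (each rated 0-3) and sort descending. Weights are
// illustrative, not prescribed by wcag-audit-patterns.
function rerankIssues(issues) {
  const weights = {
    userImpact: 3,
    taskBlockage: 3,
    complianceRisk: 2,
    easeOfFix: 1, // easier fixes rank slightly higher
    frequency: 2,
  };
  const score = (issue) =>
    Object.entries(weights).reduce(
      (sum, [key, w]) => sum + w * (issue[key] ?? 0),
      0
    );
  return [...issues].sort((a, b) => score(b) - score(a));
}
```

You can either apply such a score yourself or hand the weighting scheme to the model and ask it to justify each issue's rating on every dimension.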
This is especially useful when using the wcag-audit-patterns skill in a backlog or release triage workflow.
Re-run after fixes with before-and-after context
The second pass is where this skill becomes more valuable. Provide:
- original finding
- revised markup or screenshot
- what changed
- what remains uncertain
Then ask whether the fix resolves the issue fully or introduces new accessibility risks.
Combine with your own standards
If your team has design system rules, coding standards, or accessibility definitions of done, include them in the prompt. The repository itself is lightweight, so adding local standards is the best way to make wcag-audit-patterns usage feel tailored rather than generic.
Watch for overconfident outputs
The skill is helpful, but it can still overstate certainty, especially without code or interactive context. If a finding depends on runtime behavior, ask the model to mark confidence level and note what must be verified in-browser or with assistive tech.
Use it to create repeatable audit templates
One of the best ways to improve wcag-audit-patterns in practice is to turn successful prompts into internal templates:
- page audit template
- component audit template
- remediation handoff template
- regression verification template
That gives you consistency even though the repo itself does not include extra support files.
