
screen-reader-testing

by wshobson

screen-reader-testing is a practical skill for UX audits and accessibility QA. Learn how to use it to test web apps with VoiceOver, NVDA, and JAWS, prioritize browser and platform coverage, and review forms, ARIA behavior, focus management, and dynamic announcements.

Stars: 32.5k
Favorites: 0
Comments: 0
Added: Mar 30, 2026
Category: UX Audit
Install Command
npx skills add https://github.com/wshobson/agents --skill screen-reader-testing
Curation Score

This skill scores 76/100, which means it is a solid directory listing candidate: users get a clearly scoped, substantial guide for when to invoke screen-reader testing, and an agent would likely perform better with it than from a generic accessibility prompt. The main limitation is that it is documentation-only, so adopters should expect to supply their own tooling setup and execution environment.

Strengths
  • Strong triggerability: the description and "When to Use" section clearly frame screen reader compatibility, ARIA validation, form accessibility, dynamic announcements, and navigation testing.
  • Substantial operational content: the skill includes major screen readers, testing priorities, modes, and extensive structured guidance across many sections rather than a thin placeholder.
  • Useful agent leverage: concrete coverage recommendations like NVDA + Firefox and VoiceOver + Safari give agents better default testing plans than a generic prompt would.
Cautions
  • No install command, scripts, references, or support files are provided, so execution depends on the user's own screen-reader setup and prior platform knowledge.
  • Repository signals show limited explicit workflow/constraint metadata, which may leave some edge-case decisions and environment assumptions implicit.
Overview

Overview of screen-reader-testing skill

What the screen-reader-testing skill does

The screen-reader-testing skill is a practical testing guide for checking how a web app behaves with real screen readers, not just with automated accessibility scanners. It is designed for UX audits, accessibility QA, ARIA validation, form testing, and debugging cases where a page looks correct visually but fails for assistive technology users.

Who should install it

This screen-reader-testing skill is best for:

  • UX auditors who need a repeatable manual accessibility workflow
  • Frontend engineers debugging keyboard and announcement issues
  • Designers validating interaction patterns before release
  • QA teams adding assistive technology checks to acceptance testing
  • Teams preparing for WCAG-focused reviews where automated checks are not enough

The real job-to-be-done

Most users do not need a generic accessibility lecture. They need a way to answer questions like:

  • Which screen reader and browser combinations matter first?
  • How do I test forms, dialogs, menus, and dynamic updates realistically?
  • What should I listen for while navigating?
  • How do I turn a vague “check accessibility” request into a focused UX audit?

The screen-reader-testing skill helps structure that manual testing work.

Why this skill is useful over a generic prompt

A generic prompt might list accessibility best practices. This skill is more useful when you need an execution-oriented screen-reader-testing guide with:

  • concrete platform coverage priorities
  • distinction between major screen readers such as VoiceOver, NVDA, JAWS, TalkBack, and Narrator
  • testing focus on reading mode vs interaction mode
  • practical use cases like forms, ARIA behavior, dynamic announcements, and navigation

What matters before you adopt it

The main value is decision support and workflow structure, not automation. This skill does not replace running the actual screen reader on the target platform. Install it if you want better test planning, better prompts for an agent, and fewer blind spots during a screen reader compatibility review.

How to Use screen-reader-testing skill

Install context for screen-reader-testing

Install the skill from the wshobson/agents repository into your skills-enabled environment:

npx skills add https://github.com/wshobson/agents --skill screen-reader-testing

If your agent environment uses a different skill loader, adapt the install step to that tool. The important part is pulling the screen-reader-testing skill from the plugins/accessibility-compliance/skills/screen-reader-testing path in the repository.

Read this file first

Start with:

  • SKILL.md

This repository slice only exposes SKILL.md, so the adoption decision is mostly about whether its testing framework matches your workflow. You are not getting helper scripts or reference files here, so expect to supply your own app context, target flows, and platform constraints.

What input the skill needs to work well

The screen-reader-testing skill performs much better when you provide:

  • the product type: marketing site, SaaS app, dashboard, checkout, form-heavy workflow
  • target user flow: sign in, search, checkout, create record, submit form
  • target platforms: Windows, macOS, iOS, Android
  • browser constraints: Safari, Firefox, Chrome, Edge
  • component types involved: modal, tabs, menu button, combobox, live region, data table
  • known issues or suspicions: missing labels, broken tab order, duplicate announcements, silent updates

Weak input:

  • “Test my site for screen readers.”

Strong input:

  • “Use the screen-reader-testing skill to review our signup flow for NVDA + Firefox and VoiceOver + Safari. Focus on field labels, error announcements, password requirements, focus movement after validation, and whether success feedback is announced.”

Choose platform coverage instead of testing everything

The skill gives a useful priority model. In practice, start with:

  1. NVDA + Firefox on Windows
  2. VoiceOver + Safari on macOS
  3. VoiceOver + Safari on iOS

Expand to JAWS + Chrome, TalkBack + Chrome, and Narrator + Edge only when the product risk, user base, or compliance requirement justifies broader coverage. This saves time and keeps a UX audit realistic.
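The priority model above can be written down as a small data structure, for example to drive a test matrix. The screen reader and browser pairs come from the skill's guidance; the `CoveragePair` shape, the `tier` convention, and the `minimalCoverage` helper are an illustrative sketch, not part of the skill itself.

```typescript
// Screen reader / browser pairs, ordered by the skill's priority model.
// tier is an illustrative convention: 1 = always test,
// 2 = add only when risk, user base, or compliance justifies it.
interface CoveragePair {
  screenReader: string;
  browser: string;
  platform: string;
  tier: 1 | 2;
}

const coverage: CoveragePair[] = [
  { screenReader: "NVDA", browser: "Firefox", platform: "Windows", tier: 1 },
  { screenReader: "VoiceOver", browser: "Safari", platform: "macOS", tier: 1 },
  { screenReader: "VoiceOver", browser: "Safari", platform: "iOS", tier: 1 },
  { screenReader: "JAWS", browser: "Chrome", platform: "Windows", tier: 2 },
  { screenReader: "TalkBack", browser: "Chrome", platform: "Android", tier: 2 },
  { screenReader: "Narrator", browser: "Edge", platform: "Windows", tier: 2 },
];

// Pick the minimum set for an audit; expand only when justified.
function minimalCoverage(pairs: CoveragePair[], includeTier2 = false): CoveragePair[] {
  return pairs.filter((p) => p.tier === 1 || includeTier2);
}
```

Starting from `minimalCoverage(coverage)` keeps the audit to the three tier-1 pairs; passing `true` opts into the broader matrix.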

Turn a rough goal into a better prompt

A good screen-reader-testing usage prompt should name:

  • the flow
  • the assistive tech pair
  • the interaction types
  • the expected output format

Example:

“Use the screen-reader-testing skill for a UX audit of our checkout flow. Prioritize NVDA + Firefox and VoiceOver + Safari. Test browse reading, form entry, validation errors, shipping method radio groups, promo code updates, and payment confirmation announcements. Return findings by severity, reproduction steps, expected screen reader behavior, and likely markup causes.”

That prompt is better because it defines scope, coverage, and reporting structure.
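Because the four ingredients are fixed, a prompt like that can be assembled mechanically. The `buildAuditPrompt` helper below is a hypothetical sketch of that structure, not something the skill ships:

```typescript
// Assemble a screen-reader-testing prompt from the four ingredients:
// flow, AT/browser pairs, interaction types, and expected output format.
interface AuditRequest {
  flow: string;
  pairs: string[];          // e.g. "NVDA + Firefox"
  interactions: string[];   // e.g. "form entry", "validation errors"
  outputFormat: string;     // e.g. "findings by severity with repro steps"
}

function buildAuditPrompt(req: AuditRequest): string {
  return [
    `Use the screen-reader-testing skill for a UX audit of ${req.flow}.`,
    `Prioritize ${req.pairs.join(" and ")}.`,
    `Test ${req.interactions.join(", ")}.`,
    `Return ${req.outputFormat}.`,
  ].join(" ");
}
```

Filling the template forces you to decide scope, coverage, and reporting structure before the agent starts.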

Use the skill for the right kinds of issues

This screen-reader-testing guide is especially well matched for:

  • ARIA implementation validation
  • form label and error behavior
  • dynamic content announcement checks
  • focus management reviews
  • navigation and landmark usability
  • testing whether custom widgets behave like native controls

It is less useful as a standalone tool for color contrast, visual layout review, or full legal compliance mapping unless you combine it with other accessibility checks.

Practical workflow for a screen-reader-testing UX audit

A strong workflow looks like this:

  1. Identify top user journeys.
  2. Pick the minimum screen reader coverage.
  3. Test reading order and page structure first.
  4. Test interactive controls next.
  5. Trigger all validation and dynamic update states.
  6. Record what is announced, skipped, duplicated, or confusing.
  7. Convert observations into code-facing remediation notes.

This order matters because many teams jump into component details before checking headings, landmarks, page titles, and reading flow.
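The workflow above expands naturally into per-session test plans: each journey is tested against each coverage pair, always in the same phase order. The `planSessions` sketch below assumes that framing; the names and shapes are illustrative.

```typescript
// Phases follow the skill's ordering: structure first, then controls,
// then validation and dynamic update states.
const phases = [
  "reading order & page structure",
  "interactive controls",
  "validation & dynamic updates",
] as const;

interface TestSession {
  journey: string;
  pair: string; // screen reader + browser, e.g. "NVDA + Firefox"
  phases: readonly string[];
}

// Expand user journeys against the chosen coverage into test sessions.
function planSessions(journeys: string[], pairs: string[]): TestSession[] {
  const sessions: TestSession[] = [];
  for (const journey of journeys) {
    for (const pair of pairs) {
      sessions.push({ journey, pair, phases });
    }
  }
  return sessions;
}
```

Two journeys against two pairs yields four sessions, which is a quick sanity check that the audit scope is still realistic.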

What to listen for during testing

The skill is most effective when you actively capture:

  • whether headings create a meaningful outline
  • whether landmarks help orientation
  • whether links and buttons have distinct names
  • whether form fields expose labels, instructions, and errors
  • whether state changes are announced
  • whether focus lands where users expect after opening dialogs, submitting forms, or changing views

These observations produce more actionable findings than a simple pass/fail list.

Understand screen reader modes before testing widgets

The source material distinguishes between reading mode and interaction mode. That matters because many widgets appear “fine” while reading but break during actual use. Ask the agent to test both:

  • content discovery in browse or virtual mode
  • direct interaction in focus or forms mode

This is especially important for menus, comboboxes, modal dialogs, date pickers, and custom dropdowns.

Best way to use the output with engineers

Ask for findings in a format engineers can act on:

  • issue summary
  • affected screen reader and browser
  • exact reproduction path
  • observed announcement or behavior
  • expected behavior
  • likely technical cause such as missing name, wrong role, broken focus management, or absent live region

That turns the screen-reader-testing skill from a general guide into a debugging aid.
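One way to keep that format consistent is to define a record shape for findings up front. The `Finding` interface below mirrors the bullet list above; the field names and the example finding are illustrative, not a format the skill mandates.

```typescript
// One engineer-facing finding, mirroring the report fields listed above.
interface Finding {
  summary: string;
  screenReader: string;
  browser: string;
  reproductionPath: string[]; // exact steps to reproduce
  observed: string;           // what the screen reader announced or did
  expected: string;           // what it should have announced or done
  likelyCause:
    | "missing name"
    | "wrong role"
    | "broken focus management"
    | "absent live region"
    | "other";
}

const example: Finding = {
  summary: "Error summary not announced after submit",
  screenReader: "NVDA",
  browser: "Firefox",
  reproductionPath: [
    "Open signup form",
    "Submit with empty fields",
    "Listen for announcement after submit",
  ],
  observed: "Silence; focus stays on the submit button",
  expected: "Error summary is announced and receives focus",
  likelyCause: "absent live region",
};
```

Constraining `likelyCause` to a short list of common technical causes nudges each finding toward a code-facing hypothesis rather than a vague complaint.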

screen-reader-testing skill FAQ

Is screen-reader-testing enough for accessibility testing?

No. The screen-reader-testing skill covers an important manual testing layer, but it should sit alongside keyboard testing, semantic HTML review, automated checks, and design-level accessibility review. Use it when you specifically care about assistive technology experience.

Is this skill beginner-friendly?

Yes, with limits. It gives useful testing priorities and concepts, but it assumes you can access or simulate real testing on the relevant platforms. Beginners can use it to structure a review, but they may still need separate guidance on operating NVDA, VoiceOver, or JAWS efficiently.

When is screen-reader-testing a poor fit?

Skip this skill if your need is mainly:

  • automated linting
  • code scanning
  • non-web product accessibility
  • visual-only UX review
  • a full WCAG conformance matrix

In those cases, screen-reader-testing can support the process, but it should not be your only method.

How is this different from an ordinary accessibility prompt?

Ordinary prompts often produce broad advice. The screen-reader-testing install decision makes sense when you want a reusable testing frame centered on real screen reader behavior, coverage priority, and practical audit flow. It reduces guesswork about what to test first and which combinations matter most.

Can I use screen-reader-testing for a design review?

Yes, but only indirectly. It is strongest when applied to implemented interfaces or realistic prototypes where navigation order, labels, announcements, and state changes can be evaluated. For early design review, use it to pressure-test interaction patterns before development.

How to Improve screen-reader-testing skill

Give the skill narrower scope for better results

The fastest way to improve screen-reader-testing output quality is to reduce ambiguity. Ask it to review one flow, one platform set, and one class of issues at a time. “Audit our app” is too broad. “Test our account recovery flow for VoiceOver + Safari focusing on heading structure, field instructions, error messaging, and confirmation announcements” is much stronger.

Provide expected behavior, not just current UI

If you tell the agent what users should be able to do, findings become sharper. Include expectations such as:

  • focus should move into the modal on open
  • error summary should be announced after submit
  • loading completion should be exposed without forcing re-navigation

This helps the screen-reader-testing guide distinguish implementation bugs from harmless variation.
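Expectations like those can be captured as small machine-readable records before testing begins, so every finding compares observed behavior against a stated expectation. The `Expectation` shape below is an illustrative sketch:

```typescript
// Expected behaviors stated up front, so findings can compare
// observed screen reader behavior against a declared expectation.
interface Expectation {
  trigger: string;  // user action that should produce the behavior
  behavior: string; // what the screen reader user should experience
}

const expectations: Expectation[] = [
  { trigger: "modal opens", behavior: "focus moves into the modal" },
  { trigger: "form submit with errors", behavior: "error summary is announced" },
  { trigger: "loading completes", behavior: "completion is exposed without forcing re-navigation" },
];
```

Handing a list like this to the agent alongside the flow description makes "works as intended" testable instead of implicit.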

Include component inventory and custom widget details

Custom UI controls are where screen reader issues cluster. If your page uses:

  • custom select menus
  • tab systems
  • expandable sections
  • drag-and-drop alternatives
  • live-updating dashboards

mention them explicitly. The skill can then target higher-risk patterns instead of spending time on low-risk static content.

Ask for failure modes and edge states

Do not limit testing to the happy path. To improve the usefulness of screen-reader-testing usage, request checks for:

  • empty results
  • invalid input
  • session timeout warnings
  • disabled controls
  • async updates
  • route changes in single-page apps

These states often expose silent failures that standard demos miss.

Iterate after the first output

After the first pass, ask follow-up questions like:

  • “Which findings are most likely caused by incorrect accessible names?”
  • “Which issues are specific to VoiceOver versus cross-screen-reader?”
  • “What should we retest after fixing focus management?”
  • “Which findings block task completion versus just causing confusion?”

This turns a static audit into a prioritized remediation workflow.

Pair screen-reader-testing with evidence capture

For teams, the best improvement is documenting:

  • exact page URL or build
  • screen reader and browser version
  • navigation path
  • keystrokes or gestures used
  • observed announcement text

Even if the skill itself is text-only, asking for this structure makes the output much easier to verify and hand off.

Know the main limitation before relying on it

The biggest constraint is that the screen-reader-testing skill is guidance-heavy and repository-light. There are no bundled scripts, references, or automation helpers in this skill folder. Its value depends on how well you supply context and how rigorously you execute the manual test plan.

Upgrade your prompt from generic to audit-ready

A high-quality final prompt usually includes:

  • product and flow
  • target screen reader/browser pairs
  • priority components
  • states to test
  • output format
  • severity model

Example:

“Use the screen-reader-testing skill to perform a UX audit of our billing settings flow. Prioritize NVDA + Firefox and VoiceOver + Safari. Test heading navigation, landmark clarity, form labels, inline validation, success and error announcements, dialog focus trapping, and dynamic plan-price updates. Return a table with issue, severity, affected AT/browser, reproduction steps, observed behavior, expected behavior, and likely code-level cause.”

That is the level of specificity that makes the skill materially more useful than a generic accessibility request.

Ratings & Reviews

No ratings yet