
browser-testing-with-devtools

by addyosmani

browser-testing-with-devtools helps agents test and debug real browser behavior through Chrome DevTools MCP. Use it to inspect the DOM, capture console errors, analyze network requests, profile performance, and verify fixes in a live browser.

Stars: 18.7k
Favorites: 0
Comments: 0
Added: Apr 21, 2026
Category: Test Automation
Install Command
npx skills add addyosmani/agent-skills --skill browser-testing-with-devtools
Curation Score

This skill scores 82/100, which means it is a solid directory listing candidate: users get a clear trigger, concrete browser-debugging workflows, and enough operational detail to help an agent do better than a generic prompt when working on real browser issues via Chrome DevTools MCP.

Strengths
  • Strong triggerability: the description and "When to Use" section clearly scope it to browser-rendered apps, UI debugging, console/network analysis, performance checks, and verification in a live browser.
  • Good operational clarity: it includes Chrome DevTools MCP setup instructions and documents available tool capabilities, reducing guesswork about how the agent should inspect runtime behavior.
  • Meaningful agent leverage: the skill explicitly bridges static code analysis with live browser evidence, helping agents verify fixes, inspect DOM/runtime state, and test visual output instead of relying on assumptions.
Cautions
  • Adoption depends on an external prerequisite: users need Chrome DevTools MCP configured, and the repository provides no install command field or bundled support files beyond SKILL.md.
  • The skill appears documentation-only, with no scripts, references, or example assets, so some advanced scenarios may still require user interpretation rather than turnkey execution.
Overview

Overview of browser-testing-with-devtools skill

What browser-testing-with-devtools does

The browser-testing-with-devtools skill helps an agent test and debug real browser behavior through Chrome DevTools MCP instead of relying on static code reading alone. It is built for cases where the truth lives in runtime signals: rendered DOM, console errors, network traffic, layout shifts, screenshots, and performance metrics.

Who should install this skill

This browser-testing-with-devtools skill is best for frontend engineers, full-stack developers, QA engineers, and AI-assisted builders working on web apps, design systems, dashboards, auth flows, or any feature that must be validated in an actual browser. It is a poor fit for backend-only repos, CLI tools, or libraries with no browser runtime.

Why it is better than a generic prompt

A normal prompt can ask an agent to “check the UI,” but browser-testing-with-devtools gives the agent a concrete workflow anchored in Chrome DevTools MCP. The practical difference is less guesswork: the agent can verify what rendered, inspect failing selectors, read console output, review requests, and confirm whether a fix actually changed browser behavior.

Main adoption constraints

The main blocker is setup, not concept. You need a working Chrome DevTools MCP server available to the agent. This skill also assumes you can run the target app locally or access a test environment. If your workflow cannot expose a live browser session, the value of browser-testing-with-devtools drops sharply.

How to Use browser-testing-with-devtools skill

Install context and prerequisite setup

The skill itself ships no separate install command beyond the directory one above; the key prerequisite is configuring Chrome DevTools MCP. The skill’s setup example adds this to .mcp.json or your Claude Code MCP settings:

{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["@anthropic/chrome-devtools-mcp@latest"]
    }
  }
}

Then ensure your app can run in a browser, start the app, and confirm the agent can access the MCP tools. Read skills/browser-testing-with-devtools/SKILL.md first; that is the only source file and contains the intended workflow.
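Before handing a session to the agent, it can help to confirm the target app actually responds. A minimal sketch (the helper name and URL are assumptions, not part of the skill):

```python
# Hypothetical pre-flight check: returns True if anything answers HTTP
# at the given URL, even with a 4xx/5xx status (the server is up).
from urllib.request import urlopen
from urllib.error import HTTPError, URLError


def app_is_ready(url: str, timeout: float = 2.0) -> bool:
    """True if the app at `url` produced any HTTP response."""
    try:
        urlopen(url, timeout=timeout)
        return True
    except HTTPError:
        # The server answered, just with an error status -> still "up".
        return True
    except (URLError, OSError):
        # Connection refused, DNS failure, timeout -> not ready.
        return False
```

A call like `app_is_ready("http://localhost:3000")` before the first prompt avoids the most common failure mode: asking the agent to test an app that never started.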

What input the skill needs to work well

Good browser-testing-with-devtools usage starts with a concrete target, not “test my site.” Provide:

  • app URL or route
  • expected behavior
  • browser state assumptions such as logged-in/logged-out
  • device or viewport requirements
  • key user actions
  • what counts as success or failure

Stronger prompt:
“Use browser-testing-with-devtools to open http://localhost:3000/settings/billing, log in with the seeded test user if needed, click ‘Upgrade’, verify the modal appears, confirm no console errors, inspect failed network calls, and report whether the CTA is blocked by layout or JS.”
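The checklist above can be turned into a prompt mechanically. An illustrative sketch (the field names and wording are assumptions, not part of the skill):

```python
# Illustrative helper: assemble a concrete testing prompt from the
# input checklist. Required fields mirror the bullets above.
def build_test_prompt(spec: dict) -> str:
    required = ["url", "expected", "actions", "success"]
    missing = [k for k in required if not spec.get(k)]
    if missing:
        raise ValueError(f"prompt spec is missing: {missing}")
    lines = [
        f"Use browser-testing-with-devtools to open {spec['url']}.",
        f"Assume browser state: {spec.get('state', 'default, logged out')}.",
        f"Perform: {'; '.join(spec['actions'])}.",
        f"Expected behavior: {spec['expected']}.",
        f"Report success or failure against: {spec['success']}.",
    ]
    if spec.get("viewport"):
        lines.insert(1, f"Set viewport to {spec['viewport']}.")
    return " ".join(lines)


print(build_test_prompt({
    "url": "http://localhost:3000/settings/billing",
    "state": "logged in as the seeded test user",
    "actions": ["click 'Upgrade'", "check for the modal", "read the console"],
    "expected": "upgrade modal appears with no console errors",
    "success": "modal visible and zero console errors",
    "viewport": "1280x800",
}))
```

Raising on missing fields is the point of the sketch: it forces the “expected behavior” and “success criteria” bullets to exist before the agent runs.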

Turn a rough goal into an effective prompt

A rough goal like “debug checkout” is too broad. Convert it into a sequence the agent can execute:

  1. open the page
  2. reproduce the issue
  3. inspect DOM and console
  4. review network requests
  5. capture visual/performance evidence
  6. suggest or validate a fix

Useful prompt pattern:
“Use the browser-testing-with-devtools skill to reproduce [issue] on [URL]. Check [DOM element], [console errors], [network request], and [visual result]. If broken, identify likely cause and verify whether a proposed fix works in-browser.”

Practical workflow and high-value checks

Use this order for the best signal-to-effort ratio:

  1. Load the affected route and confirm the issue is reproducible.
  2. Check console errors before changing anything.
  3. Inspect the DOM for missing elements, wrong states, hidden overlays, or disabled controls.
  4. Review network requests for API failures, CORS, auth, or unexpected payloads.
  5. Capture screenshots or performance data only after reproduction is stable.
  6. Re-test after each fix to confirm the browser behavior changed, not just the code.
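For step 4, a rough triage sketch can map captured network evidence to a likely cause bucket. The status-code buckets below are common conventions, not something the skill prescribes:

```python
# Hedged triage sketch: classify a captured request into a likely
# failure bucket. Bucket names and thresholds are assumptions.
from typing import Optional


def classify_request(status: Optional[int], error: Optional[str] = None) -> str:
    if error and "CORS" in error.upper():
        return "cors"            # blocked before a usable response arrived
    if status is None:
        return "network"         # never completed: DNS, refused, aborted
    if status in (401, 403):
        return "auth"
    if status == 404:
        return "routing"
    if 400 <= status < 500:
        return "client-payload"  # bad request body, validation failure
    if status >= 500:
        return "server"
    return "ok"
```

Even a coarse bucket like this tells the agent which follow-up to run next: `auth` points at session state, `network` points back at environment readiness.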

This workflow is where the skill’s value shows up: it closes the loop between “I changed the code” and “the browser actually behaves correctly.”

browser-testing-with-devtools skill FAQ

Is browser-testing-with-devtools good for all test automation?

No. browser-testing-with-devtools is strongest for exploratory validation, debugging, and agent-assisted browser checks. It is not a replacement for a full regression suite, CI orchestration, or broad cross-browser coverage on its own.

When should I use this instead of ordinary prompting?

Use browser-testing-with-devtools when the answer depends on runtime evidence. If you need to know what actually rendered, which request failed, or whether a fix removed a console error, this skill is much more reliable than asking an agent to infer behavior from source files alone.

Is it beginner-friendly?

Yes, if you already understand the user flow you want to test. The hard part is not the skill syntax; it is giving the agent a reproducible scenario. Beginners usually succeed faster when they specify one route, one issue, one expected outcome, and one environment.

When should I not install this skill?

Skip it if your work is backend-only, your environment cannot expose a browser to MCP, or you mainly need deterministic end-to-end suites in CI. In those cases, the browser-testing-with-devtools skill may be helpful occasionally, but it should not be your primary automation approach.

How to Improve browser-testing-with-devtools skill

Give richer reproduction details

The biggest quality jump comes from better inputs. Include route, state, credentials, feature flags, viewport, and exact symptoms. “Button broken” is weak. “On localhost:3000/cart, at 1280px width, clicking Place Order does nothing and no confirmation modal appears” is much better because the agent can verify each step.

Ask for evidence, not just conclusions

To improve browser-testing-with-devtools usage, ask the agent to return proof:

  • console errors copied verbatim
  • request URL and response status
  • relevant DOM selectors or states
  • screenshot notes
  • before/after verification after a fix
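One way to enforce that evidence list is an “evidence gate” that rejects a conclusion until every proof item is present. A sketch with illustrative key names (not defined by the skill):

```python
# Sketch of an evidence gate: a report is only accepted when it carries
# all the proof items from the list above. Keys are assumptions.
REQUIRED_EVIDENCE = {
    "console_errors",    # copied verbatim; an empty list is still evidence
    "network",           # request URL and response status
    "dom_state",         # relevant selectors or element states
    "screenshot_notes",
    "before_after",      # verification after a fix
}


def missing_evidence(report: dict) -> list:
    """Return the proof items the agent still owes, sorted for stable output."""
    return sorted(REQUIRED_EVIDENCE - set(report))


report = {
    "console_errors": [],
    "network": {"url": "/api/upgrade", "status": 500},
}
print(missing_evidence(report))  # lists the proof items still owed
```

Asking the agent to fill every key, rather than to “confirm it works,” is what reduces false confidence.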

This reduces false confidence and makes handoff easier.

Avoid common failure modes

Most poor results come from one of four issues: the app was not running, the wrong route was tested, auth state was missing, or the prompt asked for too many flows at once. Keep each run focused on one user journey. If setup is flaky, ask the agent to confirm environment readiness before testing.

Iterate after the first run

The most effective browser-testing-with-devtools pattern is iterative: first reproduce, then narrow, then verify. After the first output, refine with prompts like:

  • “Re-test only the failing submit action.”
  • “Compare DOM state before and after click.”
  • “Ignore styling and focus on network/auth.”
  • “Validate the fix and confirm no new console errors.”

That loop is what makes browser-testing-with-devtools genuinely useful: it turns browser debugging from vague inspection into repeatable, evidence-based validation.

Ratings & Reviews

No ratings yet