web-perf
by cloudflare

web-perf analyzes web performance with Chrome DevTools MCP. It measures Core Web Vitals, trace-based load issues, render-blocking resources, layout shifts, caching problems, and accessibility gaps. Use the web-perf skill for performance optimization, debugging slow pages, and evidence-based workflows that rely on current docs and live traces.
This skill scores 84/100, making it a solid directory listing for agents that need a concrete web performance workflow. The repository gives users enough evidence to decide on installation: it clearly scopes the skill to Chrome DevTools MCP-based performance analysis, names the metrics and failure modes it targets, and includes a required preflight check for MCP availability. Users should still expect to configure the DevTools MCP server and rely on some external documentation retrieval, but the install-decision value is strong.
- Strong triggerability: the frontmatter explicitly says to use it for auditing, profiling, debugging, or optimizing page-load performance, Lighthouse scores, and site speed.
- Good operational clarity: it specifies a first-step MCP availability check and provides the exact chrome-devtools MCP config snippet if the tool is missing.
- Meaningful agent leverage: it targets Core Web Vitals plus render-blocking resources, dependency chains, layout shifts, and caching issues, which is more actionable than a generic performance prompt.
- Requires a working Chrome DevTools MCP setup; if the tool is unavailable, the skill instructs the agent to stop and ask for configuration changes.
- Some guidance is intentionally retrieval-driven rather than self-contained, so users will still depend on external docs for thresholds and tooling details.
Overview of web-perf skill
What web-perf does
The web-perf skill helps you audit and improve page speed using Chrome DevTools MCP instead of guessing from a screenshot or a Lighthouse score alone. It focuses on Core Web Vitals, trace-based diagnosis, network waterfalls, render blocking, layout shifts, caching, and related accessibility/performance issues.
Who should use it
Use this web-perf skill if you need a practical performance investigation for a real site, especially when you are trying to explain why a page feels slow, why a metric regressed, or which resource is dragging down load time. It is a strong fit for performance optimization work where evidence matters more than generic advice.
What makes it different
The main value of web-perf is its bias toward live retrieval from current docs and DevTools data, not stale mental models. That makes it better for decisions that depend on exact metric definitions, trace interpretation, or current tooling behavior. It is less useful for broad SEO audits or design critique that do not require performance tracing.
How to Use web-perf skill
Install and verify the environment
Run the web-perf install flow through your skills manager, then confirm the Chrome DevTools MCP server is actually available before asking for analysis. The skill expects DevTools access; if the MCP tools are missing, the workflow should stop early rather than invent results.
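If the DevTools tools are missing, the skill supplies a chrome-devtools MCP config snippet. A typical registration looks like the sketch below, which assumes the standard npx-based setup of the chrome-devtools-mcp package; the exact snippet and any version pin should come from the skill's SKILL.md.

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

After adding this to your MCP client config, restart the client and confirm the chrome-devtools tools appear before asking for analysis.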
Give it a performance-shaped prompt
A strong web-perf usage prompt names the page, the symptom, and the context the agent needs to reproduce or inspect. For example: “Audit the home page on mobile for LCP regression after the latest release. Focus on blocking CSS, hero image delivery, and trace evidence.” That is better than “make this site faster” because it tells the skill what to measure and where to look.
Read the right files first
Start with SKILL.md, then follow any repo-linked docs that explain retrieval sources, tool checks, or analysis steps. In this repo there are no extra helper folders, so the main decision points live in the skill file itself. Pay special attention to the sections that tell the agent to verify MCP tools, prefer retrieval, and use trace evidence over assumptions.
Use a workflow that matches the question
For diagnosis, ask for a trace-backed root cause and a short fix list. For optimization, ask for the largest likely wins first, not an exhaustive checklist. For regression hunting, provide the before/after change, URL, device class, and whether you care more about LCP, INP, or CLS. The more concrete the input, the less the model has to guess which path through the web-perf workflow to take.
web-perf skill FAQ
Is web-perf only for Lighthouse-style audits?
No. It is useful for Lighthouse-related work, but the stronger use case is trace-based debugging with DevTools MCP. That means it can help you understand why a metric changed, not just that it changed.
Do I need to know Chrome DevTools well?
Not necessarily. The skill is useful for beginners who can describe the problem clearly. You do not need to be a trace expert, but you do need enough context to tell the agent what page, device, and symptom matter.
When should I not use web-perf?
Do not use it when you want a generic frontend refactor, a visual design review, or an answer that does not depend on runtime evidence. If you cannot provide a page to inspect or you do not have DevTools MCP available, the skill will be blocked.
How is it better than a normal prompt?
A normal prompt usually stays high level. The web-perf skill is more actionable because it pushes toward current documentation, explicit tool checks, and measurable causes such as render blocking, dependency chains, layout shift sources, and caching behavior. That makes it stronger for performance optimization than a one-off chat instruction.
How to Improve web-perf skill
Provide traceable inputs, not vague goals
The best way to improve web-perf results is to give a URL, target device, test condition, and the metric you care about most. “Improve checkout speed” is weak. “Audit checkout on mid-tier Android over fast 4G for LCP and CLS after the new hero banner shipped” is much better.
Share the change window and suspected cause
If you know what changed, say so. Release notes, a recent CMS update, a new third-party script, or a redesigned hero often point the investigation in the right direction. That helps the skill test likely causes instead of scanning the whole page blindly.
Ask for evidence and a prioritized fix path
A useful web-perf output should separate confirmed causes from probable ones, then rank fixes by user impact and implementation effort. If the first answer is too broad, ask for the top two bottlenecks, the trace evidence behind them, and the smallest safe change to verify improvement.
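The "confirmed first, then impact over effort" ranking above can be sketched in a few lines. The finding structure and scores here are illustrative, not the skill's actual output format:

```python
# Rank candidate fixes: confirmed causes before probable ones,
# then highest impact-per-effort within each group.
def rank_fixes(findings):
    return sorted(
        findings,
        # False sorts before True, so confirmed findings come first;
        # negate impact/effort so bigger wins sort earlier.
        key=lambda f: (not f["confirmed"], -f["impact"] / f["effort"]),
    )

findings = [
    {"fix": "defer third-party script", "confirmed": True,  "impact": 8, "effort": 2},
    {"fix": "preload hero image",       "confirmed": True,  "impact": 9, "effort": 1},
    {"fix": "inline critical CSS",      "confirmed": False, "impact": 6, "effort": 5},
]

for f in rank_fixes(findings):
    print(f["fix"])
```

Asking the skill for output in roughly this shape makes "top two bottlenecks plus evidence" a trivial follow-up question.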
Iterate with before/after measurements
Treat the first pass as diagnosis, not closure. After applying a fix, rerun the same web-perf workflow with the same page and conditions so the output can compare trace evidence and metrics consistently. That is the fastest way to turn a one-off web-perf audit into a repeatable optimization loop.
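A minimal sketch of that before/after comparison, assuming you record Core Web Vitals from each run yourself; the thresholds are Google's published "good" limits (LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1), and the sample numbers are invented:

```python
# Google's "good" thresholds for Core Web Vitals.
GOOD = {"lcp_ms": 2500, "inp_ms": 200, "cls": 0.1}

def compare(before, after):
    """Report the per-metric delta and whether the after-run meets 'good'."""
    report = {}
    for metric, limit in GOOD.items():
        report[metric] = {
            "delta": round(after[metric] - before[metric], 3),
            "passes_good": after[metric] <= limit,
        }
    return report

# Two runs of the same page under the same device/network conditions.
before = {"lcp_ms": 3400, "inp_ms": 260, "cls": 0.18}
after  = {"lcp_ms": 2300, "inp_ms": 180, "cls": 0.12}

for metric, result in compare(before, after).items():
    print(metric, result)
```

Keeping the device class and network conditions fixed between runs is what makes the deltas meaningful; a faster LCP on a different throttling profile proves nothing.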
