audit-website
by squirrelscan

The audit-website skill uses the squirrel CLI to audit websites and webapps across 230+ rules for SEO, technical, content, performance, security, links, and site health, then returns actionable, LLM-ready reports.
This skill scores 82/100, meaning it is a solid directory listing candidate: agents get a clearly scoped website-audit workflow with meaningful leverage over a generic prompt, and directory users have enough evidence to judge fit, though setup and execution depend on an external CLI being preinstalled.
- Strong triggerability: SKILL.md explicitly scopes the job to auditing websites/webapps across 230+ rules and 21 categories using the `squirrel` CLI.
- Good agent leverage: the repo includes an LLM-oriented output reference (`references/OUTPUT-FORMAT.md`) and the skill promises structured reports, health scores, issue lists, and recommendations.
- Substantial workflow documentation: long SKILL.md with multiple workflow/constraint signals, code fences, and doc links reduces guesswork compared with prompting an agent from scratch.
- Adoption friction: SKILL.md requires the `squirrel` CLI in PATH but does not provide an install command in the skill itself.
- Trust depends partly on external docs: rule details and deeper references are pushed to squirrelscan.com/docs rather than fully documented inside the skill repo.
Overview of audit-website skill
What the audit-website skill does
The audit-website skill helps an AI agent run a real website audit through the squirrel CLI, then turn the results into an actionable report. Instead of relying on a generic prompt to "review my site," it uses a scanner built for websites and webapps, covering 230+ rules across SEO, technical, content, performance, security, links, and related categories.
Who should install audit-website
This skill is best for developers, technical SEOs, growth teams, and product owners who need a structured site health check, not just ad hoc advice. It is especially useful when you want to use audit-website for UX Audit-adjacent work as well, because technical and content findings often explain UX friction, broken journeys, and trust issues even though the scanner is not a dedicated usability testing tool.
The real job to be done
Most users do not need "a website audit" in the abstract. They need to answer practical questions such as:
- Why is this site underperforming in search?
- What broke after a release?
- Which issues are worth fixing first?
- Is this page set crawlable, indexable, and internally linked?
- Are there obvious content, metadata, or trust problems?
- Did we accidentally expose secrets or broken links?
The audit-website skill is valuable when you want those answers from a repeatable scan instead of a one-off model opinion.
What makes audit-website different from a plain prompt
Its main differentiator is tool-backed evidence. The skill is designed around squirrelscan, which crawls and analyzes a live site using explicit rules. The output can be emitted in llm format, which is compact and structured for agents. That gives better grounding than asking a model to inspect a few pasted URLs and guess.
Adoption blockers to know first
Before you install audit-website, check the main constraint: the skill requires the squirrel CLI to be installed and available in PATH. If your environment cannot run shell tools, cannot access the target site, or blocks crawling, this skill will not deliver its full value.
How to Use audit-website skill
Install context for audit-website
This skill lives in the squirrelscan/skills repository under audit-website. In a skills-enabled environment, install it with:
npx skills add https://github.com/squirrelscan/skills --skill audit-website
Then make sure the runtime can execute squirrel. The skill frontmatter explicitly requires squirrel CLI installed and accessible in PATH.
Prerequisites that decide success
A good audit-website install is less about adding the skill file and more about confirming execution conditions:
- `squirrel` is installed and callable from the shell
- the target URL is reachable from your machine or agent runtime
- robots, auth, IP restrictions, or staging protections will not block the crawl
- you know whether you want a broad site audit or a targeted page/path audit
If any of those fail, the model may still talk about the site, but you will not be using the skill as intended.
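Those execution conditions can be checked programmatically before the agent ever invokes the skill. A minimal preflight sketch using only the Python standard library; the CLI name comes from the skill's stated requirement, while the function name and URL check are assumptions for illustration:

```python
import shutil
from urllib.parse import urlparse

def preflight(target_url: str, cli_name: str = "squirrel") -> list[str]:
    """Return a list of blockers that would prevent a useful audit."""
    problems = []
    # The skill frontmatter requires the squirrel CLI on PATH
    if shutil.which(cli_name) is None:
        problems.append(f"{cli_name} CLI is not on PATH")
    # The target must be a crawlable http(s) URL
    parsed = urlparse(target_url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        problems.append(f"{target_url!r} is not a crawlable http(s) URL")
    return problems
```

An empty list means the basic conditions are met; reachability, auth, and robots restrictions still need to be confirmed against the live site.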
What to read first in the repository
For fast onboarding, read these files in this order:
- audit-website/SKILL.md
- README.md at repo root
- references/OUTPUT-FORMAT.md
- agents/openai.yaml
Why this order:
- SKILL.md explains scope, prerequisites, and expected workflow.
- README.md clarifies ecosystem features like output formats and diff reports.
- references/OUTPUT-FORMAT.md matters if you want the best agent-readable output.
- agents/openai.yaml confirms how the skill is exposed in agent UIs.
Inputs audit-website needs from you
The minimum useful input is a target URL. Better inputs produce better audits. Provide:
- exact URL or environment: production, staging, preview
- audit goal: SEO triage, release regression check, content cleanup, security pass
- scope: full site, path, template type, or page set
- constraints: login required, rate sensitivity, blocked paths, time budget
- output preference: summary for executives or fix list for implementers
Without that context, the scan can still run, but the recommendations will be less sharply prioritized.
Turning a rough goal into a strong audit-website prompt
Weak prompt:
Use audit-website on our site and tell me what is wrong.
Stronger prompt:
Use audit-website to audit https://example.com for pre-launch SEO and technical issues. Prioritize problems that affect indexing, metadata quality, internal linking, broken pages, and obvious trust or security issues. Return the top 15 fixes ranked by impact and effort, and separate sitewide issues from page-specific issues.
Even stronger for a UX-adjacent review:
Use audit-website on https://example.com/pricing and the surrounding conversion path. Focus on broken links, content clarity signals, metadata, page structure, trust indicators, performance-related friction, and technical issues that could hurt user flow. Summarize findings as a UX-aware remediation list, but keep recommendations grounded in the scan evidence.
Recommended audit-website workflow
A practical audit-website usage flow is:
- Run an initial broad audit.
- Review overall score, category scores, summary counts, and high-severity failures.
- Group findings into:
- indexation/crawl problems
- content and metadata issues
- link and architecture issues
- performance/security/trust issues
- Ask the model to prioritize by business impact.
- Re-run after fixes or compare outputs over time.
This is better than jumping straight into individual warnings, because many low-level findings are symptoms of a smaller number of systemic problems.
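If the agent has findings as structured data, the grouping step above can be sketched in Python. The `category` values and the bucket mapping here are assumptions for illustration, not squirrelscan's actual rule taxonomy:

```python
# Map scanner categories (assumed names) onto the four review buckets
BUCKETS = {
    "indexation/crawl": {"crawl", "indexing", "robots", "sitemap"},
    "content/metadata": {"content", "metadata", "seo"},
    "links/architecture": {"links", "structure"},
    "performance/security/trust": {"performance", "security", "trust"},
}

def group_findings(findings: list[dict]) -> dict:
    """Sort findings into the four buckets, with a catch-all for the rest."""
    grouped = {bucket: [] for bucket in BUCKETS}
    grouped["other"] = []
    for finding in findings:
        for bucket, categories in BUCKETS.items():
            if finding.get("category") in categories:
                grouped[bucket].append(finding)
                break
        else:
            grouped["other"].append(finding)
    return grouped
```

Reviewing four buckets plus a remainder is far easier than scanning hundreds of individual warnings.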
Why the llm format matters
The repository includes references/OUTPUT-FORMAT.md, which is one of the strongest signals in this skill. The --format llm output is compact and structured for model consumption, with fields for site info, scores, summary counts, and issue groupings. For agent workflows, this usually beats verbose raw terminal output because it reduces token waste while preserving machine-readable structure.
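As a sketch of why compact structure helps, here is how an agent-side harness might condense an audit payload into a one-line summary before it enters the model context. All field names (`site`, `score`, `summary`) are assumptions for illustration, not the documented llm-format schema:

```python
def summarize(audit: dict) -> str:
    """Condense an audit payload (field names assumed) into one line."""
    site = audit.get("site", "unknown site")
    score = audit.get("score", "n/a")
    counts = audit.get("summary", {})
    issues = ", ".join(f"{k}: {v}" for k, v in sorted(counts.items()))
    return f"{site} scored {score}/100 ({issues})"
```

A one-line digest like this can sit in the agent's working context while the full issue list stays in a file for targeted lookups.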
What audit-website is good at spotting
Based on the repository signals, this skill is well suited for finding:
- SEO metadata and canonical issues
- crawlability and technical SEO problems
- broken links and structural link issues
- content quality gaps
- performance-related issues
- security findings, including leaked secret patterns
- broad site health regressions over time
That makes it a strong fit for release QA, SEO maintenance, technical due diligence, and cleanup planning.
What audit-website is not the best fit for
Do not treat audit-website as a substitute for:
- moderated usability testing
- analytics interpretation
- heatmaps or session replay
- visual design critique
- deep application security testing
- authenticated app flows the crawler cannot access
For audit-website for UX Audit, think of it as evidence for friction and trust issues around structure, content, speed, and broken journeys, not a full UX research stack.
Practical prompt patterns that improve output quality
Ask for output shaped to the decision you need. Examples:
- Rank issues by revenue risk for a lead-gen site.
- Separate quick wins from engineering-heavy fixes.
- Map each issue to likely user impact and search impact.
- Group findings by template so we can fix them at scale.
- Highlight anything that could have been introduced in the last release.
These prompts matter because raw audits often contain more findings than a team can act on at once.
Commands and outputs to ask for explicitly
If your agent can control the scan, request outputs that are easiest to reuse:
- `llm` format for model analysis
- `json` if you want downstream scripting
- `markdown` or `html` for stakeholder sharing
- diff-style comparisons when checking regressions between audits
The upstream repo emphasizes multiple output formats and regression-friendly workflows, so format choice is part of using the skill well, not an afterthought.
audit-website skill FAQ
Is audit-website worth using if I can just prompt an LLM?
Yes, if you want grounded findings. A plain prompt can suggest common best practices, but audit-website can inspect a live site with explicit rules and return concrete failures, counts, scores, and affected pages. That is the main reason to install it.
Is audit-website beginner-friendly?
Mostly yes, if you are comfortable with a CLI-backed workflow. Beginners can still get value by giving the agent a URL and a goal, then asking for a prioritized action plan. The harder part is environment setup, not understanding the report.
Can audit-website be used for webapps, not just marketing sites?
Yes. The skill description explicitly mentions websites or webapps. The practical limit is crawlability. If key flows sit behind auth, complex state, or blocked environments, coverage may be partial.
Is audit-website only for SEO?
No. SEO is a major use case, but the skill also covers technical, content, performance, security, and link-related issues. That breadth is why the audit-website guide is useful for release checks and general site health, not only rankings work.
Is audit-website good for UX Audit work?
Partially. audit-website for UX Audit is useful when UX problems are tied to content hierarchy, page structure, broken paths, trust signals, performance, or discoverability. It is not a replacement for user interviews or task-based testing.
When should I not install audit-website?
Skip it if:
- you cannot run `squirrel`
- your environment has no shell access
- your target site cannot be crawled
- you only want subjective copy or design feedback
- you need deep manual accessibility or penetration testing beyond scanner scope
Does the repository include output guidance?
Yes. references/OUTPUT-FORMAT.md explains the LLM-oriented format in enough detail to help you decide how to feed results back into an agent workflow.
How to Improve audit-website skill
Start audit-website with a narrower question
The fastest way to improve audit-website results is to avoid overly broad requests. Instead of "audit my whole site," ask for a launch check, a traffic-drop investigation, a blog template review, or a conversion-path pass. Narrow goals produce sharper prioritization.
Give page and business context, not just a URL
Strong inputs look like this:
- This is a SaaS pricing page with a free-trial goal.
- This subfolder lost organic traffic after a migration.
- This is a staging environment for a redesign.
- These pages matter most: /, /pricing, /product, /blog.
That context helps the model distinguish critical issues from background noise.
Ask for ranking by impact and effort
A common failure mode is receiving a long undifferentiated issue list. Fix that by asking the agent to classify findings into:
- high impact / low effort
- high impact / high effort
- low impact / low effort
- monitor later
That turns audit-website usage into an implementation plan.
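Once the agent has tagged each finding with an impact and effort level, the classification is mechanical. A minimal sketch (the labels mirror the four buckets above; the tagging itself still comes from the agent or the team):

```python
def quadrant(impact: str, effort: str) -> str:
    """Place a finding into one of the four planning buckets."""
    if impact == "high" and effort == "low":
        return "high impact / low effort"   # do these first
    if impact == "high":
        return "high impact / high effort"  # schedule deliberately
    if effort == "low":
        return "low impact / low effort"    # batch as quick cleanup
    return "monitor later"
```

Sorting the whole issue list through a function like this yields a plan you can hand to a team, not just a report.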
Use audit-website outputs to separate systemic vs isolated issues
After the first run, ask:
Which findings are template-level or sitewide, and which are isolated to a few pages?
This is one of the highest-value follow-up steps because systemic fixes usually outperform page-by-page cleanup.
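One rough heuristic for that follow-up, assuming each finding lists its affected pages: flag a finding as sitewide when it touches more than a chosen share of crawled pages. The `pages` field and the 20% threshold are assumptions for illustration:

```python
def classify_scope(finding: dict, total_pages: int,
                   sitewide_ratio: float = 0.2) -> str:
    """Label a finding as sitewide or isolated by affected-page share."""
    affected = len(finding.get("pages", []))
    if total_pages and affected / total_pages >= sitewide_ratio:
        return "sitewide"
    return "isolated"
```

Sitewide findings usually point at a template, a config, or a shared component, so one fix clears many warnings at once.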
Improve audit-website for UX Audit by adding user-flow framing
If your goal is UX-adjacent, say which flow matters:
- homepage to signup
- blog post to demo request
- pricing to checkout
- docs search to product activation
Then ask the agent to interpret technical findings in terms of friction, trust, and drop-off risk. That makes audit-website for UX Audit materially more useful without pretending the scanner did full user research.
Watch for false expectations around scan coverage
Another common mistake is assuming the tool saw everything. If the crawl was blocked, shallow, or limited to public pages, the report may miss authenticated or dynamic experiences. Ask the agent to state coverage limits explicitly before you act on the findings.
Re-run after fixes and compare deltas
The repo signals support for diff-oriented workflows. Use that. A single audit gives you a snapshot; repeated audits tell you whether health improved, regressed, or shifted categories. This is especially useful after migrations, template updates, and performance work.
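A minimal sketch of the delta step, assuming each audit's per-category scores have been extracted into a plain dict; the structure is an assumption for illustration, not squirrelscan's output schema:

```python
def score_delta(before: dict, after: dict) -> dict:
    """Per-category score change between two audits (positive = improved)."""
    return {category: after.get(category, 0) - before.get(category, 0)
            for category in set(before) | set(after)}
```

Reporting deltas instead of absolute scores keeps the conversation focused on what the last round of fixes actually changed.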
Use rule docs when a finding is unclear
The skill points to rule documentation with this pattern:
https://docs.squirrelscan.com/rules/{rule_category}/{rule_id}
When a warning is ambiguous, checking the rule reference is often faster than debating the model's interpretation.
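That pattern can be turned into a small helper so every remediation note links straight to its rule doc. The URL template comes from the skill itself; the example category and rule id values are hypothetical:

```python
def rule_doc_url(rule_category: str, rule_id: str) -> str:
    """Build the docs URL for a finding, following the skill's pattern."""
    return f"https://docs.squirrelscan.com/rules/{rule_category}/{rule_id}"
```

For example, `rule_doc_url("seo", "missing-title")` would produce a link a reviewer can open directly from a ticket.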
Ask for implementation-ready remediation
If the first pass is too abstract, follow up with:
- Show exact pages or patterns affected.
- Give fix recommendations in developer-ready language.
- Draft tickets grouped by team: content, engineering, SEO.
- Highlight what should be validated in the next crawl.
This improves output quality more than asking the model to simply "be more specific."
Improve trust by requesting evidence in every recommendation
For each proposed fix, ask the agent to include:
- the issue category
- the affected page or scope
- why it matters
- expected outcome after fixing
That keeps the audit-website skill grounded in scan evidence rather than drifting into generic advice.
