
content-quality-auditor

by aaron-he-zhu

content-quality-auditor is a publish-readiness skill for SEO content, landing pages, and long-form drafts. It runs an 80-item CORE-EEAT audit with weighted scoring, veto checks, and a prioritized fix plan to help editors judge if content is ready to publish.

Stars: 0
Favorites: 0
Comments: 0
Added: Mar 31, 2026
Category: SEO Content
Install Command:
npx skills add aaron-he-zhu/seo-geo-claude-skills --skill content-quality-auditor
Curation Score

This skill scores 78/100, making it a solid directory-listing candidate: agents get strong trigger cues and a substantive, reusable audit framework, and users can make a credible install decision from the repository. Execution, however, still depends on reading a long prompt-only spec rather than on packaged tooling.

Strengths
  • Frontmatter provides many explicit multilingual triggers plus a concrete job-to-be-done: publish-readiness grading with an 80-item CORE-EEAT audit, weighted scoring, veto checks, and fix planning.
  • Repository evidence shows real workflow substance rather than a placeholder: a long SKILL.md with multiple workflow/constraint/practical signals and a supporting item reference covering all 80 audit items.
  • Allowed-tools and compatibility are clearly declared, with no required system packages and a scoped tool need (WebFetch), which helps agents and installers understand the operational footprint quickly.
Cautions
  • The skill appears documentation-driven only: no scripts, rules, metadata helpers, or install command, so agents must rely on a lengthy manual spec and may have more implementation variance.
  • Only one support reference file is present, and the benchmark details are partly delegated to another linked file, which reduces progressive disclosure and makes fast adoption slightly harder.
Overview

What content-quality-auditor actually does

The content-quality-auditor skill is a publish-readiness checker for long-form content, landing pages, and SEO articles. Instead of giving a vague “this looks good” opinion, it runs a structured CORE-EEAT audit across 80 items, applies weighted scoring, flags veto issues, and produces a fix plan. For teams publishing content at scale, that is the real value: a repeatable gate before content goes live.

Who should use this skill

This skill is best for SEO content leads, editors, agency reviewers, and writers who need a consistent quality bar. It is especially useful if you already have drafts and want to answer questions like “Is this ready to publish?”, “What is dragging the score down?”, or “What should I fix first?”

Best fit job-to-be-done

Use content-quality-auditor when the main job is evaluation, not ideation. It is built to grade and diagnose existing content for quality, usefulness, structure, evidence, and E-E-A-T style signals. If your problem is “write me an article from scratch,” this is not the first skill to reach for.

What makes it different from a normal prompt

A generic prompt usually produces broad editorial feedback. The content-quality-auditor skill is more operational:

  • it uses a defined multi-item audit model
  • it separates score, blocking issues, and remediation
  • it gives you a sharper publish / not-yet-publish decision
  • it includes a reference file for the audit items, which reduces guesswork

That structure matters when multiple people need to review content the same way.

Main adoption considerations

The biggest adoption question is not installation complexity; it is input quality. This skill is only as good as the draft, query, audience, and business context you provide. If you paste a bare article without target keyword, intended reader, or desired outcome, the audit will still run, but the recommendations will be less specific and less actionable.

How to Use content-quality-auditor skill

Install context and compatibility

Repository metadata indicates broad skill-ecosystem compatibility, including Claude Code ≥1.0, the skills.sh marketplace, ClawHub, and the Vercel Labs skills ecosystem. The skill requires no system packages. The allowed tools list includes WebFetch, and optional MCP network access can help if your broader workflow enriches audits with external SEO data.

If you install from the repository, the usual pattern is:

npx skills add aaron-he-zhu/seo-geo-claude-skills --skill content-quality-auditor

If your environment uses a different skill loader, use the repository path cross-cutting/content-quality-auditor.

Read these files first

To understand how installing and using content-quality-auditor will feel in practice, start with:

  • cross-cutting/content-quality-auditor/SKILL.md
  • cross-cutting/content-quality-auditor/references/item-reference.md

SKILL.md explains when the skill should trigger and what the audit is trying to decide. references/item-reference.md is the high-value companion file because it exposes the 80 audit item names, which helps you interpret scores and improve prompting.

What input the skill needs

For best results, give the skill more than just the article body. A strong input package usually includes:

  • the full draft
  • target query or keyword set
  • page type: blog post, comparison page, service page, affiliate page, guide
  • target audience and search intent
  • business goal: rank, convert, educate, support
  • known constraints: legal review, brand tone, no original research, no first-hand experience
  • optional competitors or benchmark URLs

This moves content-quality-auditor from generic critique to publish-decision support.
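As a concrete sketch, the input package above can be pasted as a short brief ahead of the draft. The field names and values here are illustrative, not a format the skill requires:

```
Target query: best payroll software for small business
Page type: comparison page
Audience: US small business owners comparing tools
Search intent: commercial investigation
Business goal: rank and convert
Constraints: legal review required; no first-hand testing available
Benchmarks: <optional competitor URLs>
Draft: <full article body below>
```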

Turn a rough request into a strong prompt

Weak prompt:

  • “Grade my article.”

Stronger prompt:

  • “Run the content-quality-auditor skill on this draft for the keyword ‘best payroll software for small business’. Audience is US small business owners comparing tools. Goal is publish-readiness for SEO and trust. Please give me the overall score, any veto issues, top 10 gaps by impact, and a prioritized fix plan.”

Why this works:

  • it defines the query
  • it defines the reader
  • it defines the page goal
  • it asks for decision-oriented output, not just commentary

Example prompt for SEO content review

Use a format like this:

  • “Use content-quality-auditor for SEO Content on the draft below.
  • Primary keyword: project management software for agencies
  • Search intent: commercial investigation
  • Audience: agency founders with 5–50 employees
  • Must-have outcome: clear recommendation and comparison depth
  • Constraints: no fabricated experience, no unsupported stats
  • Output needed: weighted score, veto checks, section-by-section weaknesses, and the 5 highest-leverage edits before publish.”

This improves scoring relevance because the skill can judge coverage, intent match, and evidence expectations more accurately.

A practical workflow is:

  1. Run content-quality-auditor on the current draft.
  2. Review the veto issues first.
  3. Group weak items into content, structure, evidence, and trust fixes.
  4. Revise the article.
  5. Re-run the skill to see whether the score improved and whether blockers are cleared.

This is better than trying to fix every comment at once. The skill is most useful as an iterative editorial gate.

How to interpret the 80-item model

The reference file shows the audit spans multiple dimensions, including:

  • content fundamentals such as intent alignment and query coverage
  • on-page structure such as heading hierarchy and chunking
  • reliability and evidence signals such as citation density and source hierarchy
  • experience, expertise, and authority-style signals

That breadth is why the skill is stronger than a plain “review my article” prompt. It is checking whether a page is useful, navigable, credible, and convincing enough to publish.

What the veto checks are good for

The most practical feature is the idea of veto checks. A draft can feel polished and still fail on a core blocker such as weak evidence, shallow coverage, missing direct answer, or trust gaps. In editorial operations, these blockers matter more than an attractive average score because they often explain why content underperforms after publication.
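The interaction between weighted scoring and veto gating can be sketched in a few lines. The weights, pass threshold, and item names below are illustrative assumptions, not the skill's actual values:

```python
# Sketch of a weighted audit score with veto gating.
# Weights, threshold, and items are assumptions, not the skill's real values.

def publish_decision(items, threshold=75):
    """items: list of (score_0_to_1, weight, is_veto_item) tuples."""
    total_weight = sum(w for _, w, _ in items)
    weighted = 100 * sum(s * w for s, w, _ in items) / total_weight
    # Any failed veto item blocks publication regardless of the average.
    vetoes = [i for i, (s, _, veto) in enumerate(items) if veto and s < 0.5]
    ready = weighted >= threshold and not vetoes
    return round(weighted, 1), vetoes, ready

items = [
    (0.9, 3, False),  # e.g. intent alignment
    (0.8, 2, False),  # e.g. heading hierarchy
    (0.3, 3, True),   # e.g. citation density (veto item, failing)
    (0.7, 2, False),  # e.g. authority signals
]
score, blocked, ready = publish_decision(items)
# A decent weighted average can still be blocked by a single veto failure.
```

The point of the sketch: the average can look acceptable while one veto item still blocks publication, which mirrors why polished-feeling drafts fail the gate.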

Practical tips that improve output quality

To get better results from content-quality-auditor:

  • paste the whole draft, not a summary
  • include the exact headline and meta intent if known
  • say whether first-hand experience is real or unavailable
  • ask for examples of fixes, not just issue labels
  • request prioritization by ranking impact or publish risk

Without these details, the audit can still be useful, but less tailored to your actual content constraints.

When to use WebFetch or external context

If the article cites external sources, discusses product specs, or competes in a crowded SERP, external fetching can improve judgment. Use it selectively. The goal is not to bloat the review with research, but to validate claims, compare expected query coverage, or assess whether the draft is thin relative to the topic.

content-quality-auditor skill FAQ

Is content-quality-auditor good for beginners?

Yes, if you already have a draft. The structure is beginner-friendly because it turns “make this better” into a checklist-driven review. The catch is that beginners may need help interpreting some E-E-A-T or evidence-related findings, especially when the content lacks original experience.

Is this only for SEO articles?

No, but SEO content is the clearest fit. The content-quality-auditor skill works best on pages where usefulness, credibility, and publish-readiness matter. It is less valuable for fiction, casual social posts, or purely creative writing where the scoring model is not the main success criterion.

How is it different from asking an LLM to review content?

A normal review prompt can be smart but inconsistent. content-quality-auditor gives you a more disciplined framework with named audit items, weighted scoring, and veto logic. That makes it more suitable for repeatable editorial review and team workflows.

When should I not use content-quality-auditor?

Skip it when:

  • you need first-draft generation, not evaluation
  • the content is too short to audit meaningfully
  • your success metric is brand voice alone
  • the page type depends on product truth the model cannot verify

In those cases, the skill may still give useful comments, but not enough to justify a formal audit pass.

Does it replace human editorial judgment

No. It is best used as a structured second reader. Human editors still decide brand fit, factual risk, legal sensitivity, and whether recommendations are realistic given deadlines and source availability.

How to Improve content-quality-auditor skill

Give the skill richer publishing context

The fastest way to improve content-quality-auditor results is to supply the missing context editors usually hold in their heads:

  • who the page is for
  • what query it targets
  • what “good enough to publish” means
  • what claims need stronger proof
  • what cannot be changed due to brand or compliance rules

This reduces generic recommendations and improves the usefulness of the fix plan.

Ask for prioritized fixes, not a wall of feedback

A common failure mode is getting an overwhelming audit output. Avoid that by asking the skill to rank issues:

  • by publish risk
  • by SEO impact
  • by credibility impact
  • by effort vs reward

That turns the first pass into an actionable editing queue.

Request evidence-aware feedback

Because the audit model includes reliability and trust signals, tell the skill what evidence is available. For example:

  • “This draft has no original testing.”
  • “We have expert quotes but no proprietary data.”
  • “We can add citations but not screenshots.”

This helps the skill recommend realistic improvements instead of impossible ones.

Use the item reference to target weak areas

After the first run, inspect references/item-reference.md and identify the low-performing cluster. If the article is weak on items like direct answer, query coverage, citation density, or reasoning transparency, prompt the next pass specifically around those areas. That is a better iteration loop than asking for a full rewrite.

Re-run after revision with a comparison request

A high-value pattern is to run content-quality-auditor twice:

  1. baseline audit
  2. post-edit audit

Then ask:

  • what score changed
  • which veto items cleared
  • which high-impact weaknesses remain
  • whether the draft is now ready to publish
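The before/after comparison amounts to a diff over two audit runs. The structure below is an illustrative sketch with placeholder scores and veto names, not the skill's output format:

```python
# Compare a baseline audit against a post-edit audit.
# Scores, veto names, and the threshold are illustrative placeholders.

def compare_runs(baseline, revised, threshold=75):
    cleared = sorted(set(baseline["vetoes"]) - set(revised["vetoes"]))
    return {
        "score_delta": revised["score"] - baseline["score"],
        "vetoes_cleared": cleared,
        "vetoes_remaining": sorted(revised["vetoes"]),
        "ready": revised["score"] >= threshold and not revised["vetoes"],
    }

baseline = {"score": 61.0, "vetoes": ["citation density", "direct answer"]}
revised  = {"score": 79.5, "vetoes": []}
report = compare_runs(baseline, revised)
```

Tracking exactly these four fields across revisions is what turns the skill from a one-off critique into a measure of editorial progress.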

This makes the skill useful not just for critique, but for measurable editorial progress.

Watch for common misreads

The skill can only judge what is present in the draft and context you provide. Expect weaker results if:

  • the article has hidden author credentials not included in the prompt
  • citations exist in CMS fields but not in the pasted content
  • the draft is outline-only
  • the target intent is ambiguous

Most “bad audit” complaints come from incomplete inputs, not from the framework itself.

Improve prompts with explicit output format

If you want consistent review across multiple pages, specify a response template such as:

  • overall weighted score
  • veto checks
  • top strengths
  • top weaknesses
  • section-by-section fixes
  • publish / not yet decision

This makes content-quality-auditor usage more reliable for batch editorial workflows and easier to compare across drafts.

Ratings & Reviews

No ratings yet