site-architecture
by alinaqi

site-architecture helps you design and audit technical SEO foundations for discoverability. Use this skill to plan robots.txt, sitemaps, meta tags, crawl access, social previews, and Core Web Vitals for SEO content and AI crawlers.
This skill scores 71/100: a legitimate listing candidate for users who want technical SEO/site-architecture guidance, though adopters should expect some friction because the repo is strong on content and weaker on operational packaging. It is specific enough to help agents act with less guesswork than a generic prompt, yet the directory listing should note that it lacks install-time scaffolding and supporting files.
- Clearly scoped to technical SEO/site architecture, including robots.txt, sitemap, meta tags, Core Web Vitals, and AI crawler discovery.
- Substantial SKILL.md content with many headings, code fences, and repo/file references, which suggests real workflow guidance rather than a placeholder.
- Frontmatter includes when-to-use, paths, effort, and user-invocable fields, helping an agent or user identify trigger conditions quickly.
- No install command and no support files/scripts/resources, so adoption may require manual interpretation instead of a plug-and-play setup.
- The file includes placeholder markers and limited explicit workflow/constraint signaling, which reduces confidence in edge-case execution and trigger precision.
Overview of site-architecture skill
What site-architecture is for
The site-architecture skill helps you design or audit the technical foundation that search engines and AI crawlers can actually discover and understand. It focuses on practical site structure choices like robots.txt, sitemaps, meta tags, crawl access, and Core Web Vitals, so it is useful when visibility depends on more than content quality alone.
Best-fit users and jobs
Use the site-architecture skill if you are shipping a new site, fixing indexing issues, or preparing a content-heavy property for SEO content discovery. It is especially helpful for developers, SEO leads, and editors who need a clear technical plan instead of generic SEO advice.
What makes it different
This skill is not just a prompt about “SEO.” It emphasizes discoverability for both traditional search engines and AI crawlers such as GPTBot, ClaudeBot, and PerplexityBot, plus social previews through Open Graph and Twitter Cards. The main value is turning architecture decisions into crawlable, indexable pages with fewer hidden blockers.
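The social-preview side of that can be illustrated with a small set of tags. Every value below is a placeholder for illustration, not output from the skill:

```html
<!-- Open Graph tags for link previews; all values are illustrative placeholders -->
<meta property="og:title" content="Example Page Title" />
<meta property="og:description" content="One-sentence summary of the page." />
<meta property="og:image" content="https://example.com/og-image.png" />
<meta property="og:url" content="https://example.com/page" />
<!-- Twitter Card tags; Twitter falls back to Open Graph values where these are absent -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Example Page Title" />
```

These belong in the `<head>` of each indexable template, which is why the skill treats metadata as a template-level rule rather than a per-page task.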
When it is a good or bad fit
It is a strong fit when the problem is site-wide structure, crawl access, or metadata consistency. It is a weaker fit if you only need a one-off meta description, a content brief, or design feedback with no technical publishing layer.
How to Use site-architecture skill
Install and open the right files
To install site-architecture, place the skill in the same workspace as the site you want to audit or configure, then open SKILL.md first. The repo is intentionally lightweight, so the highest-value guidance lives in that file. There are no companion scripts or helper folders to rely on, which means the skill depends on reading the rules carefully and applying them to your own stack.
What to feed the skill
The site-architecture usage pattern works best when you provide the site type, platform, and current constraint. A strong input looks like: “Audit our marketing site on Next.js for crawlability, sitemap coverage, and AI bot access; we want indexation for public pages only.” That gives the skill enough context to produce useful technical recommendations instead of broad best practices.
How to prompt for better output
Use the site-architecture guide as a workflow, not a checklist. Ask for one of these outcomes:
- a robots.txt draft for a specific environment
- a sitemap strategy for public vs private URLs
- metadata rules for templates and page types
- a crawl/indexation review for a current site structure
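For the first outcome, a draft might look like the sketch below. The user agents, paths, and sitemap URL are assumptions to adapt to your site, not rules the skill prescribes:

```text
# Production robots.txt — hypothetical policy, adjust paths to your site
User-agent: *
Disallow: /admin/
Allow: /

# Explicitly welcome AI crawlers on public pages
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

Sitemap: https://example.com/sitemap.xml
```

A staging environment would typically carry the opposite policy (`Disallow: /` for all agents), which is exactly the kind of per-environment distinction worth asking the skill to draft.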
If you need site-architecture for SEO content, say which content pages matter most, how they are generated, and which pages should stay out of search. That helps the skill balance visibility with exclusion, which is the real architectural tradeoff.
Files to read first
Start with:
skills/site-architecture/SKILL.md
Then inspect any referenced paths in your own repo that match:
- robots.txt
- sitemap.xml or sitemap generators
- HTML templates
- public asset directories
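If your repo has no sitemap yet, a minimal sitemap.xml covering only public, indexable URLs looks like this (URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2025-01-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/first-post</loc>
  </url>
</urlset>
```

Private or excluded URLs should not appear here at all; listing them in a sitemap while blocking them elsewhere sends crawlers mixed signals.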
Because the skill repo itself has no extra support files, your implementation quality depends on mapping its rules onto the actual site structure.
site-architecture skill FAQ
Is site-architecture only for developers?
No. The site-architecture skill is useful for SEOs, content teams, and site owners too, but developers usually implement the final changes. Non-developers get value by defining the crawl and indexing policy before code changes happen.
How is this different from a normal prompt?
A normal prompt often asks for “SEO best practices” and gets generic output. site-architecture is narrower: it centers crawl access, discovery, metadata, and technical constraints. That makes it better when you need decisions that affect indexing, not just copy suggestions.
Is it beginner-friendly?
Yes, if you can describe your site clearly. The skill is beginner-friendly in the sense that it gives structure to a technical problem, but you still need basic facts like your platform, public/private URL rules, and whether the site should be indexed.
When should I not use it?
Do not use site-architecture when the task is purely creative, such as writing homepage copy or naming a product. It is also not the right tool if the site has no crawl or indexation concern, because the architectural guidance would be unnecessary overhead.
How to Improve site-architecture skill
Provide the constraints that matter
The best outputs come from inputs that name the platform, hosting model, and indexing goals. Say whether you are on static hosting, SSR, CMS-driven publishing, or a hybrid setup, and whether any sections must be blocked from crawlers. That reduces guesswork and makes the recommendations implementable.
Share page types, not just a homepage
To improve site-architecture results, list the major URL classes: product pages, blog posts, category pages, docs, login pages, admin areas, and filtered search pages. The skill can then separate indexable templates from URLs that should remain excluded, which is often the difference between a clean architecture and accidental crawl waste.
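One lightweight way to capture that separation is a small policy table that generates the exclusion rules. The URL classes and helper below are hypothetical illustrations, not part of the skill:

```python
# Hypothetical URL-class policy: True means indexable, False means excluded.
URL_CLASSES = {
    "/blog/": True,
    "/products/": True,
    "/admin/": False,
    "/login": False,
    "/search": False,  # filtered/faceted search pages
}

def robots_disallow_lines(policy: dict[str, bool]) -> list[str]:
    """Turn the excluded URL classes into Disallow rules for robots.txt."""
    return [f"Disallow: {path}" for path, ok in sorted(policy.items()) if not ok]

print("\n".join(robots_disallow_lines(URL_CLASSES)))
```

Keeping the policy explicit like this makes it easy to review with the skill before any robots.txt or template change ships.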
Watch for common failure modes
The most common misses are blocking useful pages in robots.txt, exposing duplicate or low-value URLs, and publishing metadata inconsistently across templates. If your first output feels generic, the issue is usually weak input: missing URL examples, unclear content priorities, or no mention of technical stack limitations.
Iterate with evidence
After the first pass, test the recommendations against live files and logs: confirm sitemap coverage, check robots.txt, and review rendered metadata on a few representative pages. Then ask the skill to refine only the broken parts, such as “adjust for faceted URLs” or “tighten bot access for staging vs production.” That keeps the site-architecture skill focused and produces better technical decisions on the second pass.
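The robots.txt step of that verification can be scripted with Python's standard library. The policy text and URLs below are illustrative stand-ins for your own site:

```python
from urllib import robotparser

# Hypothetical robots.txt for a production site; paths are illustrative.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Allow: /

User-agent: GPTBot
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Spot-check representative URLs against the policy before shipping it.
checks = [
    ("Googlebot", "https://example.com/blog/post-1"),
    ("Googlebot", "https://example.com/admin/settings"),
    ("GPTBot", "https://example.com/blog/post-1"),
]
for agent, url in checks:
    print(f"{agent} -> {url}: {rp.can_fetch(agent, url)}")
```

Running a handful of representative URLs through a check like this after each iteration turns "check robots.txt" from a manual review into repeatable evidence.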
