rules-distill

by affaan-m

rules-distill is a maintenance skill for Skill Authors and prompt library curators. It scans installed skills, distills repeated patterns into reusable rules, and helps you append, revise, or create rule files with less guesswork than a generic review prompt.

Stars: 156.2k
Favorites: 0
Comments: 0
Added: Apr 15, 2026
Category: Skill Authoring
Install Command
npx skills add affaan-m/everything-claude-code --skill rules-distill
Curation Score

This skill scores 68/100, which means it is list-worthy but best presented with caveats. Directory users get a real, non-placeholder workflow for distilling cross-cutting rules from installed skills, with scripted inventory collection plus explicit phases for analysis and rule updates. The main limitation is that the repo gives enough structure to trigger the skill, but not enough end-to-end operational detail to make adoption fully turnkey.

Strengths
  • Clear use case: periodic rules maintenance by scanning skills and distilling repeated principles into rule files.
  • Good operational structure: documented phases for deterministic inventory, LLM cross-read analysis, and appending/revising/creating rules.
  • Helpful automation evidence: bundled scripts for scanning skills and rules, with JSON-oriented output and repo/file references.
Cautions
  • Operational details are incomplete in the excerpted workflow, so agents may still need some judgment to execute the batching and verdict steps.
  • No install command is provided in SKILL.md, which makes setup/discovery less immediate for directory users.
Overview

What rules-distill does

rules-distill is a maintenance skill for turning repeated patterns across installed skills into reusable rules. It is built for the moment when you notice the same guidance appearing in multiple places and want it consolidated into a cleaner rule set instead of left as scattered prompt debt.

Who should install it

rules-distill fits Skill Authors, prompt library maintainers, and anyone curating a growing .claude/skills setup. It is most useful when you already have several skills installed and need a repeatable way to decide what should become a rule, what should be revised, and what should be added.

Why it stands out

The main differentiator is its split between deterministic collection and LLM judgment. rules-distill does the scanning first, then uses the model to cross-read the full context and produce a verdict. That makes it more install-worthy than a vague “review my skills” prompt because the workflow is explicitly designed to reduce missed coverage and ad hoc judgment.

When it is a good fit

Use rules-distill when your rules feel incomplete, after a skill stocktake, or on a periodic maintenance cycle. It is a better fit for rule governance than for one-off skill creation, and it is strongest when the source set is large enough that manual reading would be slow or inconsistent.

How to Use rules-distill skill

Install and locate the skill

Run the rules-distill install step with the repository’s skill loader, then treat the installed path as the working context for the skill. The canonical install command in the repo is:
npx skills add affaan-m/everything-claude-code --skill rules-distill

Start with the files that control behavior

For a practical rules-distill usage flow, read SKILL.md first, then inspect scripts/scan-skills.sh and scripts/scan-rules.sh. Those scripts show what the skill actually inventories and how it structures input, which matters more than the high-level description if you want reliable results.
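As a mental model for what a deterministic inventory pass produces (this is a hypothetical Python sketch, not the repo's actual scan-skills.sh; the directory layout and JSON field names are assumptions):

```python
import json
from pathlib import Path


def scan_skills(root: Path) -> list[dict]:
    """Walk a skills directory and emit a JSON-friendly inventory.

    Hypothetical sketch: assumes each installed skill is a folder
    directly under `root` containing a SKILL.md file.
    """
    inventory = []
    for skill_md in sorted(root.glob("*/SKILL.md")):
        inventory.append({
            "skill": skill_md.parent.name,   # folder name doubles as skill id
            "path": str(skill_md),
            "bytes": skill_md.stat().st_size,
        })
    return inventory


if __name__ == "__main__":
    # Assumed install location; adjust to wherever your loader places skills.
    root = Path.home() / ".claude" / "skills"
    print(json.dumps(scan_skills(root), indent=2))
```

The point of the deterministic pass is that the model never has to remember which files exist: the inventory is complete by construction, and the LLM only judges content.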

Give it a real maintenance brief

A strong prompt for rules-distill should name the target scope, the change goal, and the constraint. For example: “Scan my installed skills, identify cross-cutting principles that appear in at least three skills, and propose rule additions only for patterns that would change future outputs.” That is better than “improve my rules” because it tells the skill what counts as a rule-worthy pattern.

Use the workflow the skill expects

The repo’s guidance is built around inventory first, then cross-reading. In practice, let the skill collect the skill list and rules index before asking for decisions. If you already know the output format you want, say so up front: append to an existing rule, revise outdated content, or create a new rule file. That reduces back-and-forth and helps the skill choose the right action instead of only summarizing findings.
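The decision step can be pictured as a small function over the inventory. This is an illustrative sketch under assumed data shapes (the skill itself makes this call with LLM judgment, not a lookup table): observations are (skill, pattern) pairs from the scan, and recurrence plus coverage determine the action:

```python
from collections import Counter


def rule_verdicts(observations, existing_rules, threshold=3):
    """Map repeated pattern observations to an action per pattern.

    Hypothetical sketch of the verdict step: `observations` is a list of
    (skill_name, pattern) pairs from the inventory pass; `existing_rules`
    maps pattern -> current rule text. A pattern must recur across at
    least `threshold` observations before it triggers a rule change.
    """
    counts = Counter(pattern for _, pattern in observations)
    verdicts = {}
    for pattern, n in counts.items():
        if n < threshold:
            verdicts[pattern] = "skip"    # weak signal, no rule change
        elif pattern in existing_rules:
            verdicts[pattern] = "revise"  # rule exists, refresh or append
        else:
            verdicts[pattern] = "create"  # recurring and uncovered
    return verdicts
```

Naming the desired action up front (append, revise, create) effectively fixes the branch you want this decision to land on, which is why it reduces back-and-forth.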

rules-distill skill FAQ

Is rules-distill only for large repositories?

No. It becomes more valuable as the number of installed skills grows, but the rules-distill skill still helps smaller setups when you want a disciplined way to decide whether a pattern deserves a rule. If you only have one or two skills, a simpler prompt may be enough.

How is this different from a normal prompt?

A normal prompt can ask an LLM to “find patterns,” but rules-distill adds a repeatable collection phase plus script-backed inventory. That means less dependence on memory, less sampling bias, and fewer missed files. For users who care about consistency, that is the main reason to choose the skill.

Do beginners need to understand the scripts first?

Not fully, but they should know what the scripts collect and why. Beginners can use the skill by following the install and inventory steps, then reading the two scanner scripts for confidence. If you skip that context, you may ask for a rule change before you know whether the evidence is broad enough.

When should I not use rules-distill?

Do not use it for one-off prompt polishing, narrow code edits, or tasks that do not require rule governance. It is also a poor fit if your source material is too small to support cross-cutting patterns. In those cases, the rules-distill install adds process without enough payoff.

How to Improve rules-distill skill

Feed it better evidence

The strongest inputs name the skills, the problem pattern, and the threshold for action. Instead of “find useful rules,” try “find repeated conventions in onboarding, safety, and formatting skills, but only promote patterns that recur across multiple sources and affect output quality.” That gives rules-distill a concrete standard for inclusion.

Ask for the right kind of change

The skill is most useful when you specify whether the output should append, revise, or create. That choice matters because a repeated pattern is not always a new rule; sometimes it belongs as a correction to an existing one. Stating the action up front improves the rule-writing outcome more than asking for a longer analysis.

Watch for the common failure mode

The main failure mode is over-generalizing from a weak signal. If you want better rules-distill usage, require the model to cite repeated evidence before recommending a rule. This keeps the skill focused on cross-cutting principles rather than isolated preferences or stylistic quirks.
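One way to enforce that requirement mechanically is to drop any recommendation that cannot name multiple distinct source skills. A minimal sketch, assuming each recommendation carries a "sources" list (a shape this page invents for illustration):

```python
def filter_recommendations(recs, min_sources=2):
    """Keep only rule recommendations backed by repeated, cited evidence.

    Hypothetical shape: each rec is a dict with a "sources" list naming
    the skills where the pattern was observed. Duplicates within one
    skill do not count as independent evidence.
    """
    return [r for r in recs if len(set(r.get("sources", []))) >= min_sources]
```

A recommendation citing the same skill twice, or citing nothing, is filtered out before it ever becomes a rule.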

Iterate after the first pass

Use the first distillation to identify gaps, then rerun with narrower questions: “Which rule is duplicated?”, “Which rule is outdated?”, or “Which recurring behavior is still missing?” That feedback loop is the fastest way to make rules-distill produce sharper, more maintainable rule files over time.

Ratings & Reviews

No ratings yet