anti-distill is a rewriting skill that classifies skill files, work guides, and persona docs, then rewrites them into cleaned versions with core know-how removed. It supports manual installation in Claude Code or OpenClaw and runs via /anti-distill on files, folders, or pasted content.

Stars: 247
Favorites: 0
Comments: 0
Added: Apr 3, 2026
Category: Rewriting
Install Command
npx skills add leilei926524-tech/anti-distill --skill anti-distill
Curation Score

This skill scores 68/100, which means it is listable for directory users as a real, reusable workflow, but with notable fit and ethics caveats. The repository gives a triggerable, step-based process, concrete prompt assets, examples, and install docs, so an agent can likely run it with less guesswork than a generic prompt. However, users should install it only if they specifically want document "dilution" rather than faithful skill authoring.

Strengths
  • Strong triggerability: SKILL.md includes explicit invocation phrases, argument hints, supported input modes, and allowed tools.
  • Good operational substance: classifier and diluter prompt files define concrete transformation rules for work, persona, and general documents.
  • Helpful install-decision evidence: README, INSTALL.md, and before/after examples show outputs, intensity levels, and supported formats.
Cautions
  • Workflow clarity is still partly manual: the skill asks the user to choose cleaning intensity and does not show a complete end-to-end output contract in the excerpt.
  • Narrow and potentially controversial use case: the repo is optimized for removing core know-how from employee skills, so it is not a general knowledge-cleanup tool.

Overview of anti-distill skill

What anti-distill does

The anti-distill skill is a document-rewriting skill for turning a detailed employee skill file, work guide, or persona doc into a “looks complete, but core knowledge removed” version. Its practical job is not general summarization; it classifies what is safe to keep, what should be diluted, what should be removed, and what should be masked, then rewrites accordingly.

Who should use anti-distill

anti-distill is best for people who have already written internal skill documentation and want a cleaner external-facing or review-safe version. It is especially relevant for Rewriting tasks involving SKILL.md, work.md, persona.md, SOPs, handoff docs, or mixed Markdown/TXT/PDF inputs. If you need faithful preservation of expert detail, this is the wrong tool.

What makes this anti-distill skill different

The main differentiator is that anti-distill has an explicit classification-and-rewrite workflow instead of relying on a vague “make this less specific” prompt. The repo includes separate prompt files for classification and for different document types: prompts/classifier.md, prompts/diluter_work.md, prompts/diluter_persona.md, and prompts/diluter_general.md. That structure reduces guesswork when rewriting work knowledge versus persona traits.

Key tradeoffs before you install

anti-distill intentionally lowers usefulness. Good output should stay structurally credible and technically non-wrong, but become less executable, less specific, and less transferable. That is the point. The risk is over-cleaning into obvious fluff or under-cleaning and leaving behind operational know-how, thresholds, escalation paths, or decision logic.

How to Use anti-distill skill

Install anti-distill in Claude Code or OpenClaw

The repo provides manual install instructions rather than a package installer. In Claude Code, clone it into the project-level or global skills directory:

mkdir -p .claude/skills
git clone https://github.com/leilei926524-tech/anti-distill.git .claude/skills/anti-distill

Or globally:

git clone https://github.com/leilei926524-tech/anti-distill.git ~/.claude/skills/anti-distill

For OpenClaw:

git clone https://github.com/leilei926524-tech/anti-distill.git ~/.openclaw/workspace/skills/anti-distill

Then invoke with /anti-distill. No extra Python dependencies are required.
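To confirm the clone landed where the host looks for skills, a quick check like the following works. This is a sketch, not part of the repo; the path assumes the project-local Claude Code install above, so adjust it for the global or OpenClaw locations.

```shell
# Check that the project-local install produced the expected entry point.
SKILL_DIR=".claude/skills/anti-distill"
if [ -f "$SKILL_DIR/SKILL.md" ]; then
  STATUS="installed"
else
  STATUS="missing"
fi
echo "anti-distill: $STATUS"
```

If the check reports "missing", re-run the git clone step and verify the target path.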

What input anti-distill needs

The anti-distill skill accepts:

  • a direct file path
  • a colleague-skill folder such as colleagues/{slug}/
  • pasted document content
  • local file search when you ask it to find skill files

It is designed to detect both colleague-skill patterns like work.md + persona.md or ## Layer 0, and generic docs like Markdown, TXT, or PDF. Best inputs are docs with concrete rules, thresholds, examples, decision branches, named people, incident memory, or “how we really do it” content, because that is where anti-distill adds the most value.
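As an illustration, a minimal colleague-skill folder in the shape anti-distill detects can be sketched like this. The folder name mirrors the repo's zhangsan example, but the file contents below are invented for demonstration:

```shell
# Sketch a minimal colleague-skill folder (work.md + persona.md pattern).
mkdir -p colleagues/zhangsan

# Invented work knowledge with a concrete threshold, the kind of
# detail anti-distill targets for dilution or removal.
cat > colleagues/zhangsan/work.md <<'EOF'
## Layer 0
Escalate to the on-call channel when the error rate stays above 2% for 10 minutes.
EOF

# Invented persona traits; distinctive behavior cues are also in scope.
cat > colleagues/zhangsan/persona.md <<'EOF'
Direct tone; prefers data over anecdotes; pushes back on vague asks.
EOF

ls colleagues/zhangsan
```

A folder like this, passed as colleagues/zhangsan/, exercises both the work and persona rewrite paths.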

How to prompt anti-distill well

A weak request is: “clean this skill.”

A stronger anti-distill usage prompt is:

  • what file to read
  • what document type it is
  • how aggressive to be
  • what output files you want

Example:
/anti-distill Read colleagues/zhangsan/. This is a colleague-skill with work and persona content. Use medium cleaning. Keep structure and formatting, but remove concrete thresholds, troubleshooting memory, escalation shortcuts, and highly distinctive behavior cues. Generate a cleaned version plus a private backup of removed knowledge.

This works better because the repo’s logic centers on cleaning intensity and on identifying high-value knowledge classes such as pitfalls, judgment heuristics, human-network knowledge, and hidden workflows.

Best workflow and files to read first

If you want to understand anti-distill before trusting it, read in this order:

  1. README.md for the product intent and output model
  2. INSTALL.md for install paths
  3. SKILL.md for triggers, tool rules, and main flow
  4. prompts/classifier.md for the label system: [SAFE], [DILUTE], [REMOVE], [MASK]
  5. the relevant diluter prompt by document type
  6. examples/zhangsan_before_after.md to calibrate output quality

In practice, start with medium intensity, review what survived, then rerun only the weak sections. Heavy cleaning is faster but more likely to produce obvious corporate filler.
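To picture what the classification pass does, here is a hypothetical labeled fragment. The four labels come from prompts/classifier.md, but the exact output format and the content below are illustrative assumptions, not taken from the repo:

```text
[SAFE]   ## Deployment checklist
[DILUTE] Roll out to 5% of traffic first, then 50%, then full.
[REMOVE] Last outage was caused by the cache warmer; restart it before scaling.
[MASK]   Ping Alice Chen in #infra-oncall if the rollout stalls.
```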

anti-distill skill FAQ

Is anti-distill better than a normal rewriting prompt?

Usually yes, if your goal is controlled degradation rather than simple paraphrasing. A generic prompt often keeps too many specifics or deletes too much structure. anti-distill is more reliable because it separates classification from rewriting and has document-specific rewrite rules.

Is anti-distill suitable for beginners?

Yes, if you are comfortable reviewing rewritten docs. The anti-distill guide is straightforward: install, invoke, choose intensity, review outputs. The harder part is judgment—knowing which surviving lines still reveal too much. Beginners should compare output against the included before/after example before using it on important files.

When should you not use anti-distill?

Do not use anti-distill for public documentation, onboarding, or any knowledge base where operational usefulness matters. It is also a poor fit for tiny docs with no real specifics; if the source is already generic, anti-distill has little to improve and may only make it weaker.

Does anti-distill fit multilingual and mixed repositories?

Yes. The skill explicitly supports English and Chinese and responds in the user’s language. It also works across mixed file formats and can read images or PDFs through the host tool’s native reading support, but the quality still depends on how clearly the original document exposes concrete know-how.

How to Improve anti-distill skill

Give anti-distill better source material

anti-distill performs best on rich, specific source text. If your input includes exact thresholds, incident lessons, real decision criteria, sample dialogue, or named coordination paths, the skill can make sharper keep/dilute/remove decisions. If the source is already vague, the output will not improve much because there is little meaningful signal to transform.

Watch the main failure modes

The biggest anti-distill failure modes are:

  • preserving actionable detail by accident
  • replacing too much with obvious nonsense
  • flattening the voice so aggressively that the doc reads as suspicious
  • missing hidden knowledge embedded in examples or dialogue

Pay extra attention to numbers, “if X then Y” branches, named owners, and postmortem-style explanations. These often carry more value than section titles suggest.
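A hypothetical before/after pair illustrates the target degradation. The content is invented for this article, not drawn from the repo's example file:

```text
Before: If p99 latency exceeds 250 ms for 5 minutes, page the DB on-call
        directly and skip the ticket queue.
After:  If latency degrades beyond agreed thresholds, escalate through the
        standard on-call process.
```

The rewritten line stays technically non-wrong and structurally plausible, but the threshold, the shortcut, and the named escalation path are gone.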

Improve prompts with explicit rewrite constraints

For better anti-distill usage, tell it what to preserve and what to target. Good constraints include:

  • preserve headings, lists, and section order
  • keep terminology technically correct
  • remove concrete limits, internal names, and root-cause memory
  • generalize examples without changing topic coverage
  • keep output within similar length

Those constraints align with the repo’s own quality bars such as structure retention and roughly similar length.
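Folding those constraints into a single invocation might look like this; the file path is a placeholder, and the wording is a sketch rather than a prescribed prompt format:

```text
/anti-distill Read docs/incident-runbook.md. General document, medium cleaning.
Preserve headings, lists, and section order; keep terminology technically
correct. Remove concrete limits, internal names, and root-cause memory.
Generalize examples without changing topic coverage, and keep the output
at a similar length.
```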

Iterate after the first pass

Do not treat the first anti-distill run as final. Review the cleaned version and mark surviving lines that still teach real judgment. Then rerun targeted sections, for example: “Re-clean only the ‘CR 重点’ (CR focus) and ‘经验知识库’ (experience knowledge base) sections; these still reveal specific execution standards.” This section-by-section iteration usually produces better results than jumping the whole document from medium to heavy cleaning in one pass.
