reducing-entropy
by softaworks

The reducing-entropy skill is a manual-only refactoring aid for shrinking codebases. Read SKILL.md and at least one reference mindset first, then use it deliberately to favor deletion, simpler end states, and less total code.
This skill scores 71/100: acceptable to list for directory users who want a clear philosophical framework for aggressive code simplification, though they should expect a lightweight manual process rather than a tightly operational workflow.
- Very clear install fit: it explicitly targets codebase simplification, deletion, and minimizing final code size.
- Strong trigger constraints reduce misuse: the skill repeatedly says not to auto-apply and to use it only when explicitly requested.
- Reference mindsets add reusable decision principles like simplicity vs easy and data over abstractions, giving agents better calibration than a generic 'simplify this' prompt.
- Manual-only gating narrows triggerability because it says to activate only on explicit user request.
- The guidance is philosophy-heavy and has no concrete step-by-step example, line-count method, or executable support files.
Overview of reducing-entropy skill
The reducing-entropy skill is a manual-only refactoring aid for shrinking a codebase on purpose. It is not a general cleanup prompt and it should not trigger automatically. Use it when the real goal is fewer moving parts, less total code, and more deletion bias than a normal refactor would produce.
Who this skill is best for
reducing-entropy is best for people who are:
- refactoring mature code that keeps growing
- reviewing “cleanup” plans that may secretly add abstractions
- deciding whether a feature, layer, or helper should exist at all
- trying to simplify architecture, not just reorganize it
It is especially relevant for refactoring work, where the wrong instinct is often to introduce new structure instead of removing existing structure.
The real job-to-be-done
This skill helps answer a harder question than “what change should I make?” It asks:
- what should the codebase look like after the change
- whether the final state is actually smaller
- what can be deleted instead of preserved
That makes it more useful than a generic “simplify this code” prompt when you want a strong bias toward net reduction.
What makes reducing-entropy different
The main differentiator is its success metric: final code amount, not implementation effort.
That means it will favor outcomes like:
- writing a small migration that removes a large subsystem
- replacing custom types with simpler data structures
- deleting optional behaviors instead of generalizing them
- rejecting “cleaner” designs that increase total code
Important adoption constraint
This is not a safe default for every task. The repository explicitly frames reducing-entropy as manual-only and meant for explicit user intent. If your team values extensibility, future-proofing, or interface stability above code reduction in a given task, this skill may push too hard toward deletion.
What to read before deciding to use it
Read these files first:
- skills/reducing-entropy/SKILL.md
- skills/reducing-entropy/README.md
- skills/reducing-entropy/references/simplicity-vs-easy.md
Then sample one or two additional reference mindsets depending on your situation:
- references/data-over-abstractions.md
- references/design-is-taking-apart.md
- references/expensive-to-add-later.md
Those references matter because the skill expects you to ground decisions in at least one simplification mindset before acting.
How to Use reducing-entropy skill
reducing-entropy install and setup
If you use the Skills CLI pattern from this repository, install the skill with:
npx skills add softaworks/agent-toolkit --skill reducing-entropy
Then open the installed skill folder and read SKILL.md before first use. This is not a plug-and-play automation skill; it is a decision framework you invoke deliberately.
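Before first use, it is worth confirming the skill actually landed with its reference files. The layout below (a `skills/reducing-entropy/` folder with a `references/` subfolder) is an assumption based on the file paths mentioned in this article; this sketch mocks that layout so it is self-contained, and the `ls` at the end is the part you would run against a real install.

```shell
# Sanity-check an installed skill: SKILL.md plus reference mindsets.
# The install path is an assumption; adjust to wherever your CLI puts skills.
set -eu

# Mock the expected layout so the snippet is self-contained.
mkdir -p skills/reducing-entropy/references
touch skills/reducing-entropy/SKILL.md
for f in simplicity-vs-easy data-over-abstractions design-is-taking-apart expensive-to-add-later; do
  touch "skills/reducing-entropy/references/$f.md"
done

# On a real install, this is the check that matters:
ls skills/reducing-entropy/references
n=$(ls skills/reducing-entropy/references | wc -l | tr -d ' ')
echo "reference mindsets found: $n"
```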
Start with the mandatory reference mindset
A practical detail many users miss: reducing-entropy tells you to load at least one file from references/ before proceeding. Do that first, and state which one you are using.
Good pairings:
- use simplicity-vs-easy.md when a familiar pattern is tempting but heavy
- use data-over-abstractions.md when the code has wrappers, managers, or custom types everywhere
- use design-is-taking-apart.md when responsibilities are tangled
- use expensive-to-add-later.md when deletion may conflict with future retrofit cost
This step improves output quality because it gives the model a concrete simplification lens instead of a vague “make it cleaner” goal.
What input reducing-entropy needs
For useful output, provide more than a repo link or a file dump. The skill works best when you give:
- the user-approved goal
- the current behavior that must remain
- the part of the codebase in scope
- constraints on API, migrations, or deadlines
- whether deletion is allowed across files, modules, or features
Strong input example:
“Use reducing-entropy on our billing retry flow. Goal: preserve current retry behavior for Stripe failures, but reduce total code in services/billing/ and workers/retry/. You may remove dead configuration paths and duplicate helper layers. Do not change public API responses or database schema this week.”
That is far better than:
“Refactor billing to be simpler.”
Turn a rough goal into a strong reducing-entropy prompt
A good reducing-entropy usage prompt usually has five parts:
- explicit activation
- target scope
- protected behavior
- deletion permission
- output format
Example:
“Apply the reducing-entropy skill. Load one reference mindset first and tell me which one you chose. Analyze src/cache/ and src/session/ for the smallest codebase that still supports current login/session behavior. Prefer deletion over reorganization. Reject options that increase total code even if they look cleaner. Give me:
- the smallest end-state design
- what to delete
- what to merge
- risks
- rough before/after code footprint”
Suggested workflow for real refactoring work
A reliable workflow is:
- read SKILL.md
- choose one reference mindset
- inspect the current module boundaries
- list behavior that must survive
- ask the three core questions from the skill
- produce 2–3 candidate end states
- compare them by net code reduction
- implement the smallest viable result
- re-check for leftover abstractions and dead paths
This prevents a common failure mode: jumping into edits before deciding what the smallest surviving design is.
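The "compare them by net code reduction" step can be made concrete with plain `wc`; tools like cloc give richer numbers, but a line count is enough to keep candidates honest. The `before/` and `after/` directories here are hypothetical snapshots, populated with a toy example so the sketch runs as-is.

```shell
# Compare the total line count of two snapshots of a module.
set -eu

# Toy data: a "before" tree with a pass-through helper,
# and an "after" tree where the helper was deleted.
mkdir -p before after
printf 'def helper(x):\n    return x\n\ndef use(x):\n    return helper(x)\n' > before/mod.py
printf 'def use(x):\n    return x\n' > after/mod.py

count() { find "$1" -name '*.py' -exec cat {} + | wc -l | tr -d ' '; }

b=$(count before)
a=$(count after)
echo "before: $b lines, after: $a lines, net: $((b - a)) lines deleted"
```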
The three questions to force in every run
The repository centers the skill around three checks. In practice, make them explicit in your prompt:
- What is the smallest codebase that solves this?
- Does the change result in less total code?
- What can we delete?
If you do not force those questions, the output often drifts back to standard refactoring advice.
Where reducing-entropy works best
Best-fit tasks include:
- collapsing duplicate modules into one simpler path
- removing wrappers, factories, managers, and thin abstractions
- replacing custom structures with plain data plus functions
- deleting underused features or configurability
- simplifying a tangled subsystem before adding new work
This is why refactoring is the strongest fit for reducing-entropy: it is more about redesigning the end state than polishing local code style.
When not to use reducing-entropy
Avoid this skill when the primary job is:
- adding a new capability with uncertain future requirements
- preserving a stable extension surface for third parties
- designing foundational concerns that are expensive to retrofit
- making code more readable without permission to delete or merge behavior
In those cases, the deletion bias can become a mismatch rather than a strength.
Practical files to read first in the repository
For fastest understanding, follow this reading order:
- SKILL.md
- README.md
- references/simplicity-vs-easy.md
- references/design-is-taking-apart.md
- references/data-over-abstractions.md
Read adding-reference-mindsets.md only if you want to understand how the authors think about the philosophical support files.
Tips that materially improve output quality
Three tactics make the biggest difference:
- Ask for a smallest end-state architecture before asking for code changes.
- Require explicit deletions, not just simplifications.
- Make the model estimate what disappears: files, functions, classes, branches, configs.
That turns the skill from a style nudge into an actual reduction exercise.
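One way to make "estimate what disappears" concrete is a rough grep inventory of files, functions, classes, and branches. The patterns below assume a Python codebase and are heuristics, not real parses; the `demo/` directory is a made-up example so the sketch is self-contained.

```shell
# Rough inventory of a Python tree: run on before- and after-states
# to quantify what actually disappeared.
set -eu

mkdir -p demo
printf 'class Manager:\n    def run(self, x):\n        if x:\n            return x\n        return None\n' > demo/a.py

files=$(find demo -name '*.py' | wc -l | tr -d ' ')
funcs=$(grep -rhcE '^[[:space:]]*def ' demo | awk '{s+=$1} END {print s}')
classes=$(grep -rhcE '^[[:space:]]*class ' demo | awk '{s+=$1} END {print s}')
branches=$(grep -rhcE '^[[:space:]]*(if|elif|for|while) ' demo | awk '{s+=$1} END {print s}')
echo "files=$files functions=$funcs classes=$classes branches=$branches"
```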
reducing-entropy skill FAQ
Is reducing-entropy better than a normal refactor prompt?
Usually, yes, when your objective is net simplification. A generic prompt often proposes cleaner layering, better naming, or more reusable abstractions. reducing-entropy is better when those moves would grow the codebase and you want the model to resist them.
Is reducing-entropy suitable for beginners?
Yes, if the beginner already understands the current system well enough to state protected behavior and scope. The skill’s framework is simple, but good results depend on knowing what can be safely removed.
Does reducing-entropy only mean deleting code?
No. It can justify writing some code if that enables much more deletion overall. The key test is the final state. Small additions are acceptable if they replace larger structures.
Can I use reducing-entropy for greenfield work?
Usually not as the primary guide. It is stronger for pruning or simplifying an existing codebase than for designing a new one from scratch.
How does reducing-entropy compare with ordinary cleanup work?
Ordinary cleanup often optimizes local readability or organization. The reducing-entropy skill optimizes for fewer concepts, fewer structures, and less total code. Those goals overlap, but they are not the same.
What are the main risks before I install it?
The big risks are:
- deleting flexibility you actually need
- oversimplifying around future requirements
- measuring line count too mechanically
- removing structure that exists for real operational reasons
That is why the reference file expensive-to-add-later.md matters. It gives a principled exception to pure deletion bias.
Does reducing-entropy fit every repository?
No. It fits best where code growth is the problem. It is less suitable in heavily regulated, public-platform, or highly extensible systems where explicit structure may be part of the product requirement.
How to Improve reducing-entropy skill
Give reducing-entropy sharper boundaries
The fastest way to improve reducing-entropy usage is to define what must not change. Without that, the model may propose deleting valuable behavior.
Useful boundary statements include:
- “Preserve API shape.”
- “No schema changes.”
- “Keep test coverage expectations.”
- “User-visible behavior must stay identical.”
Clear boundaries let the skill be aggressive safely.
Ask for end-state comparisons, not a single answer
Instead of asking for one recommendation, ask for two or three candidate end states ranked by:
- total code reduction
- migration cost
- risk of breaking behavior
- maintenance burden
This exposes tradeoffs and helps you reject a “smallest” design that is too risky for now.
Provide codebase signals that reveal entropy
The skill improves when you point to signs of bloat, such as:
- duplicate logic across modules
- wrapper classes with minimal behavior
- config branches for unused modes
- helper layers that only forward calls
- custom types where plain data would work
These clues help the model target real simplification opportunities instead of performing cosmetic edits.
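A quick way to surface the duplicate-logic signal is to look for identical non-blank lines repeated across files. This is a coarse heuristic, nowhere near a real clone detector, but it is often enough to point the skill at a concrete target. The `src/` files are invented examples so the sketch runs standalone.

```shell
# Flag identical non-blank lines appearing in more than one place:
# a crude signal of duplicated logic across modules.
set -eu

mkdir -p src
printf 'total = price * qty * (1 + tax_rate)\n' > src/invoice.py
printf 'total = price * qty * (1 + tax_rate)\nother = 1\n' > src/report.py

dupes=$(grep -rhv '^[[:space:]]*$' src | sort | uniq -d)
echo "duplicated lines:"
echo "$dupes"
```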
Watch for common failure modes
The most common bad outputs are:
- reorganizing code into more files
- adding abstractions to “prepare for growth”
- preserving dead compatibility paths
- proposing cleaner names without reducing structure
- treating “less churn” as the goal
If you see these, restate the core metric: less total code in the final codebase.
Use the reference files strategically
Better results come from choosing the right mindset for the problem:
- use data-over-abstractions.md to challenge class-heavy designs
- use design-is-taking-apart.md to break apart mixed responsibilities
- use simplicity-vs-easy.md when the familiar solution is too coupled
- use expensive-to-add-later.md to defend the few things worth keeping
This is one of the strongest parts of the repository and worth using explicitly, not passively.
Ask for deletion candidates by category
A high-yield prompt pattern is:
“List deletion candidates by category: feature, abstraction, config, compatibility path, helper, type, and file.”
That structure pushes the model to look beyond local code edits and find broader reduction opportunities.
Iterate after the first output
After the first pass, ask follow-up questions like:
- “What remains that exists only to support the old design?”
- “Which abstractions are now redundant?”
- “What can be merged further without changing behavior?”
- “What would you remove if you had to cut this module by 30%?”
These second-round questions often surface the real gains.
Validate with net complexity, not just line count
Line count matters here, but do not use it blindly. The best improvements also reduce:
- concepts to learn
- module hops to trace behavior
- special cases
- branching paths
- dependency surface
A smaller codebase that is still tangled is only a partial win. The best reducing-entropy use combines deletion with decoupling.
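Of these, dependency surface is the easiest to check mechanically: count distinct imported modules before and after. The sketch below assumes Python-style import lines and uses a made-up `app/` tree; the first word after `import` or `from` is a good-enough proxy for the module name.

```shell
# Count distinct top-level modules a Python tree imports:
# one measurable proxy for dependency surface.
set -eu

mkdir -p app
printf 'import os\nimport json\nfrom os import path\n' > app/main.py
printf 'import json\n' > app/util.py

deps=$(grep -rhE '^(import|from) ' app | awk '{print $2}' | sort -u)
n=$(echo "$deps" | wc -l | tr -d ' ')
echo "distinct imported modules: $n"
echo "$deps"
```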
