prompt-optimizer
by affaan-m. prompt-optimizer is a skill that analyzes rough prompts, finds missing context, and rewrites them into clearer, ready-to-paste prompts. It is best for prompt review and prompt-writing guidance, especially when you need better structure for Claude Code or ECC workflows. It does not execute the underlying task.
This skill scores 78/100, which means it is a solid listing candidate for directory users: it has clear trigger rules, a well-defined prompt-optimization workflow, and enough operational guidance to reduce guesswork versus a generic prompt. Users should still expect a narrowly scoped advisory skill rather than a task-execution tool.
- Explicit trigger and non-trigger rules make it easy for agents to invoke correctly, including English and Chinese variants.
- The skill explains its advisory workflow clearly: analyze intent, identify gaps, match ECC components, and produce a ready-to-paste optimized prompt.
- Substantial body content with headings, constraints, and examples suggests real operational guidance rather than a placeholder.
- It is explicitly advisory only and does not execute the requested task, so it is limited to prompt rewriting/analysis use cases.
- No install command, scripts, or support files are provided, so adoption relies entirely on reading and following SKILL.md.
Overview of prompt-optimizer skill
What prompt-optimizer does
The prompt-optimizer skill turns a rough prompt into a stronger, ready-to-paste version. It is designed for prompt review, gap-finding, and rewrite work, not for executing the underlying task. If you need a cleaner ask for Claude Code or another AI workflow, the prompt-optimizer skill helps you shape intent, constraints, and output format before you run the task.
Who it is best for
Use prompt-optimizer if you already know what you want but your prompt is vague, incomplete, or easy for an AI to misread. It is especially useful for people writing prompts for coding tasks, agent workflows, or structured outputs where missing details cause bad results. It is less useful when you want the model to simply do the work immediately.
Main differentiator
The key value of prompt-optimizer is that it focuses on prompt quality, not task completion. It checks whether the request is specific enough, flags missing context, and aligns the ask with ECC ecosystem components like skills, commands, agents, and hooks. That makes it a practical prompt-optimizer guide for users who care about better downstream execution rather than generic rewriting.
How to Use prompt-optimizer skill
Start with the right install context
To install prompt-optimizer, copy the skill into your Claude Code skills directory from the repository path skills/prompt-optimizer. The repo does not ship extra scripts or support folders, so the skill itself is the primary source of behavior. Start by reading SKILL.md and treat the frontmatter and trigger rules as the contract for when the skill should activate.
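The install flow can be sketched as below. The commands use a throwaway sandbox so they are safe to run as-is; in a real install, replace the sandbox paths with your local clone of the repository and your actual Claude Code skills directory (commonly ~/.claude/skills).

```shell
# Sandbox stand-ins for the real paths — replace these with your own.
SANDBOX="$(mktemp -d)"
REPO_DIR="$SANDBOX/repo"              # stand-in for your local clone of the repo
SKILLS_DIR="$SANDBOX/.claude/skills"  # stand-in for ~/.claude/skills

# Simulate the cloned repo's skills/prompt-optimizer folder.
mkdir -p "$REPO_DIR/skills/prompt-optimizer"
printf -- "---\nname: prompt-optimizer\n---\n" > "$REPO_DIR/skills/prompt-optimizer/SKILL.md"

# The actual install step: copy the skill folder into the skills directory.
mkdir -p "$SKILLS_DIR"
cp -r "$REPO_DIR/skills/prompt-optimizer" "$SKILLS_DIR/"

# Confirm SKILL.md landed where Claude Code can discover it.
ls "$SKILLS_DIR/prompt-optimizer/SKILL.md"
```

Because the skill is a single folder with no build step, the copy is the whole install; verifying that SKILL.md exists in the destination is the only check you need.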
Give it a draft, goal, and constraints
The best prompt-optimizer usage starts with a rough prompt plus the real target outcome. Include the task, audience, required output format, constraints, and any do-not-do rules. A weak input is: “make this prompt better.” A stronger input is: “Rewrite this prompt for Claude Code so it outputs a concise Python refactor plan, preserves existing behavior, and asks clarifying questions only if the API contract is unclear.” The second version gives the skill material it can actually optimize.
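One way to package that input is a short fielded template. The field names below are illustrative, not a format the skill requires:

```
Task: Rewrite this prompt for Claude Code.
Audience: a developer who will run the optimized prompt directly.
Output format: a concise, bulleted Python refactor plan.
Constraints: preserve existing behavior.
Do not: ask clarifying questions unless the API contract is unclear.
Draft prompt: <paste your rough prompt here>
```

Any structure that separates the draft from the goal, constraints, and output format gives the skill the same material to work with.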
Read SKILL.md first
Because this repository is intentionally lean, the fastest path is to read SKILL.md first, then inspect the trigger rules, the "When to Use" and "Do Not Use When" sections, and the analysis workflow. Those parts tell you what counts as a valid prompt-optimization request and where the skill should refuse to help. If you are adapting the skill for your own environment, mirror those boundaries instead of converting it into a generic prompt rewriter.
Use a two-pass workflow
First pass: submit the draft prompt and ask for a critique plus a rewritten version. Second pass: feed back what the model missed, such as missing scope, output length, format, or tool constraints. This loop is the most reliable way to improve prompt precision, especially when the first draft is ambiguous or overloaded.
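In practice, the two passes can look like the exchange below; the wording is illustrative:

```
Pass 1:
  Here is my draft prompt: <draft>
  Critique it, list any missing context, and return a rewritten version.

Pass 2:
  Your rewrite missed two constraints: output length (one page max)
  and the required format (markdown bullets, no tables).
  Revise the prompt again with those constraints stated explicitly.
```

Keeping each pass to one focused correction makes it easy to see whether the rewrite actually absorbed the feedback.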
prompt-optimizer skill FAQ
Is prompt-optimizer for execution or rewriting?
It is for rewriting. The prompt-optimizer skill analyzes a request and improves the prompt, but it should not be used when you want the model to directly perform the task. If your goal is “just do it,” this skill is the wrong fit.
How is it different from a normal prompt edit?
A normal prompt edit often only improves wording. prompt-optimizer is more structured: it looks for missing intent, unclear scope, and the right ECC component to invoke. That makes it more useful when you need an AI-ready prompt rather than a nicer sentence.
When should I not use prompt-optimizer?
Do not use it for code refactoring, performance tuning, or any request where “optimize” means improve the software itself. The skill’s trigger rules explicitly exclude those cases. It is also a poor fit if you already have a precise, complete prompt and do not need revision.
Is it beginner friendly?
Yes, if you can paste a draft and say what result you expect. You do not need deep ECC knowledge to benefit from prompt-optimizer; the skill is meant to surface that structure for you. It works best when you can provide even minimal context about the task and desired output.
How to Improve prompt-optimizer skill
Provide better input, not just more input
The biggest quality gain comes from clearer source prompts. Include the task type, target model or environment, audience, output structure, and hard constraints. For example, “write a one-page plan with bullets and risks” is better than “make this better” because it gives prompt-optimizer something concrete to preserve.
State the failure mode you want to avoid
If the prompt keeps producing the wrong answer, say why. Common failure modes are too much verbosity, missing edge cases, wrong tool assumptions, or skipping clarifying questions. Naming the failure helps prompt-optimizer rewrite around the real problem instead of only polishing language.
Ask for a rewritten prompt plus rationale
The most useful output is usually a revised prompt and a short explanation of what changed. That lets you decide whether the rewrite improved scope, structure, or constraints before you paste it into your next run. If the optimized version is still off, iterate by tightening one missing piece at a time.
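A useful output shape to request looks like the following. The structure is a suggestion for your ask, not a format the skill guarantees, and the rationale items are hypothetical examples:

```
Optimized prompt:
  <rewritten, ready-to-paste prompt>

What changed:
  - Stated the required output format explicitly (bulleted plan, one page max)
  - Narrowed scope from "the codebase" to the module actually being changed
  - Promoted "preserve existing behavior" from an implied goal to a hard constraint
```

Scanning the "What changed" list before pasting the prompt tells you whether the rewrite touched scope, structure, or constraints, which is the signal you need for the next iteration.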
Keep the skill aligned with its trigger rules
The prompt-optimizer skill improves results when the request is genuinely about prompt design. If you are mixing prompt help with direct task execution, split them into separate steps. That keeps the skill focused and makes the final prompt easier for an agent to follow.
