prompt-engineering-patterns
by wshobson

prompt-engineering-patterns is a practical skill for production prompt design, covering install context, reusable templates, few-shot examples, structured outputs, and prompt optimization workflows for Context Engineering.
This skill scores 82/100, which means it is a solid directory listing candidate: agents get clear triggers, substantial operational content, and reusable prompt assets that provide more execution leverage than a generic prompt, though adopters should expect to assemble techniques rather than follow one tightly defined end-to-end workflow.
- Strong triggerability: SKILL.md explicitly says when to use it for prompt optimization, few-shot design, system prompts, structured outputs, and debugging inconsistent LLM behavior.
- High practical leverage: the repository includes reusable assets and references such as a prompt template library, few-shot example JSON, and an optimize-prompt.py script.
- Good progressive disclosure: the main skill introduces major patterns, then reference docs drill into concrete techniques like CoT, few-shot selection, prompt templates, optimization, and system prompt design with examples.
- Broad scope can increase guesswork: it covers many prompt engineering topics, but the evidence shows more pattern library/reference material than a single prescriptive execution flow.
- Some examples are conceptual and code-oriented rather than clearly integrated into one install-ready agent workflow, and SKILL.md shows no install command.
Overview of prompt-engineering-patterns skill
The prompt-engineering-patterns skill is a practical prompt design guide for building more reliable LLM workflows, not just a bag of generic prompting tips. It is best for people creating production prompts, structured extraction flows, reusable prompt templates, or evaluation loops where output consistency matters more than one-off creativity.
Who this skill is for
Use prompt-engineering-patterns if you are:
- designing prompts for apps, agents, or internal automation
- trying to reduce output drift, formatting failures, or weak reasoning
- choosing between few-shot examples, chain-of-thought, system prompts, and structured outputs
- turning ad hoc prompts into repeatable templates your team can maintain
If you only need a quick single-use prompt, this skill may be more than you need.
What job it helps you get done
The real job-to-be-done is to move from “the model sometimes works” to “the model usually behaves predictably enough to ship.” The repository does that by organizing prompt patterns around concrete use cases: few-shot learning, chain-of-thought prompting, JSON-style structured outputs, reusable templates, system prompt design, and prompt optimization workflows.
What makes it different from ordinary prompt advice
The main differentiator is that prompt-engineering-patterns is structured like an implementation playbook. It includes:
- reference docs for major prompting patterns
- example assets you can adapt immediately
- a prompt template library by task type
- a Python optimization script for iterative refinement
That makes it more useful for installation decisions than skills that only describe concepts without reusable artifacts.
What to check before adopting
This skill is strongest when you already know your task, output shape, and success criteria. It is weaker as a “tell me what to build” brainstorming aid. Before installing, ask:
- Do you need predictable formats or measurable improvements?
- Do you have sample inputs and expected outputs?
- Are you willing to test prompts against a small evaluation set?
If yes, prompt-engineering-patterns for Context Engineering is a good fit because it helps you formalize context, examples, constraints, and output contracts.
How to Use prompt-engineering-patterns skill
Install context for prompt-engineering-patterns
This skill lives in wshobson/agents under plugins/llm-application-dev/skills/prompt-engineering-patterns.
Install it with:
```bash
npx skills add https://github.com/wshobson/agents --skill prompt-engineering-patterns
```
Because the upstream SKILL.md does not provide an install command, treat the command above as the practical install path for prompt-engineering-patterns.
Read these files first
Start with the highest-signal files in this order:
1. SKILL.md
2. assets/prompt-template-library.md
3. assets/few-shot-examples.json
4. references/prompt-optimization.md
5. references/system-prompts.md
Then read deeper references only for the pattern you actually need:
- references/few-shot-learning.md
- references/chain-of-thought.md
- references/prompt-templates.md
This reading path saves time because the assets show what you can reuse immediately, while the references explain why those patterns work.
What input the skill needs from you
prompt-engineering-patterns works much better when you bring specific task inputs. At minimum, provide:
- the exact task
- target audience or operating role
- desired output format
- hard constraints
- 3 to 10 representative examples or test cases
- known failure cases
Weak input:
- “Improve this prompt.”
Strong input:
- “I need a support-ticket classifier. Labels are `billing`, `technical_issue`, `account_access`, and `other`. Output must be valid JSON with `label` and `confidence`. Common failures: mixing labels, adding prose, and mishandling multi-intent tickets.”
The second version gives the skill enough context to recommend the right pattern instead of generic rewrites.
Choose the right pattern for the task
Use the repository patterns selectively:
- Use few-shot examples when task behavior depends on format, style, or borderline decisions.
- Use chain-of-thought for multi-step reasoning, logic, or math-heavy tasks.
- Use structured outputs when downstream systems parse the result.
- Use system prompts when you need stable role, tone, safety, or behavioral boundaries.
- Use template systems when the same prompt is filled repeatedly with changing variables.
A common mistake is stacking all patterns at once. Start with the smallest pattern that solves the failure you actually have.
Turn a rough goal into a usable prompt brief
Before invoking the skill, rewrite your goal into five parts:
- Task: what the model must do
- Context: background or domain assumptions
- Constraints: things it must avoid or always include
- Output contract: exact format
- Examples: representative inputs and ideal outputs
Example brief:
Task: Extract entities from customer complaint emails.
Context: Emails may mention products, stores, dates, refund amounts, and staff names.
Constraints: Do not infer missing fields. Return empty arrays instead of null.
Output contract: Valid JSON with keys persons, products, locations, dates, monetary_values.
Examples: Include at least one email with no monetary value and one with multiple products.
This is the level of specificity that makes the prompt-engineering-patterns skill materially better than a generic “write me a prompt” request.
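If you maintain several briefs, it can help to render them programmatically. A minimal Python sketch, assuming the five-part structure above; the `render_brief` helper is our own illustration, not part of the repo:

```python
# Minimal sketch: render the five-part brief into a single prompt string.
# The brief structure follows this guide; the helper itself is hypothetical.
BRIEF_TEMPLATE = """Task: {task}
Context: {context}
Constraints: {constraints}
Output contract: {output_contract}
Examples:
{examples}"""

def render_brief(task, context, constraints, output_contract, examples):
    # `examples` is a list of (input, ideal_output) pairs.
    rendered = "\n".join(f"Input: {i}\nIdeal output: {o}" for i, o in examples)
    return BRIEF_TEMPLATE.format(task=task, context=context,
                                 constraints=constraints,
                                 output_contract=output_contract,
                                 examples=rendered)
```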
Use the template library as a starting point, not an endpoint
assets/prompt-template-library.md is most useful when treated as scaffolding. Copy a close template, then add:
- your real schema
- task-specific constraints
- edge-case handling
- refusal behavior for missing information
For example, the extraction templates become stronger if you explicitly say whether the model should omit unknown fields, return empty values, or quote source text spans.
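As an illustration, here is a hedged sketch of what an adapted extraction template might look like once those additions are explicit. The wording is ours, not copied from assets/prompt-template-library.md:

```python
# Hypothetical adaptation of an extraction template: the schema, constraints,
# and missing-field behavior are stated explicitly rather than implied.
EXTRACTION_PROMPT = """You extract entities from customer complaint emails.

Return valid JSON with exactly these keys, each an array of strings:
persons, products, locations, dates, monetary_values.

Rules:
- Do not infer fields that are not explicitly stated in the email.
- Return an empty array for any category with no matches; never use null.
- Quote entity text exactly as it appears in the email.
- Output the JSON object only, with no surrounding prose.

Email:
{email_text}
"""
```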
Apply few-shot examples with intent
The repo includes assets/few-shot-examples.json, which is valuable less for the exact examples and more for how examples are shaped. Good few-shot sets should:
- mirror your real input distribution
- cover edge cases, not just obvious positives
- keep label definitions consistent
- avoid noisy or contradictory examples
If your task fails on borderline cases, add examples for those boundaries first. That usually outperforms simply adding more average examples.
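A minimal sketch of how a few-shot set like assets/few-shot-examples.json might be folded into a chat request. The file layout shown here, a list of input/output records, is an assumption, not the repo's confirmed schema:

```python
import json

# Assumed layout: a list of {"input": ..., "output": ...} records.
# Verify against the actual assets/few-shot-examples.json before reuse.
with open("assets/few-shot-examples.json") as f:
    examples = json.load(f)

messages = [{"role": "system", "content": "Classify support tickets as JSON."}]
for ex in examples:
    # Each demonstration becomes a user/assistant turn pair, so the model
    # sees the exact output format it is expected to reproduce.
    messages.append({"role": "user", "content": ex["input"]})
    messages.append({"role": "assistant", "content": ex["output"]})
messages.append({"role": "user", "content": "My card was charged twice..."})
```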
Use chain-of-thought carefully in production
The repository’s references/chain-of-thought.md is useful for reasoning tasks, but not every production system should expose full reasoning traces. In practice:
- use explicit reasoning prompts for internal analysis and debugging
- use concise answer formats for user-facing outputs
- test whether chain-of-thought improves accuracy enough to justify extra tokens and latency
For many teams, the best production pattern is hidden internal reasoning plus a strict final answer format.
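One way to express that pattern is to request scratchpad reasoning but keep only a delimited final answer. A sketch, where the `<answer>` delimiter convention is our assumption rather than something the repo specifies:

```python
# Sketch: ask for scratchpad reasoning but keep only the final answer.
# The <answer> delimiter convention is an assumption, not from the repo.
COT_INSTRUCTIONS = (
    "Work through the problem step by step in a scratchpad.\n"
    "Then give the final answer alone between <answer> and </answer> tags,\n"
    'as a single JSON object: {"label": ..., "confidence": ...}.\n\n'
    "Problem:\n"
)

def extract_answer(model_output: str) -> str:
    # Discard the reasoning trace; only the delimited answer is kept for
    # user-facing output or downstream parsing. Raises ValueError if the
    # model omitted the tags, which is itself a useful failure signal.
    start = model_output.index("<answer>") + len("<answer>")
    end = model_output.index("</answer>")
    return model_output[start:end].strip()

# Usage: prompt = COT_INSTRUCTIONS + ticket_text
```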
Use the optimization script as a workflow signal
Together, scripts/optimize-prompt.py and references/prompt-optimization.md indicate the intended workflow: establish a baseline, test against a suite, analyze failures, refine, and repeat.
Even if you do not use the exact script, copy the process:
- define a baseline prompt
- build a small test set
- measure format validity and task accuracy
- inspect failure clusters
- revise one variable at a time
This is the biggest practical value in the repo: it pushes you toward measurable prompt improvement instead of endless subjective tweaking.
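Even a few lines of harness code make that loop concrete. A sketch, assuming a JSON classification task; `call_model` is a placeholder for whatever client you use, and the metrics are deliberately minimal:

```python
import json

# Sketch of the baseline-test-refine loop. call_model is a placeholder for
# whatever client you use; the metrics are deliberately minimal.
def evaluate(prompt_template, test_cases, call_model):
    valid = correct = 0
    failures = []
    for case in test_cases:
        output = call_model(prompt_template.format(input=case["input"]))
        try:
            parsed = json.loads(output)  # format validity check
        except json.JSONDecodeError:
            failures.append((case, output))
            continue
        valid += 1
        if parsed.get("label") == case["expected_label"]:  # task accuracy
            correct += 1
        else:
            failures.append((case, parsed))
    n = len(test_cases)
    return {"format_validity": valid / n,
            "accuracy": correct / n,
            "failures": failures}
```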
Best workflow for Context Engineering
prompt-engineering-patterns for Context Engineering works best when context is curated, not dumped. A strong workflow is:
- define the task and output contract
- add only the context needed to complete that task
- include examples that teach the behavior you want
- separate stable instructions from dynamic user input
- evaluate with realistic cases
- trim context that does not change outcomes
This matters because long prompts often fail not from too little context, but from poorly organized context.
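A sketch of that curation step, assuming the prompt is assembled from named context sections; the section mechanism is our illustration, not something the repo prescribes:

```python
# Sketch: assemble only the context a task needs. Section names and the
# `needed` mechanism are our illustration, not something the repo prescribes.
def build_prompt(task_rules: str, sections: dict[str, str],
                 needed: list[str], user_input: str) -> str:
    # Stable instructions first, curated context next, dynamic input last.
    context = "\n\n".join(sections[name] for name in needed if name in sections)
    return f"{task_rules}\n\n{context}\n\nInput:\n{user_input}"
```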
prompt-engineering-patterns skill FAQ
Is prompt-engineering-patterns good for beginners?
Yes, if you already have a concrete task. The examples and references are approachable, and the pattern breakdown helps beginners stop guessing. It is less suitable for absolute beginners who have never defined schemas, labels, or evaluation criteria.
How is this different from just writing a better prompt?
Ordinary prompt advice usually stops at wording improvements. The prompt-engineering-patterns material goes further by covering pattern selection, reusable templates, example design, and iterative optimization. That makes it better for repeatable systems, not just one-off chats.
When should I not use prompt-engineering-patterns?
Skip it when:
- you need open-ended ideation more than control
- your task changes every time with no reusable structure
- you do not yet know the desired output format
- you are unwilling to test prompts against examples
In those cases, a simpler exploratory prompting workflow may be faster.
Does it support structured outputs well?
Yes. The repository repeatedly points toward JSON-like extraction and constrained formatting. It is especially relevant if your downstream code needs parseable responses and your current prompts often return extra prose.
Is prompt-engineering-patterns tied to one model vendor?
No clear evidence suggests vendor lock-in. The patterns are portable across most modern LLMs, though exact behavior will vary by model. You should still validate token cost, formatting reliability, and reasoning quality on your chosen provider.
How to Improve prompt-engineering-patterns skill
Give the skill a tighter problem statement
The fastest way to improve prompt-engineering-patterns results is to stop asking for “better prompts” in the abstract. Supply:
- success criteria
- unacceptable outputs
- a target schema
- representative failures
This allows the skill to recommend the right pattern and produce prompts that survive real usage.
Add evaluation cases before rewriting prompts
Users often rewrite prompts too early. Instead, collect 10 to 20 examples that include:
- easy cases
- confusing near-misses
- malformed inputs
- cases that currently fail
Then use these examples to compare prompt versions. The repository’s optimization material strongly supports this test-driven approach.
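For example, a small test set might look like the following; the tickets and labels reuse the classifier example from earlier and are purely illustrative:

```python
# Illustrative test set spanning the four categories above; tickets and
# labels reuse the classifier example from earlier and are hypothetical.
TEST_CASES = [
    {"input": "I was billed twice this month.",
     "expected_label": "billing", "kind": "easy"},
    {"input": "I can't log in since the billing system update.",
     "expected_label": "account_access", "kind": "near-miss"},
    {"input": "asdf;;; REFUND???",
     "expected_label": "other", "kind": "malformed"},
    {"input": "The app crashes whenever I open an invoice.",
     "expected_label": "technical_issue", "kind": "currently-failing"},
]
```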
Separate stable instructions from variable context
A common failure mode is mixing role, task rules, examples, and user data into one blob. Improve quality by separating:
- system behavior
- reusable task instructions
- few-shot demonstrations
- current input
That structure makes prompts easier to debug and reduces accidental instruction drift.
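A minimal sketch of that four-layer separation using the common chat-message convention; all content strings are placeholders:

```python
# Sketch of the four-layer separation using the common chat-message
# convention; all content strings are placeholders.
SYSTEM_BEHAVIOR = "You are a careful support-ticket classifier."
TASK_INSTRUCTIONS = "Classify each ticket. Output JSON with label and confidence."
FEW_SHOT_MESSAGES = [
    {"role": "user", "content": "I was charged twice."},
    {"role": "assistant", "content": '{"label": "billing", "confidence": 0.9}'},
]
current_input = "My password reset link never arrives."

messages = [
    {"role": "system", "content": SYSTEM_BEHAVIOR},    # stable behavior
    {"role": "system", "content": TASK_INSTRUCTIONS},  # reusable task rules
    *FEW_SHOT_MESSAGES,                                # demonstrations
    {"role": "user", "content": current_input},        # dynamic input
]
```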
Strengthen examples instead of adding more examples
More few-shot data is not always better. Replace weak examples that are redundant or unrealistic with examples that cover:
- edge cases
- ambiguous inputs
- exact output formatting
- common model mistakes
Higher-quality demonstrations usually improve results more than larger demonstration sets.
Tighten output contracts
If outputs are inconsistent, the issue is often underspecified formatting. Improve the prompt by defining:
- required keys
- allowed labels
- ordering rules
- what to do when information is missing
For extraction tasks, “return empty arrays for missing categories” is much better than “extract entities in JSON.”
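One way to make such a contract enforceable is to encode it as a small validator rather than leaving it as prose. A sketch, reusing the classifier labels from earlier; the helper is hypothetical:

```python
# Sketch: encode the output contract as a checkable spec instead of prose.
# Keys and labels mirror the classifier example earlier in this guide.
REQUIRED_KEYS = {"label", "confidence"}
ALLOWED_LABELS = {"billing", "technical_issue", "account_access", "other"}

def contract_violations(parsed: dict) -> list[str]:
    problems = []
    if set(parsed) != REQUIRED_KEYS:
        problems.append(f"keys must be exactly {sorted(REQUIRED_KEYS)}")
    if parsed.get("label") not in ALLOWED_LABELS:
        problems.append("label outside the allowed set")
    if not isinstance(parsed.get("confidence"), (int, float)):
        problems.append("confidence must be numeric")
    return problems
```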
Fix one failure mode per iteration
Do not change role, schema, examples, reasoning style, and temperature assumptions all at once. Change one main variable, retest, and log the effect. This mirrors the repo’s iterative refinement logic and makes improvements easier to trust.
Watch for overengineering
The prompt-engineering-patterns skill is powerful, but users sometimes overapply it. Warning signs:
- very long prompts with repeated instructions
- too many examples for simple tasks
- chain-of-thought on tasks that only need extraction
- excessive templating before the task is stable
If a simpler prompt achieves the same reliability, use the simpler prompt.
Use the repository as a pattern catalog, not a script to copy
The best way to improve with prompt-engineering-patterns is to adapt its assets and references to your own failure modes. Read the repo to choose a pattern, borrow a template, then test it against your data. That is much more effective than copying the examples verbatim and hoping they generalize.
