context-engineering

by addyosmani

The context-engineering skill helps you structure project context so agents follow conventions, reduce hallucinations, and stay focused. Use it when starting a session, switching tasks, or building a context-engineering guide for a codebase.

Stars: 18.7k
Favorites: 0
Comments: 0
Added: Apr 21, 2026
Category: Context Engineering
Install Command
npx skills add addyosmani/agent-skills --skill context-engineering
Curation Score

This skill scores 78/100, which means it is a solid listing candidate for directory users: it provides a real, actionable workflow for setting up and improving agent context, with enough specificity to justify installation, though it is not fully operationalized with scripts or companion files.

78/100
Strengths
  • Strong triggerability: the frontmatter explicitly says when to use it, including new sessions, degraded output quality, switching tasks, and project setup.
  • Operational guidance is concrete: it defines a context hierarchy and describes how to structure information from rules files through conversation history.
  • Substantial workflow content: the body is long, well-structured, and includes headings, code fences, repo/file references, and constraint signals rather than placeholder text.
Cautions
  • No support files: beyond the install command, there are no scripts, references, resources, or rule assets to automate adoption.
  • Some implementation details appear incomplete in the excerpt, so users may still need to adapt the guidance to their own toolchain and project conventions.
Overview

Overview of context-engineering skill

What context-engineering is

The context-engineering skill helps you give an AI agent the right project context at the right time so output is more accurate, more consistent, and less reliant on guesswork. It is most useful when you are setting up AI-assisted work in a codebase, restarting a session, or recovering from a drop in quality caused by weak or noisy context.

Who this skill fits

Use the context-engineering skill if you want a practical context setup process, not just a generic prompt. It fits engineers, repo maintainers, and power users who need an agent to respect conventions, follow local patterns, and stop hallucinating around architecture, APIs, or file structure.

Why it matters

Most agent failures come from missing or poorly ordered context. This skill focuses on context hierarchy, so the agent sees durable project rules first and task-specific evidence later. That makes the context-engineering guide especially valuable when you want a repeatable system instead of ad hoc prompt tuning.

What makes it different

This is not a broad prompt-writing guide. The context-engineering skill is centered on context selection, ordering, and reuse: what should live in rules files, what belongs in feature docs, what should come from source files, and what should be refreshed from test output or errors.

How to Use context-engineering skill

Install context-engineering first

Use the repo’s skill installer so the context-engineering install step is tied to the official package source, not a copied prompt snippet. The baseline command shown in the repository is:
npx skills add addyosmani/agent-skills --skill context-engineering

Start with the right files

Read SKILL.md first, then trace any linked references in the repo tree if present. For this skill, the practical reading path is usually:
SKILL.md → any repo-level guidance it points to → the section on context hierarchy → the section on rules files and task scoping.

Turn a rough goal into usable input

The context-engineering usage pattern works best when you tell the agent three things: the task, the code area, and the constraint. For example, instead of “help me set up context,” use “configure context for a React app, prefer existing conventions, and keep rules small enough for repeated sessions.” That gives the skill enough signal to choose durable context over noisy history.
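One way to make the three signals explicit is to write them down as labeled fields rather than a single vague sentence. The sketch below is illustrative; the variable names and the brief.txt file are our own conventions, not part of the skill.

```shell
# Encode the three signals (task, code area, constraint) as explicit fields.
# These names and the output file are illustrative assumptions.
task="configure context for a React app"
area="src/ (existing component and hook conventions)"
constraint="prefer existing conventions; keep rules small enough to reload each session"

printf 'Task: %s\nArea: %s\nConstraint: %s\n' "$task" "$area" "$constraint" > brief.txt
cat brief.txt
```

Pasting a brief like this at the start of a session gives the skill concrete anchors to select durable context against, instead of inferring scope from conversation history.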

Use the hierarchy deliberately

The core idea of context-engineering is to layer context from stable to temporary: project rules, feature docs, relevant source, then errors or test results. In practice, this means you should avoid dumping everything into one prompt. Give the agent the smallest set of files that prove the current convention, then add iteration evidence only after the first response.
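The stable-to-temporary ordering can be sketched as a simple concatenation: durable rules first, transient evidence last. All file names and contents below are placeholders; substitute your real rules file, feature doc, and source excerpts.

```shell
# Illustrative layering, most durable first. File names are assumptions.
printf '%s\n' "Rule: follow existing folder conventions."   > rules.md
printf '%s\n' "Feature: checkout flow lives in src/checkout." > feature.md
printf '%s\n' "// relevant source excerpt"                  > source.txt
printf '%s\n' "FAIL: CartTotal renders NaN for empty cart"  > errors.txt

# Assemble context stable-to-temporary; the agent sees rules before errors.
cat rules.md feature.md source.txt errors.txt > context.txt
head -n 1 context.txt
```

The point is the ordering, not the mechanism: whether you paste files manually or a tool assembles them, the durable rule should be the first thing in the window and the freshest error the last.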

context-engineering skill FAQ

Is context-engineering just a prompt template?

No. The context-engineering skill is more useful as a workflow for deciding what context belongs where. A plain prompt can ask for the same outcome, but it will not give you the same repeatable structure for rules, source selection, and session resets.

When should I not use it?

Do not use context-engineering if your task is tiny, self-contained, or does not depend on repository conventions. If the agent only needs one file or one direct answer, the overhead of building a full context hierarchy may be unnecessary.

Is it beginner friendly?

Yes, if you already know the problem is context quality rather than model capability. The skill is easiest to adopt when you can identify what the agent missed: rules, architecture, relevant files, or recent error output.

Does it fit every repository?

No. It works best in active codebases where conventions matter and agent mistakes are costly. If a repo has little structure or no recurring patterns, the context-engineering guide will still help, but the gains will be smaller.

How to Improve context-engineering skill

Give the skill stronger source material

The biggest improvement comes from better input selection. Provide a short set of files that show the real pattern you want followed, plus any rule file or architecture note that should override guesswork. That is more effective than broad repository dumps.

Be explicit about failure mode

If the agent is drifting, say how: wrong API style, ignoring folder conventions, over-editing, or missing test expectations. The context-engineering skill responds better when you name the broken behavior than when you only ask for “better context.”

Iterate with evidence, not repetition

After the first output, feed back the exact error, lint failure, test result, or mismatch that matters. This improves context-engineering usage because the next pass can promote the right transient context instead of rephrasing the same request.
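A lightweight way to do this is to capture only the failing lines of a run as the transient context for the next pass, rather than re-sending the whole log or rephrasing the request. The run.log contents below are fabricated stand-ins for your real test or lint output.

```shell
# Illustrative: keep only the failing tail of a run as iteration evidence.
# run.log stands in for real test output; the TAP-style lines are assumptions.
{ echo "ok 1 - renders"; echo "not ok 2 - totals: expected 3, got 4"; } > run.log
grep '^not ok' run.log > evidence.txt
cat evidence.txt
```

Feeding evidence.txt back, rather than the full log, keeps the transient layer small and points the agent at the exact mismatch that matters.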

Keep rules durable and scoped

The best results come from small rules that are hard to misread and easy to keep loaded. If a rule is too broad, it weakens the whole setup; if it is too narrow, it will not help the next session. Use context-engineering to separate long-lived project norms from one-off task details.
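For a sense of scale, a durable rules file can be only a few lines. The path .agent/rules.md and the rules themselves below are hypothetical; the skill does not mandate a location or format.

```shell
# A hypothetical durable rules file: short, scoped, hard to misread.
# The path and rule contents are illustrative assumptions.
mkdir -p .agent
cat > .agent/rules.md <<'EOF'
- TypeScript strict mode everywhere.
- New components go in src/components, one per file.
- Reuse helpers in src/lib before adding dependencies.
EOF
wc -l < .agent/rules.md
```

Each rule here survives across sessions and tasks; anything tied to a single ticket or bug belongs in the transient layers instead.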

Ratings & Reviews

No ratings yet