Context Engineering

Browse agent skills tagged with Context Engineering and compare related workflows across the directory.

27 skills
strategic-compact

by affaan-m

strategic-compact helps you decide when to manually run /compact so Claude sessions stay coherent at task boundaries. It supports long, multi-phase work, especially research, planning, implementation, and testing, and is useful for workflow automation when you want compaction to happen at logical milestones rather than at arbitrary auto-compaction points.

Workflow Automation
Favorites: 0 · GitHub: 156.3k
iterative-retrieval

by affaan-m

iterative-retrieval is a workflow pattern for progressively refining context retrieval in agentic work. It helps subagents avoid pulling too much or too little context, making it useful for multi-step research tasks and workflow automation where a single retrieval pass falls short.

Workflow Automation
Favorites: 0 · GitHub: 156.2k
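The refinement loop iterative-retrieval describes can be sketched in a few lines. This is a toy illustration, not the skill's implementation: the retriever, queries, and stopping rule are hypothetical stand-ins for whatever tools the agent actually has.

```python
# Hypothetical sketch of an iterative-retrieval loop: start narrow, then
# refine the query round by round until the gathered context covers the
# task, instead of dumping the whole corpus into the window at once.

def search(corpus, query):
    """Toy retriever: return documents containing the query term."""
    return [doc for doc in corpus if query in doc]

def iterative_retrieve(corpus, queries, needed_terms, max_rounds=3):
    """Accumulate context until every term the task needs is covered,
    stopping early rather than running all retrieval rounds."""
    context = []
    for round_no, query in enumerate(queries[:max_rounds]):
        context.extend(d for d in search(corpus, query) if d not in context)
        missing = [t for t in needed_terms if not any(t in d for d in context)]
        if not missing:  # enough context gathered -- stop early
            return context, round_no + 1
    return context, max_rounds

corpus = [
    "auth: login handler validates JWT",
    "db: user table schema",
    "auth: token refresh endpoint",
]
context, rounds = iterative_retrieve(
    corpus, queries=["login", "token"], needed_terms=["JWT", "refresh"]
)
```

Here the first round retrieves only the login handler; since "refresh" is still uncovered, a second, refined query runs, after which the loop stops with two of the three documents in context.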
gateguard

by affaan-m

gateguard is a fact-forcing pre-action gate for Claude workflows. It blocks the first Edit, Write, or Bash attempt, then requires concrete investigation of importers, schemas, user instructions, and related files before allowing changes. Use it to reduce guessing and improve first-pass edits.

Workflow Automation
Favorites: 0 · GitHub: 156.2k
documentation-lookup

by affaan-m

documentation-lookup helps agents answer library, framework, and API questions from current docs instead of memory. It is ideal for setup, configuration, reference, and code-example tasks when the latest syntax matters. Use it when a request depends on live documentation and version-accurate guidance.

Skill Docs
Favorites: 0 · GitHub: 156.1k
codebase-onboarding

by affaan-m

codebase-onboarding analyzes an unfamiliar repo and generates a structured onboarding guide with an architecture map, key entry points, conventions, and a starter CLAUDE.md. Use it when joining a new project or setting up Claude Code for the first time in a repository.

Onboarding Wikis
Favorites: 0 · GitHub: 156.1k
blueprint

by affaan-m

blueprint turns a one-line objective into a step-by-step construction plan for complex engineering work. It is built for multi-session, multi-PR tasks, refactors, migrations, and project setup when a fresh agent needs context, dependency order, parallel-step detection, and review gates.

Project Setup
Favorites: 0 · GitHub: 156.1k
using-superpowers

by obra

using-superpowers is a session-start skill from obra/superpowers that forces skill lookup before any reply, helping agents discover and activate the right workflow first.

Skill Discovery
Favorites: 0 · GitHub: 121.9k
subagent-driven-development

by obra

subagent-driven-development is a skill for executing implementation plans with a fresh subagent per task, then reviewing each result in two passes: spec compliance first, code quality second. It includes prompt templates for the implementer, spec reviewer, and code quality reviewer.

Agent Orchestration
Favorites: 0 · GitHub: 121.8k
smart-explore

by thedotmack

smart-explore is a structural code exploration skill that uses smart_search, smart_outline, and smart_unfold to map a codebase before reading full files. It helps with code navigation, targeted debugging, and code review when MCP tool support is available.

Code Review
Favorites: 0 · GitHub: 43.9k
multi-agent-patterns

by muratcankoylan

The multi-agent-patterns skill helps you design and implement agent systems with orchestration, context isolation, parallel work, and structured handoffs. Use it when choosing between a single agent and a multi-agent setup, or when you need supervisor routing, peer handoffs, consensus, or fault handling. It is best for orchestration-heavy tasks where clear coordination matters more than adding agents.

Agent Orchestration
Favorites: 0 · GitHub: 15.6k
audit-context-building

by trailofbits

audit-context-building builds deep, line-by-line code context before vulnerability hunting. It helps security auditors, architecture reviewers, and agents reduce false assumptions, track invariants, and prepare reliable review context before findings, fixes, or threat modeling.

Security Audit
Favorites: 0 · GitHub: 4.9k
skill-optimizer

by mcollina

skill-optimizer helps authors improve AI skills for activation, clarity, and cross-model reliability. Use it when a skill is written but not reliably followed, when triggers are weak, when regressions appear, or when context cost needs trimming. It supports benchmark loops, release gates, and tighter usage fidelity.

Skill Authoring
Favorites: 0 · GitHub: 1.8k
skill-judge

by softaworks

skill-judge is a review and scoring skill for auditing AI skill packages and SKILL.md files. It helps authors and maintainers judge knowledge delta, activation clarity, workflow quality, and publish readiness with actionable improvement guidance.

Skill Validation
Favorites: 0 · GitHub: 1.3k
command-creator

by softaworks

command-creator helps turn repeated Claude Code workflows into reusable slash commands. Learn the right command pattern, write agent-executable instructions, choose between .claude/commands/ and ~/.claude/commands/, and use the bundled references for examples and best practices.

Skill Authoring
Favorites: 0 · GitHub: 1.3k
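For reference, a project-scoped slash command is a Markdown file under .claude/commands/ whose filename becomes the command name, and $ARGUMENTS receives anything typed after it. The file name and steps in this example are illustrative, not from the skill's bundled references:

```markdown
<!-- .claude/commands/fix-issue.md — invoked as /fix-issue <number> -->
Fix GitHub issue $ARGUMENTS:

1. Read the issue description and reproduce the bug locally.
2. Write a failing test that captures the regression.
3. Implement the smallest fix that makes the test pass.
4. Run the full test suite before proposing a commit.
```

Placing the same file in ~/.claude/commands/ instead makes the command available across all projects rather than just the current repository.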
tree-of-thoughts

by NeoLabHQ

tree-of-thoughts is a reasoning workflow skill that helps agents explore multiple approaches, prune weak branches, and synthesize a better answer. It is useful for hard debugging, planning, architecture tradeoffs, and agent orchestration.

Agent Orchestration
Favorites: 0 · GitHub: 982
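The explore-prune-synthesize loop can be sketched as a beam search over candidate thoughts. This is a minimal illustration of the pattern, not the skill's actual procedure; the generator and scorer below are toy stand-ins for model calls.

```python
# Toy tree-of-thoughts loop: expand candidate "thoughts", score them,
# keep only the top-k branches, and repeat to a fixed depth.

def expand(thought):
    """Toy generator: each thought branches into two refinements."""
    return [thought + "a", thought + "b"]

def score(thought):
    """Toy evaluator: prefer thoughts with more 'a' refinements."""
    return thought.count("a")

def tree_of_thoughts(root, depth=3, beam=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [t for parent in frontier for t in expand(parent)]
        # prune weak branches: keep only the top-`beam` candidates
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)  # synthesize: best surviving branch

best = tree_of_thoughts("", depth=3, beam=2)
```

With beam=2 the search touches at most 4 candidates per level instead of all 2^depth branches, which is the whole point of pruning.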
launch-sub-agent

by NeoLabHQ

launch-sub-agent helps you dispatch a focused sub-agent for bounded tasks in multi-agent systems. It analyzes task complexity, selects an appropriate model tier, supports specialized agent matching, and adds self-critique verification for more reliable results.

Multi-Agent Systems
Favorites: 0 · GitHub: 982
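The complexity-to-tier mapping might look like the following sketch. The signals, thresholds, and tier names are illustrative guesses, not the skill's actual values:

```python
# Hypothetical sketch of launch-sub-agent's tier selection: estimate task
# complexity from simple signals, then pick a model tier to dispatch.

def estimate_complexity(task):
    """Crude heuristic: longer, broader, reasoning-heavy tasks score higher."""
    signals = 0
    signals += len(task["description"].split()) // 20   # verbosity
    signals += len(task.get("files", []))               # breadth
    signals += 2 if task.get("needs_reasoning") else 0  # depth
    return signals

def pick_tier(task):
    score = estimate_complexity(task)
    if score <= 1:
        return "small"
    if score <= 4:
        return "medium"
    return "large"

tier = pick_tier({
    "description": "rename a variable in one file",
    "files": ["utils.py"],
})
```

A bounded single-file rename scores low and gets the cheap tier, while a multi-file task flagged as reasoning-heavy would escalate to a larger model.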
judge

by NeoLabHQ

judge is a two-phase evaluation skill that launches a meta-judge first, then a judge sub-agent to score work with isolated context, evidence, and clear criteria. Use it for report-only reviews of code, writing, analysis, or skill authoring when you need a defensible verdict instead of a casual opinion.

Skill Authoring
Favorites: 0 · GitHub: 982
do-competitively

by NeoLabHQ

do-competitively helps you solve important tasks with parallel candidate generation, rubric-based judging, and evidence-based synthesis. It is best for workflow automation and other high-stakes requests where quality, robustness, and tradeoff handling matter more than speed.

Workflow Automation
Favorites: 0 · GitHub: 982
parse-knowledge

by MarsWang42

parse-knowledge turns messy text into structured Markdown notes for an OrbitOS-style knowledge base, splitting source material into a main research note plus linked atomic wiki notes with YAML frontmatter and vault-ready paths.

Knowledge Bases
Favorites: 0 · GitHub: 690
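A vault-ready atomic note of the kind parse-knowledge emits might look like this; the field names and wiki-link style are assumptions based on common Obsidian-style vaults, not the skill's exact schema:

```markdown
---
title: Context Window Limits
tags: [llm, context-engineering]
source: "[[Research – Long-Context Models]]"
---

One idea per file: a model's effective context is often smaller than its
advertised window. The [[Research – Long-Context Models]] link points back
to the main research note this atomic note was split from.
```

The main research note then links out to each atomic note, so the split material stays navigable from both directions.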
evaluation

by muratcankoylan

The evaluation skill helps you design and run agent evaluations for non-deterministic systems. Use it for evaluation planning, rubrics, regression checks, quality gates, and skill testing. It fits LLM-as-judge workflows and multi-dimensional scoring when you need repeatable results.

Skill Testing
Favorites: 0 · GitHub: 0
init

by mcollina

init helps create or improve AGENTS.md files by keeping only non-discoverable repo rules, workflow gotchas, and tool quirks. Use the init skill when setting up agent instructions, pruning stale guidance, or refining Claude configuration for a repository.

Skill Authoring
Favorites: 0 · GitHub: 0
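An AGENTS.md in this spirit keeps only rules an agent could not discover by reading the code. A hypothetical excerpt (the specific rules below are invented for illustration):

```markdown
# AGENTS.md

- Run `npm test -- --runInBand`; parallel test runs flake on the shared DB.
- Migrations live in `db/migrations/`; never edit an already-applied migration.
- CI rejects commits without conventional-commit prefixes (`feat:`, `fix:`, ...).
```

Anything an agent can infer from the repository itself (directory layout, exported APIs, lint config) is deliberately left out so the file stays short and current.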
brainstorm

by NeoLabHQ

Use brainstorm to turn rough ideas into workable designs before coding or writing implementation plans. It fits brainstorming, product discovery, architecture exploration, and strategic planning: it asks one question at a time, explores options and tradeoffs, and validates each step. Avoid it for clear mechanical work.

Strategic Planning
Favorites: 0 · GitHub: 0
reflect

by NeoLabHQ

reflect is a skill-validation tool for reviewing a prior response or output. It uses complexity triage and verification to catch missed flaws, weak reasoning, and overconfident approval before work ships.

Skill Validation
Favorites: 0 · GitHub: 0
memorize

by NeoLabHQ

memorize is a skill-authoring and agent-workflow skill that curates reflections, critiques, and execution feedback into durable, actionable guidance in CLAUDE.md using Agentic Context Engineering. Use it when lessons should survive beyond one chat and improve future runs.

Skill Authoring
Favorites: 0 · GitHub: 0