
iterative-retrieval

by affaan-m

iterative-retrieval is a workflow pattern for progressively refining context retrieval in agentic work. It helps subagents avoid retrieving too much or too little context, which makes it useful for multi-agent orchestration, codebase exploration, and workflow automation.

Stars: 156.2k
Favorites: 0
Comments: 0
Added: Apr 15, 2026
Category: Workflow Automation
Install Command
npx skills add affaan-m/everything-claude-code --skill iterative-retrieval
Curation Score

This skill scores 84/100, which means it is a solid listing candidate for Agent Skills Finder. Directory users get a clearly triggered, workflow-oriented pattern for iterative context retrieval in multi-agent and codebase exploration tasks, with enough detail to decide it is worth installing, though it would benefit from stronger adoption aids and implementation hooks.

Strengths
  • Clear activation scenarios for subagents, multi-agent workflows, and context-too-large/missing-context failures
  • Concrete 4-phase iterative retrieval loop gives agents a usable execution pattern instead of a vague prompt
  • Substantial skill body with valid frontmatter and no placeholder/demo markers suggests real workflow content
Cautions
  • No install command, scripts, or support files, so users must infer integration steps from the SKILL.md alone
  • The repository excerpt shows pattern guidance but limited operational artifacts like examples, tests, or references to verify edge cases

Overview of iterative-retrieval skill

The iterative-retrieval skill is a workflow pattern for solving the “context problem” in agentic work: a subagent starts with too little information to know what it needs, then progressively narrows in on the right files, terms, and patterns. It is best for workflow designers, codebase explorers, and anyone automating workflows where the first retrieval pass is usually incomplete.

What users usually care about is not the theory, but whether the skill helps an agent avoid two common failures: sending too much context and blowing the budget, or sending too little and stalling. The main value of iterative-retrieval is that it turns discovery into a loop instead of a one-shot guess.

What iterative-retrieval solves

Use this skill when the task depends on codebase-specific context that cannot be known in advance: locating implementation patterns, identifying relevant files, or refining search terms after the first probe. It is especially useful when an agent must reason over a large repo without direct human guidance.

Why this skill is different

Unlike a generic prompt that says “look around and then decide,” iterative-retrieval gives a concrete retrieval loop: dispatch, evaluate, refine, repeat. That makes it easier to orchestrate subagents, especially when your process needs predictable context growth rather than broad, noisy dumps.

Best-fit use cases

This skill fits architecture discovery, RAG-style code exploration, and multi-agent workflows where the first retrieval pass is intentionally incomplete. It is less useful when the answer is already local, the repo is tiny, or you can provide the exact file list up front.

How to Use iterative-retrieval skill

Install and activate it

Use the install path from your skill manager, then point your agent workflow at skills/iterative-retrieval/SKILL.md. The typical install command for this repository is:

npx skills add affaan-m/everything-claude-code --skill iterative-retrieval

For best results, invoke it when the job depends on context discovery, not after you have already hand-curated all relevant files.

Turn a vague goal into a usable prompt

The skill works best when your prompt gives the agent a target, a boundary, and a stopping rule. Strong input looks like this:

  • Goal: “Find the auth flow and explain where token refresh is handled.”
  • Boundary: “Search only production code, not tests.”
  • Constraint: “Keep each retrieval pass under a few files.”
  • Success condition: “Return the smallest file set that supports a confident answer.”

This matters because iterative retrieval is about refining context, not asking the model to infer the whole repo from one vague request.
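The four elements above can be sketched as a small structure your orchestration fills in before dispatching the agent. This is an illustrative sketch only; the `RetrievalBrief` name and its fields are not part of the skill itself.

```python
# Illustrative: packaging a goal, boundary, constraint, and success
# condition into a single prompt the agent can act on.
from dataclasses import dataclass


@dataclass
class RetrievalBrief:
    goal: str        # what the agent must find or explain
    boundary: str    # where it is allowed to look
    constraint: str  # how much it may retrieve per pass
    success: str     # when it may stop looping

    def to_prompt(self) -> str:
        return (
            f"Goal: {self.goal}\n"
            f"Boundary: {self.boundary}\n"
            f"Constraint: {self.constraint}\n"
            f"Success condition: {self.success}"
        )


brief = RetrievalBrief(
    goal="Find the auth flow and explain where token refresh is handled.",
    boundary="Search only production code, not tests.",
    constraint="Keep each retrieval pass under a few files.",
    success="Return the smallest file set that supports a confident answer.",
)
print(brief.to_prompt())
```

Keeping the four fields separate makes it easy to tighten only the boundary or only the constraint between passes without rewriting the whole prompt.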

Read these files first

Start with SKILL.md, then inspect any supporting docs the repo provides. In this repo, the practical entry point is still SKILL.md; if your installation copies only the skill body, that is the source of truth. After that, review nearby workflow docs if they exist in your environment so you can align the loop with your own orchestration rules.

Work the retrieval loop

A good workflow is: dispatch a narrow search, evaluate whether the returned context is enough, refine the next search based on what was missing, then loop until the agent has enough evidence to act. The key is to carry forward the new terms discovered in each pass rather than repeating the same query with different wording.
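The loop above can be sketched in a few lines. This is a hypothetical harness, not the skill's own implementation: the `search`, `evaluate`, and `extract_new_terms` callables are assumptions that would wrap your agent's actual retrieval tools.

```python
# Illustrative dispatch -> evaluate -> refine loop. New terms discovered in
# each pass are carried forward instead of rewording the same query.
def iterative_retrieve(initial_terms, search, evaluate, extract_new_terms,
                       max_passes=4):
    terms = set(initial_terms)
    evidence = []
    for _ in range(max_passes):
        results = search(terms)        # dispatch a narrow search
        evidence.extend(results)
        if evaluate(evidence):         # enough context to act?
            break
        # refine: grow the term set with what this pass surfaced
        terms |= extract_new_terms(results)
    return evidence
```

The `max_passes` cap is the stopping rule from the prompt section: it guarantees the loop terminates even when the evaluator never reports "enough."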

iterative-retrieval skill FAQ

Is iterative-retrieval only for large codebases?

No. Size matters, but the real trigger is uncertainty. If the agent cannot predict which files matter before reading them, iterative-retrieval can help even in a moderate repo.

When should I not use it?

Do not use iterative-retrieval when the task is already well-scoped, the relevant files are known, or a direct prompt with fixed inputs will do. In those cases, the loop adds overhead without improving the answer.

Is this better than a normal prompt?

For discovery tasks, yes. A normal prompt often assumes the model can guess the right context up front. The iterative-retrieval loop is better when the prompt must adapt after reading partial results and the final answer depends on that adaptation.

Is it beginner-friendly?

Yes, if you follow the loop literally. The main learning curve is not syntax; it is choosing a first retrieval that is small enough to be useful and broad enough to surface the right terminology.

How to Improve iterative-retrieval skill

Give the first pass a sharper target

The biggest quality gain comes from better initial framing. Instead of “find relevant code,” ask for a specific behavior, subsystem, or decision point. Include what you already know, what you suspect, and what would count as a useful lead. That makes every subsequent retrieval pass more efficient.

Watch for common failure modes

The usual failure is over-retrieval: the agent pulls too many files and stops learning from the results. The other failure is under-retrieval: too little context to identify the next search term. If the first pass returns generic files, refine by asking for terminology, call sites, or entry points rather than widening the search.

Iterate with evidence, not guesses

After the first output, feed back only the most informative artifacts: file names, function names, error messages, or unfamiliar terms. Avoid asking the agent to “look again” without adding new evidence. For workflow automation, the strongest improvement is to encode this evidence loop into your orchestration so each pass changes the search space.
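One way to enforce "evidence, not guesses" is to filter a pass's output down to identifiers the agent has not seen before, and feed only those forward. A minimal sketch, assuming the pass output is plain text; the function name and tokenization are illustrative:

```python
# Illustrative: extract previously unseen identifiers (file names, symbols,
# unfamiliar terms) from a pass's output to seed the next search.
import re


def informative_artifacts(pass_output, known_terms):
    # crude tokenizer: anything that looks like an identifier
    tokens = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", pass_output))
    # keep only terms the agent has not already searched for
    return tokens - set(known_terms)


out = "TokenRefresher.rotate() failed in auth/session.py"
new_terms = informative_artifacts(out, known_terms={"failed", "in"})
```

Here `new_terms` keeps symbols like `TokenRefresher` while dropping words already known, so the next pass genuinely changes the search space rather than repeating it.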

Fit it to your repository rules

If your environment has naming conventions, folder boundaries, or agent handoff rules, bake them into the prompt before the first retrieval. The skill is strongest when it respects your repository’s actual structure instead of treating every codebase like a generic search problem.

Ratings & Reviews

No ratings yet