deep-research
by affaan-m

The deep-research skill turns broad questions into sourced web research using the firecrawl and exa MCP tools. Use it to compare sources, synthesize findings, and produce cited reports for competitive analysis, technology evaluation, due diligence, and other decisions that need evidence.
This skill scores 84/100, making it a solid listing candidate for directory users: it has a clear research use case, a concrete workflow, and enough operational detail to help an agent execute with less guesswork than a generic prompt. Expect some setup dependence on external MCP tools and a few missing adoption details, but the skill is strong enough to install if you need repeatable deep-research behavior.
- Explicit trigger guidance covers in-depth research, due diligence, competitive analysis, and 'research/deep dive/investigate' prompts.
- Operational workflow is spelled out with clarifying questions, sub-question planning, and synthesis/reporting steps.
- Tool requirements are concrete: it names firecrawl and exa MCP actions and explains that either one is sufficient, which helps agents know how to trigger it.
- Requires external MCP configuration in ~/.claude.json or ~/.codex/config.toml, so it is not plug-and-play.
- No install command, scripts, references, or support files are provided, so adoption depends on reading the SKILL.md closely.
Overview of deep-research skill
What deep-research does
The deep-research skill turns a broad question into a sourced web research workflow. It uses the firecrawl and exa MCP tools to search, compare, and synthesize multiple sources into a cited report. It is best for questions where a single-prompt answer is too shallow or too risky to trust.
When it is the right fit
Use deep-research for competitive analysis, technology evaluation, market sizing, due diligence, current-state summaries, and any decision that depends on evidence from several sources. It is a good fit when you need the core web-research pattern: gather sources, cross-check them, then write a usable summary with attribution.
What matters before you install
The main adoption blocker is not complexity but tool access. Installing deep-research only pays off if your environment can call at least one supported MCP server: firecrawl or exa. If you want stronger coverage and fewer blind spots, configure both before relying on the skill.
How to Use deep-research skill
Install and wire up the tools
Install with `npx skills add affaan-m/everything-claude-code --skill deep-research`, then confirm your MCP setup in `~/.claude.json` or `~/.codex/config.toml`. The skill is only as useful as the model's ability to actually search and scrape the web, so verify tool names and credentials before starting a long research task.
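As a sketch of what the MCP wiring can look like, here is a minimal `~/.claude.json` fragment registering both servers. The package names (`firecrawl-mcp`, `exa-mcp-server`) and environment variable names are assumptions based on the vendors' published MCP servers; check their documentation for the exact values your versions expect.

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "your-firecrawl-key" }
    },
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": { "EXA_API_KEY": "your-exa-key" }
    }
  }
}
```

With either entry in place the skill can run; with both, it can fall back from one search tool to the other on broad or fast-changing topics.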
Start with the right input
For good deep-research usage, do not ask only “research this.” Give a topic, the intended outcome, and any constraints. Better prompts look like: “Research the current state of AI coding agents for a product decision, compare leading tools, and cite recent sources.” That gives the skill enough shape to choose sub-questions and source types.
Read the files that control behavior
Start with `skills/deep-research/SKILL.md`, then inspect any linked repo context if present. In this repository, the skill body is the main source of behavior guidance, so the key job is to understand the workflow, activation rules, and MCP requirements rather than hunting for support files that are not there.
Use a workflow that improves the output
Ask the model to clarify scope if the topic is broad, then let it split the work into 3–5 research sub-questions. If you already know the angle, say so up front: “focus on pricing, adoption, and risks” or “exclude vendor marketing pages.” This helps the skill produce a tighter report and reduces irrelevant source noise.
deep-research skill FAQ
Is deep-research better than a normal prompt?
Yes, when the task needs sourced synthesis from multiple pages. A normal prompt can summarize known facts, but the deep-research skill is designed to search the web, compare evidence, and return citations. If you do not need current information or source attribution, a plain prompt may be enough.
Do I need both firecrawl and exa?
No. The skill can run with either one, but both tools usually improve coverage because they complement each other: one may find and scrape pages the other misses, which matters for broad or fast-changing topics.
Is it beginner-friendly?
Yes, if you can describe your goal clearly. The skill asks for only lightweight clarification at the start, and it can proceed with “just research it” when needed. The main beginner mistake is giving a vague topic without any decision context, which makes the research too broad.
When should I not use it?
Do not use deep-research for tasks that need a quick factual answer, no web access, or no citations. It is also a poor fit when you already have the exact sources and only need rewriting. In those cases, the overhead of the deep-research install and workflow is unnecessary.
How to Improve deep-research skill
Give it a decision frame
The biggest quality jump comes from telling the skill why you need the research. “Learning,” “choosing a vendor,” and “writing a memo” lead to different source selection and synthesis. If you want better output, state the audience, time horizon, and what counts as a useful conclusion.
Add constraints that reduce noise
Useful constraints include date range, geography, competitor set, excluded sources, and preferred evidence types. For example: “Use sources from the last 18 months, emphasize primary documentation, and avoid vendor blogs unless they add unique data.” This improves the report’s signal-to-noise ratio.
Watch for the common failure modes
The most common failure modes are too many sub-questions, overreliance on marketing pages, and reports that list facts without answering the real question. If the first pass is broad, ask for a narrower synthesis: “focus only on risks” or “turn this into a buyer recommendation.” That iteration usually helps more than asking for “more detail.”
Iterate from the first draft
After the initial report, request a second pass that tightens one dimension: evidence quality, comparison depth, or decision summary. Good follow-up prompts include: “separate confirmed facts from inference,” “rank the strongest sources,” or “turn this into a 1-page executive brief.” That is the fastest way to make deep-research output more actionable.
