research
by MarsWang42

Structured deep-research workflow for complex topics. Learn how the research skill works, what it needs, and how to use its planning and execution flow effectively.
This skill scores 72/100: acceptable for listing, and it should help agents do structured deep research with less guesswork than a generic prompt. Directory users should, however, expect a document-driven workflow rather than a fully operational package with supporting files or install guidance.
- Defines a concrete two-phase workflow: planning agent first, then user review, then execution agent with fresh context.
- Provides explicit orchestrator instructions and expected inputs, making the trigger case easy to recognize for deep topic research.
- Includes practical output structure such as creating a plan file and passing only the plan path to the execution phase, which gives agents reusable coordination scaffolding.
- All value is in one SKILL.md file with no supporting scripts, references, or examples, so adoption depends on interpreting prose correctly.
- The workflow references environment-specific locations and agent/task behavior, but the excerpt shows no install command or repository-linked artifacts to verify those assumptions.
Overview of research skill
What the research skill does
The research skill is a structured deep-research workflow for understanding a technology, concept, or complex topic without collapsing planning and execution into one vague prompt. Instead of asking one agent to both decide how to research and do the research at the same time, this skill splits the job into a planning phase and an execution phase. That design is the main reason to install it.
Who should use this research skill
This research skill is best for users who need a repeatable way to investigate topics like software architecture, protocols, academic concepts, or unfamiliar systems. It is especially useful when you care about scope control, question framing, and review before full research begins. For academic research, technical due diligence, and concept mapping, that extra planning step is often more valuable than a generic “tell me about X” prompt.
What job it helps you get done
The real job-to-be-done is not “generate a summary.” It is: define the topic, identify the right context, create a research strategy, pause for user approval, then execute with fresh context and clearer boundaries. That reduces drift, shallow coverage, and wasted tokens on the wrong angle.
Key adoption considerations
This skill is lightweight in repo structure: the useful logic is almost entirely in SKILL.md. There are no helper scripts, reference files, or install metadata to rely on, so success depends on whether your agent runtime supports the intended multi-agent flow with a planning agent, an orchestrator, and an execution agent. If you want a one-shot answer, this research skill may feel slower than necessary.
How to Use research skill
Install context and where to read first
Before installing, read EN/.agents/skills/research/SKILL.md first. That file contains the actual workflow, inputs, and orchestration behavior. The repository evidence does not show a dedicated install command inside the skill itself, so use the skill-loading method supported by your agent platform, then verify that the runtime can:
- invoke /research
- spawn a planning agent
- pause for confirmation
- spawn an execution agent with the plan file path
If your environment cannot pass work cleanly between agents, the core value of the research skill drops.
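As a rough illustration, the two-phase handoff described above can be sketched in Python. The `spawn_agent` and `ask_user` functions below are stand-in stubs, not part of any real agent platform API; the point is the shape of the flow, especially the review gate and the fact that only the plan file path crosses into the execution phase:

```python
# Minimal sketch of the two-phase research flow: plan, review gate, execute.
# spawn_agent and ask_user are illustrative stubs; a real agent runtime
# would provide its own equivalents.

def spawn_agent(role: str, prompt: str) -> str:
    """Stub: pretend to run an agent and return its output."""
    if role == "planner":
        return "plans/research-plan.md"  # path to the written plan file
    return f"notes produced from: {prompt}"

def ask_user(question: str) -> bool:
    """Stub: auto-approve; a real flow would pause for user confirmation."""
    return True

def run_research(topic: str) -> str:
    # Phase 1: the planning agent turns the raw topic into a plan file.
    plan_path = spawn_agent("planner", f"Create a research plan for: {topic}")

    # Review gate: execution proceeds only after explicit approval.
    if not ask_user(f"Execute the plan at {plan_path}?"):
        raise RuntimeError("Plan rejected; narrow the scope and retry")

    # Phase 2: the execution agent starts with fresh context and receives
    # only the plan path, not the planning conversation.
    return spawn_agent("executor", f"Execute plan: {plan_path}")

print(run_research("OAuth2 for internal APIs"))
```

Passing only the plan path, rather than the whole planning transcript, is what keeps the execution agent's context fresh.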
What input the research skill needs
At minimum, provide a topic. Better results come from adding:
- the exact decision you need to make
- the depth level you want
- constraints such as time, audience, or prior knowledge
- project context or domain
Weak input:
/research OAuth2
Stronger input:
/research Research OAuth2 for a backend team migrating from session auth. Focus on grant types still relevant in 2025, common implementation mistakes, security tradeoffs, and what to recommend for internal APIs vs third-party integrations.
For academic research, include the research question, discipline, expected rigor, and output form:
/research Investigate retrieval-augmented generation evaluation methods for academic literature review. Compare offline metrics, human evaluation, and benchmark design. I need a structured brief with terminology, core debates, and a shortlist of methods worth deeper reading.
Practical research usage workflow
A good research usage pattern is:
- Invoke /research with a scoped topic and desired outcome.
- Let the planning agent identify context and create the plan file.
- Review the plan before execution. This is where you catch the wrong audience, missing questions, or overbroad scope.
- Confirm execution only after the plan matches your intent.
- Use the final notes as a first-pass map, then run a narrower follow-up research cycle on unclear sections.
This review gate is the biggest practical differentiator from ordinary prompting. If the plan is weak, execution will usually be weak too.
How to write prompts that invoke it well
Use a prompt shape that makes planning easy:
- Topic: what is being researched
- Goal: what decision or understanding is needed
- Scope: what to include and exclude
- Audience: beginner, practitioner, researcher, leadership
- Output: comparison, briefing, notes, recommendations
Example:
/research Topic: consistent hashing. Goal: explain it well enough to choose whether to use it in a distributed cache design. Scope: core mechanism, failure cases, virtual nodes, operational tradeoffs; exclude heavy math proofs. Audience: senior engineers. Output: decision-oriented research notes.
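The five-field prompt shape above can also be assembled mechanically. This hypothetical helper is not part of the skill, which accepts free-form text; it just makes the Topic/Goal/Scope/Audience/Output structure explicit:

```python
def build_research_prompt(topic: str, goal: str, scope: str,
                          audience: str, output: str) -> str:
    """Assemble a /research prompt from the five recommended fields.
    Illustrative helper only; the skill itself takes free-form text."""
    return (
        f"/research Topic: {topic}. Goal: {goal}. "
        f"Scope: {scope}. Audience: {audience}. Output: {output}."
    )

prompt = build_research_prompt(
    topic="consistent hashing",
    goal="decide whether to use it in a distributed cache design",
    scope="core mechanism, failure cases, virtual nodes; exclude proofs",
    audience="senior engineers",
    output="decision-oriented research notes",
)
print(prompt)
```

Filling every field is not mandatory, but each one you omit is a decision the planning agent has to guess at.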
research skill FAQ
Is this better than a normal prompt for research?
Usually yes when the topic is broad, ambiguous, or decision-heavy. A normal prompt often mixes planning, assumptions, and answer generation in one pass. The research skill forces an explicit plan first, which improves scope and makes the final output easier to trust.
When should I not use the research skill?
Skip it for quick facts, simple definitions, or tasks where you already know the exact subquestion. If you do not need a review step, the two-phase flow may be overhead. It is also a weaker fit if your agent system cannot reliably coordinate subagents.
Is it suitable for beginners?
Yes, but only if beginners can describe their goal, not just the topic. “Teach me Kubernetes” is too broad. “Help me understand Kubernetes enough to deploy one internal service and avoid common architecture mistakes” is much better. The skill helps, but it does not replace good scoping.
Does it fit academic research workflows?
It can support academic research at the question-framing and synthesis stage, especially for mapping terminology, debates, and subtopics. But do not treat it as a substitute for formal literature-review methodology, source verification, citation management, or domain-specific evidence standards unless your broader system adds those steps.
How to Improve research skill
Improve the plan before you approve execution
The highest-leverage improvement is to critique the plan, not just the final notes. Check whether the plan:
- answers the real decision you care about
- separates background from actionable questions
- avoids being too broad
- reflects your audience and constraints
If the plan is generic, ask for narrower angles before execution.
Give stronger inputs for better research outputs
The research skill performs better when you add decision context. Useful details include:
- what you already know
- what confuses you
- what outcome you need next
- what “good enough” means
For example, “compare approaches” is weaker than “compare approaches for maintainability, migration risk, and operational complexity in a small team.”
Watch for common failure modes
Common issues are overbroad topics, unclear audience, and “survey everything” requests. Another failure mode is assuming the skill will infer your project context correctly. If the topic relates to an active codebase, architecture, or course of study, say so explicitly. The skill’s structure helps, but it cannot recover from missing intent.
Iterate after the first pass
Treat the first run as map-making. Then launch a second, narrower research cycle on the parts that matter most: a disputed tradeoff, a hard concept, or a decision branch. Narrow, sequential research runs usually produce better output than one giant request. That is the best way to turn this research skill into a dependable workflow rather than a one-off prompt.
