
deep-research

by sanjay3290

deep-research is a GitHub skill for autonomous multi-step research with Google Gemini. It plans, searches, reads, and synthesizes sources into cited reports for market analysis, competitive landscaping, technical research, literature reviews, and due diligence. Use it when you need structured, multi-step web research rather than a single-prompt answer.

Stars: 0
Favorites: 0
Comments: 0
Added: May 9, 2026
Category: Web Research
Install Command
npx skills add sanjay3290/ai-skills --skill deep-research
Curation Score

This skill scores 78/100, which means it is a solid directory candidate: users have enough evidence to judge install value, and agents can trigger it with a defined command flow rather than guesswork. The repo shows a real, non-placeholder research workflow with clear use cases, API requirements, and CLI entry points, though it still leaves some adoption details to the user.

Strengths
  • Frontmatter and description clearly define the trigger: autonomous multi-step research for market analysis, literature reviews, competitive landscaping, and due diligence.
  • Operationally usable CLI examples are provided for query, stream, no-wait, status, and wait modes, reducing ambiguity for agent execution.
  • The Python script and README indicate a substantive workflow with local history/cache support and cited-report output rather than a demo stub.
Cautions
  • No install command in SKILL.md, so users must infer setup from README and requirements instead of following a single canonical entry point.
  • The skill depends on an external Gemini API key and paid usage, which may limit adoption for users expecting a self-contained skill.
Overview

Overview of deep-research skill

What deep-research does

The deep-research skill runs Google Gemini’s Deep Research workflow for questions that need planning, web reading, and synthesis, not a quick chat response. It is a strong fit when you want a cited report on market analysis, competitive landscaping, technical research, literature review, or due diligence.

Who should install it

Install the deep-research skill if you regularly need multi-source research with a clear answer at the end, especially when you care about traceable sources and structured output. It is less useful for one-off brainstorming, shallow fact lookup, or tasks where you only need a short summary from a single prompt.

Why it is different

The key value of deep-research is the workflow: it can plan the investigation, search iteratively, read sources, and synthesize results into a report. That makes it better than ordinary prompting for topics with many moving parts, competing claims, or source-heavy decisions.

How to Use deep-research skill

Install deep-research

Use the repository skill installer, then install Python dependencies and set your API key before running anything:

npx skills add sanjay3290/ai-skills --skill deep-research
cd skills/deep-research
pip install -r requirements.txt
cp .env.example .env

Add GEMINI_API_KEY to .env or export it in your shell. If the key is missing, the skill cannot start a research task.
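A minimal sketch of both key-setup options, assuming a POSIX shell (the placeholder value `your-key-here` must be replaced with a real Gemini API key):

```shell
# Option 1: persist the key in .env, which the script reads at startup
echo 'GEMINI_API_KEY=your-key-here' >> .env

# Option 2: export it only for the current shell session
export GEMINI_API_KEY="your-key-here"
```

Either approach works; .env survives new terminal sessions, while an export keeps the key out of files on disk.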

Start a research task

The core deep-research usage pattern is a single focused query:

python3 scripts/research.py --query "Research the competitive landscape of cloud providers in 2024"

For better output, turn a vague request into a research brief with scope, timeframe, geography, and deliverable shape. For example, ask for “top 5 vendors, source-backed comparison, risks, and recommendation” instead of just “compare vendors.”

Give the skill better inputs

The deep-research guide works best when your prompt includes:

  • the decision you are trying to make
  • the audience for the report
  • constraints like region, date range, or industry
  • the format you want returned

Example:

python3 scripts/research.py --query "For a CTO choosing a frontend stack in 2025, compare React, Vue, and Angular for hiring availability, ecosystem maturity, and long-term maintenance. Return a concise recommendation with sources."

If you want a very specific structure, use --format to shape the report before generation.
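A sketch of how a --format call might look; the flag is documented for this skill, but the exact value syntax shown here is an assumption, so check `python3 scripts/research.py --help` for the real accepted forms:

```shell
# Hypothetical --format value: a free-text structure hint for the report
python3 scripts/research.py \
  --query "Compare managed Postgres offerings for a 10-person startup in 2025" \
  --format "comparison table, top 3 picks, risks, one-paragraph recommendation"
```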

Read these files first

If you are reviewing the repo or adapting the skill, start with SKILL.md, then inspect README.md, requirements.txt, and scripts/research.py. README.md shows the expected workflow, while scripts/research.py reveals supported flags such as --stream, --wait, --status, and --json.
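Based on the modes listed for this skill (query, stream, no-wait, status, wait), a plausible long-running flow looks like the sketch below; the exact flag combinations are assumptions, so verify them against scripts/research.py before relying on them:

```shell
# Kick off a task without blocking, then come back later
python3 scripts/research.py --query "State of WebAssembly server runtimes in 2025" --no-wait

# Poll progress; --json is useful when another tool parses the result
python3 scripts/research.py --status --json

# Or block until the report is ready, streaming output as it arrives
python3 scripts/research.py --query "..." --stream --wait
```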

deep-research skill FAQ

Is deep-research the same as a normal prompt?

No. A normal prompt usually asks the model to answer directly. deep-research is for a deeper workflow that searches, reads, and synthesizes across sources, which is why it is better for research tasks with evidence requirements.

When should I not use deep-research?

Do not use deep-research for quick trivia, simple rewriting, or questions where you already know the answer and only need phrasing help. It is also a poor fit if you cannot provide enough context to define the research target.

Is deep-research beginner-friendly?

Yes, if you can state a clear question and accept a slower response. The main beginner mistake is using a broad topic with no scope, which leads to generic output instead of a useful report.

What should I expect from deep-research install?

You should expect a Python-based local setup, a Gemini API key, and a command-line workflow. If you prefer a fully hosted UI or no API configuration, this deep-research skill may feel more hands-on than you want.


How to Improve deep-research skill

Make the research question decision-shaped

The biggest quality boost comes from turning “research X” into a decision-ready brief. Include what you need to choose, compare, explain, or verify, not just the topic name. Better inputs reduce wandering and improve the final synthesis.

Use constraints to reduce noise

If the first answer feels too broad, narrow the deep-research prompt with one or two concrete constraints: region, audience, company size, time window, or source type. For example, “U.S. B2B SaaS in 2024” is much more actionable than “market analysis.”
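The narrowing advice above can be sketched as a single command; the topic and constraints here are illustrative, not from the skill's own examples:

```shell
# Scoped brief instead of a bare "market analysis" request
python3 scripts/research.py --query "Market analysis of U.S. B2B SaaS billing platforms in 2024 for a Series A founder: top vendors, pricing models, and switching risks"
```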

Iterate on structure, not just content

If the report is close but not ideal, improve the prompt by changing the output format request, not only the topic wording. Ask for a table, ranked recommendations, risks, or executive summary when those elements affect how you will use the result.

Watch for common failure modes

The most common issue is an underspecified query that produces a broad, lightly differentiated report. The second is asking for too many unrelated subtopics in one run. Split large research projects into narrower passes, then combine the results yourself or in a follow-up prompt.
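Splitting a large project into narrower passes can look like the sketch below; the vendor names are hypothetical placeholders you would replace with the shortlist from your own first pass:

```shell
# Pass 1: map the landscape only
python3 scripts/research.py --query "Identify the top 5 vector database vendors as of 2025, with funding and deployment model"

# Pass 2: deep-dive, constrained to the shortlist pass 1 produced
python3 scripts/research.py --query "Compare Pinecone, Weaviate, and Qdrant on cost, latency benchmarks, and self-hosting options, with sources"
```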

Ratings & Reviews

No ratings yet