
firecrawl-search

by firecrawl

firecrawl-search is a web research skill for finding sources, running structured search, and optionally scraping full page content as JSON with Firecrawl CLI.

Stars: 234
Favorites: 0
Comments: 0
Added: Mar 31, 2026
Category: Web Research
Install Command
npx skills add firecrawl/cli --skill firecrawl-search
Curation Score

This skill scores 78/100, which means it is a solid directory listing candidate: agents get clear trigger cues, concrete CLI examples, and a believable workflow advantage over a generic prompt for web research. Directory users can reasonably decide to install it if they want Firecrawl-backed search with optional full-page extraction, though they should expect some operational details to remain implicit.

Strengths
  • Strong triggerability: the description explicitly maps many common user intents like "search for," "find me," "look up," and research/news requests.
  • Good operational leverage: the skill shows concrete commands for basic search, search plus scraping, and recent news, with JSON output paths and key flags.
  • Credible workflow fit: it explains where search sits in a broader escalation pattern (search → scrape → map → crawl → interact), helping agents choose it as a first step.
Cautions
  • Adoption clarity is limited by sparse packaging/support files: there is no install command in SKILL.md and no companion scripts, references, or metadata.
  • Option guidance appears partially documented, but constraints and decision rules are thin, so agents may still need some guesswork for edge cases and parameter selection.
Overview

Overview of firecrawl-search skill

What firecrawl-search does

firecrawl-search is a web research skill for finding pages first, then optionally extracting the full content of those pages in the same step. It is best for users who need more than a search snippet: source discovery, article gathering, recent news checks, and evidence collection for later summarization or comparison.

Who should install firecrawl-search skill

The best fit is anyone doing AI-assisted web research who does not already have a target URL. If your job starts with “find sources about X,” “search recent coverage,” or “look up what people are saying,” this skill is more direct than a generic prompt because it turns that request into a repeatable CLI workflow with structured JSON output.

The real job-to-be-done

Most users installing firecrawl-search want three things:

  1. find relevant pages quickly,
  2. optionally pull full page markdown instead of snippets,
  3. hand clean results to an agent for synthesis, filtering, or follow-up scraping.

That makes firecrawl-search for Web Research especially useful as the first step in a broader search → scrape → map → crawl workflow.

Why users choose firecrawl-search over ordinary prompting

The main differentiator is that firecrawl-search returns real search results in machine-friendly JSON and can add full-page extraction with --scrape. Compared with asking a model to “search the web,” this gives you:

  • explicit query control,
  • source-type control such as web or news,
  • result limits,
  • easier downstream parsing,
  • a clearer boundary between search and analysis.

What matters before you install

The skill is lightweight in repo structure, but the important decision is not documentation depth; it is whether the workflow matches your task. Install firecrawl-search skill if you need discovery plus optional content capture. Do not treat it as a full site crawler, browser automation tool, or final-answer engine by itself.

Best-fit and misfit cases

Use firecrawl-search when:

  • you need sources on a topic but do not know the URLs yet,
  • you need recent news or multiple viewpoints,
  • you want search results saved to files for later processing.

Skip it when:

  • you already know the exact page to scrape,
  • you need deep traversal across a site,
  • you need rich interaction with forms or dynamic web apps.

How to Use firecrawl-search skill

The repository excerpt shows the skill expects CLI access through:

  • firecrawl *
  • npx firecrawl *

A practical install path on a skills-enabled setup is:

npx skills add https://github.com/firecrawl/cli --skill firecrawl-search

Then confirm your environment can run either firecrawl or npx firecrawl commands.

Read this file first

For this specific skill, start with:

  • skills/firecrawl-search/SKILL.md

There are no meaningful support folders surfaced here, so most adoption decisions come from that one file. Read it to confirm the intended trigger phrases, command patterns, and search options.

The core firecrawl-search commands

The upstream skill centers on three patterns:

firecrawl search "your query" -o .firecrawl/result.json --json
firecrawl search "your query" --scrape -o .firecrawl/scraped.json --json
firecrawl search "your query" --sources news --tbs qdr:d -o .firecrawl/news.json --json

These cover the main usage modes:

  • basic search,
  • search plus full-page extraction,
  • news search with recency filtering.
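Once a run is saved, a short script can surface the result set for review before any scraping. This is a sketch under an assumed output schema: a top-level "data" array of objects with "title" and "url" keys. Check the actual JSON your CLI version emits and adjust the keys accordingly.

```python
import json


def top_results(path, limit=5):
    """Read a saved firecrawl search result file and return (title, url) pairs.

    Assumes the file holds a JSON object with a "data" list whose items
    carry "title" and "url" keys; this schema is illustrative, not
    confirmed for every CLI version.
    """
    with open(path) as f:
        payload = json.load(f)
    results = payload.get("data", [])
    return [(r.get("title", ""), r.get("url", "")) for r in results[:limit]]
```

Reviewing titles and URLs this way keeps the discovery step cheap: you only escalate to `--scrape` once the result set looks right.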

What input firecrawl-search needs

Good firecrawl-search usage starts with a query that is explicit about:

  • topic,
  • time frame,
  • source type,
  • intent.

Weak input: "AI regulation"

Stronger input: "EU AI Act enforcement guidance 2025 official commentary"

The stronger query improves relevance because the search stage is literal. If your request is broad, the output will be broad.
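The same sharpening can be done mechanically. This purely illustrative helper (not part of the skill) just concatenates the pieces the text recommends: a topic, named entities, an intent word, and a date signal.

```python
def build_query(topic, entities=(), intent=None, year=None):
    """Compose a specific search query from a rough topic.

    Illustrative only: mirrors the advice that named entities, intent
    words, and a date signal make the literal search stage more precise.
    """
    parts = [topic, *entities]
    if intent:
        parts.append(intent)
    if year:
        parts.append(str(year))
    return " ".join(parts)
```

For example, `build_query("AI regulation", entities=("EU AI Act",), intent="official commentary", year=2025)` turns the weak query above into the stronger one.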

How to turn a rough goal into a strong prompt

If the user says, “Find what companies are saying about open-source AI security,” convert that into an invocation plan:

  • define the target angle: vendor statements, blog posts, reports, interviews,
  • define recency: last 30 days or last year,
  • define sources: web or news,
  • decide whether full-page extraction is needed immediately.

A stronger agent prompt for firecrawl-search looks like:

Use firecrawl-search to find recent web and news sources about open-source AI security from the last 30 days. Return 10 results in JSON, then scrape the top 5 pages with substantive content for comparison.

That prompt is better because it specifies search surface, time horizon, output shape, and follow-up action.

When to use --scrape immediately

Use --scrape when snippets are not enough and you know you will need the page body for:

  • summarization,
  • quote extraction,
  • policy comparison,
  • content clustering.

Avoid --scrape on the first pass when you are still exploring a noisy topic. Search-only is faster for query tuning; scrape after you confirm the right result set.

Choosing source types and recency well

The visible options include:

  • --sources <web,images,news>
  • --limit <n>
  • --tbs ...

For most research tasks:

  • use --sources news when timeliness matters,
  • use --sources web when you want broader source discovery,
  • keep --limit modest at first to reduce noise,
  • use --tbs when the request implies recent coverage.

A common quality mistake is searching news-like queries without a recency filter, then mixing stale and current reporting.
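One way to avoid that mistake is to derive the flag list from the research constraints instead of typing flags ad hoc. In this sketch, the `qdr:d` code comes from the skill's own news example; the week/month/year variants are assumed to follow the same convention and should be verified against your CLI version.

```python
def search_args(query, sources="web", limit=5, recency=None):
    """Build a firecrawl search argument list from research constraints.

    The --tbs codes beyond qdr:d (shown in the skill) are assumptions:
    qdr:w, qdr:m, and qdr:y are presumed week/month/year equivalents.
    """
    tbs = {"day": "qdr:d", "week": "qdr:w", "month": "qdr:m", "year": "qdr:y"}
    args = ["firecrawl", "search", query,
            "--sources", sources, "--limit", str(limit), "--json"]
    if recency:
        args += ["--tbs", tbs[recency]]
    return args
```

Pairing `sources="news"` with a `recency` value by default keeps news-like queries from silently mixing stale and current reporting.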

Suggested workflow for web research

A practical firecrawl-search workflow is:

  1. Start with a narrow search query.
  2. Save JSON output to .firecrawl/....
  3. Review titles and URLs for relevance.
  4. Refine the query if results are off-target.
  5. Re-run with --scrape only after the result set is good.
  6. Summarize or compare the scraped content in a second step.

This staged workflow is usually better than asking for broad search and full extraction in one vague request.
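The staged workflow can be sketched as a small two-pass function. The runner is injected as a callable so the flow is testable without the CLI installed; the `firecrawl scrape <url>` subcommand and the result schema (a "data" list of objects with a "url" key) are assumptions to check against your Firecrawl CLI version.

```python
def two_pass_research(query, run, keep=5):
    """Two-pass sketch: search first, then scrape only the kept URLs.

    `run` is an injected callable that executes a command list and
    returns parsed JSON. Both the scrape subcommand shape and the
    "data"/"url" schema are assumed, not confirmed.
    """
    search_out = run(["firecrawl", "search", query, "--json"])
    urls = [r["url"] for r in search_out.get("data", [])][:keep]
    pages = [run(["firecrawl", "scrape", url, "--json"]) for url in urls]
    return urls, pages
```

In practice you would insert a review or refine step between the two passes, as the numbered workflow above describes, rather than scraping the first result set unconditionally.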

Output handling and file habits

The examples save results to .firecrawl/result.json style paths. Keep doing that. It makes the skill more useful because:

  • you can inspect raw search output,
  • agents can reuse the file in later steps,
  • you can separate discovery from synthesis,
  • failures are easier to debug than ephemeral chat-only output.

Practical tips that change output quality

A few high-impact habits improve firecrawl-search usage materially:

  • Put named entities in the query: company names, law names, product names.
  • Add intent words like official, comparison, case study, or announcement.
  • Split exploratory and extraction runs.
  • Ask for a result count deliberately instead of taking a very large set by default.
  • Use news-specific queries only with recency constraints.

Boundaries to understand before relying on it

The skill description explicitly positions firecrawl-search as stronger than built-in web search for structured output and optional content extraction, but it still has limits:

  • it depends on query quality,
  • broad searches can return noisy results,
  • full-page scraping is useful but not the same as deep site crawling,
  • it is a research acquisition step, not validation by itself.

firecrawl-search skill FAQ

Is firecrawl-search better than a normal “search the web” prompt?

For repeatable research workflows, yes. firecrawl-search is better when you need explicit commands, JSON output, saved files, and optional page extraction. A generic prompt may be fine for one-off curiosity, but it is weaker for traceable, multi-step research.

Is firecrawl-search skill beginner-friendly?

Yes, if you are comfortable running a CLI command and reading JSON output. The command surface shown in the skill is small. The main beginner challenge is query design, not installation complexity.

When should I use firecrawl-search instead of scraping a URL directly?

Use firecrawl-search skill when discovery comes first. If you already know the exact page you want, direct scraping is usually the cleaner path.

Can firecrawl-search handle recent news research?

Yes. The skill explicitly shows --sources news and a --tbs qdr:d pattern for recent results. That makes it suitable for time-sensitive checks, provided you define the time horizon clearly.

Is firecrawl-search enough for full web research pipelines?

Usually it is the first step, not the whole pipeline. The skill itself points to a workflow escalation pattern: search → scrape → map → crawl → interact. Install it if discovery is your bottleneck; add other skills if traversal or interaction is the bottleneck.

When is firecrawl-search a poor fit?

It is a poor fit when:

  • you need website automation,
  • you need authenticated browsing,
  • you need exhaustive domain crawling,
  • you already have the target URLs.

How to Improve firecrawl-search skill

Improve firecrawl-search results by tightening the query

The biggest lever is query specificity. If first-pass results are weak, do not just increase the limit. Rewrite the query with:

  • a clear subject,
  • a source angle,
  • a date signal,
  • a geography or domain constraint if relevant.

Better query rewrites usually beat larger result sets.

Use two-pass research instead of one-pass overload

A common failure mode is asking firecrawl-search to do too much at once. Better pattern:

  • pass 1: search only to identify high-value URLs,
  • pass 2: scrape selected results for full text.

This reduces irrelevant scraping and improves downstream summaries.

Ask for the output shape you actually need

If your next step is analysis, request structured handling explicitly:

  • save raw JSON,
  • identify top results,
  • scrape only the finalists,
  • summarize after extraction.

This is more reliable than asking an agent to “research everything” in one shot.

Reduce noise with source and time constraints

When results feel messy, add constraints before adding volume:

  • switch to --sources news for current events,
  • use --tbs for recency,
  • lower or cap --limit,
  • narrow the topic wording.

This is often the fastest way to improve firecrawl-search for Web Research.

Watch for common failure modes

Typical issues with firecrawl-search are:

  • overly broad queries,
  • scraping too early,
  • mixing evergreen and time-sensitive intents,
  • treating search results as final evidence without reading the extracted pages.

If quality drops, check those assumptions first.

Give the agent stronger instructions

A better invocation prompt usually includes:

  • the research question,
  • what counts as a good source,
  • desired source type,
  • recency needs,
  • how many results to collect,
  • whether to scrape the result pages.

Example:

Use firecrawl-search to find 8 recent news and web sources on open-source AI model security benchmarks from the past 14 days. Save JSON results, then scrape the top 4 substantive sources for detailed comparison.

That instruction improves result quality because it removes guesswork.

Iterate after the first output

Do not judge the firecrawl-search skill from one broad run. Review the first result set and then refine:

  • add missing entities,
  • remove ambiguous terms,
  • split one query into two narrower searches,
  • rerun scraping only on clearly relevant pages.

The skill works best when treated as an iterative research tool rather than a single-shot answer generator.

Ratings & Reviews

No ratings yet