
multi-search-engine

by openclaw

multi-search-engine is a web research skill with 17 search engines, advanced operators, time filters, privacy-focused options, and WolframAlpha queries. It helps agents build and run better search URLs without API keys.

Stars: 3.8k
Favorites: 0
Comments: 0
Added: Apr 5, 2026
Category: Web Research
Install Command
npx skills add openclaw/skills --skill multi-search-engine
Curation Score: 70/100

This skill scores 70/100, which means it is listable for directory users who want a lightweight no-API search URL toolkit, but they should expect to supply some execution judgment themselves. The repository clearly documents 17 search engines, URL patterns, and advanced operator examples, so an agent can trigger it more reliably than from a generic prompt. However, it remains mostly a documentation-and-URL reference skill rather than a fuller workflow with decision rules, constraints, or automation support.
Strengths
  • Documents 17 concrete engine endpoints in SKILL.md/config.json, making triggering straightforward for agents using web_fetch-style tools.
  • Provides practical examples for basic, site-specific, privacy, and advanced operator searches, plus a reference guide for Google-style deep search syntax.
  • No API keys are required, which lowers adoption friction and makes install intent easy to understand.
Cautions
  • The skill has no scripts, install command, or executable helpers, so users are mainly installing search URL patterns and documentation rather than reusable automation.
  • Operational guidance is thin on engine selection, fallback behavior, rate limits/blocking, and parsing expectations, so agents may still need trial-and-error in real browsing contexts.
Overview of multi-search-engine skill

What multi-search-engine actually does

The multi-search-engine skill gives an agent ready-made search URL patterns for 17 engines across Chinese and global web research, plus examples for advanced operators and direct web_fetch usage. It is best for people who already have browsing capability and want faster, broader discovery without API keys.
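To make the idea concrete, here is a minimal sketch of what "ready-made search URL patterns" amounts to in practice. The engine names and templates below are illustrative assumptions for this sketch; the canonical definitions live in the skill's config.json.

```python
from urllib.parse import quote_plus

# Illustrative URL templates; the skill's config.json holds the canonical set.
ENGINE_TEMPLATES = {
    "google":     "https://www.google.com/search?q={q}",
    "duckduckgo": "https://duckduckgo.com/?q={q}",
    "baidu":      "https://www.baidu.com/s?wd={q}",
    "brave":      "https://search.brave.com/search?q={q}",
}

def build_search_url(engine: str, query: str) -> str:
    """Fill an engine's URL template with a percent-encoded query."""
    template = ENGINE_TEMPLATES[engine]
    return template.format(q=quote_plus(query))
```

An agent would pass the resulting URL to a web_fetch-style tool, e.g. `build_search_url("google", 'site:europa.eu filetype:pdf "AI Act"')`.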

Best fit for Web Research

Use multi-search-engine for Web Research when one engine is not enough: cross-checking regional coverage, finding pages hidden from a default index, running site: and filetype: queries, or switching to privacy-oriented engines like DuckDuckGo, Startpage, and Brave. It also includes WolframAlpha for factual or computational lookups that are not standard web search.

Why users install this instead of prompting manually

The real value is less “searches the web” and more “reduces search formulation guesswork.” The skill consolidates engine endpoints, region choices, and operator examples in one place, so an agent can move from a vague task like “find recent PDF reports from EU regulators” to concrete searches faster. No API keys are required, but you do need a runtime that can open or fetch search result pages.

Key tradeoffs before you install

This multi-search-engine skill is lightweight, not a full search orchestrator. It does not rank sources for you, deduplicate results, or guarantee a way past bot protection. Engines may change how their result pages render over time, and result quality still depends heavily on query construction. Install it if you want a practical search URL toolkit; skip it if you need a managed search API or an automatic crawling pipeline.

How to Use multi-search-engine skill

Install context and files to read first

Install with:
npx skills add openclaw/skills --skill multi-search-engine

Then read SKILL.md first for the engine list and example calls, config.json for the canonical engine definitions, and references/international-search.md for the highest-value operator and time-filter guidance. metadata.json confirms the current scope: 17 engines, no API key requirement.

What input the skill needs

The multi-search-engine skill works best when your prompt includes:

  • the topic or exact entity
  • desired region or language
  • freshness requirement
  • source type, such as news, docs, forums, PDFs, or official sites
  • exclusions, if any

Weak goal: “Research AI policy.”
Strong goal: “Use multi-search-engine to find English and Chinese sources on 2025 AI safety regulation, prioritize official sites and PDFs, include results from Google, Bing INT, Baidu, and DuckDuckGo, and prefer the last 12 months.”
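Those prompt components map directly onto query construction. As a hedged sketch (the helper and its parameter names are my own, not part of the skill), an agent could assemble them like this:

```python
def compose_query(topic, sites=None, filetype=None, exclude=None, exact=None):
    """Assemble a search query string from structured research inputs.

    Every parameter beyond `topic` is an optional operator add-on.
    """
    parts = [topic]
    if exact:
        parts.append(f'"{exact}"')            # exact-match phrase
    for site in sites or []:
        parts.append(f"site:{site}")          # restrict to a domain
    if filetype:
        parts.append(f"filetype:{filetype}")  # e.g. pdf for reports
    for term in exclude or []:
        parts.append(f"-{term}")              # drop noisy terms
    return " ".join(parts)
```

For example, `compose_query("AI safety regulation 2025", sites=["europa.eu"], filetype="pdf")` yields a query ready for any engine that supports Google-style operators.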

How to turn a rough goal into a usable prompt

Ask the agent to generate and execute multiple query variants, not one generic search. A strong multi-search-engine usage prompt looks like:

“Use the multi-search-engine skill for Web Research. Create 6 search queries for this goal: compare open-source vector databases for on-prem deployment. Include site:github.com, site:docs.*, and filetype:pdf variants, run them across Google, Brave, and DuckDuckGo, and summarize overlaps, unique findings, and missing evidence.”

This works because it specifies engines, query families, source bias, and the expected synthesis.

Practical workflow and quality tips

Start broad, then narrow:

  1. Run 2-3 broad discovery queries on one global and one regional engine.
  2. Extract exact product names, authors, domains, or file formats.
  3. Re-run with operators like site:, filetype:, quotes, exclusions, and time filters.
  4. Cross-check surprising claims on a second engine.

Practical tips:

  • Use Google or Bing INT for broad recall.
  • Use Baidu, Sogou, or WeChat when Chinese platform coverage matters.
  • Use DuckDuckGo, Startpage, or Brave when you want alternate ranking and privacy-oriented results.
  • Use WolframAlpha for calculable questions, not document discovery.
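The routing rules above can be captured in a small dispatch table. The engine groupings come from this guide; treat them as defaults rather than hard rules, and note the key names are my own labels.

```python
# Default engine choices per research need, following the tips above.
ENGINE_ROUTES = {
    "broad_recall":     ["google", "bing_int"],
    "chinese_coverage": ["baidu", "sogou", "wechat"],
    "privacy_alt_rank": ["duckduckgo", "startpage", "brave"],
    "computation":      ["wolframalpha"],
}

def pick_engines(need: str) -> list:
    """Suggest engines for a research need, defaulting to broad recall."""
    return ENGINE_ROUTES.get(need, ENGINE_ROUTES["broad_recall"])
```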

multi-search-engine skill FAQ

Is multi-search-engine better than a normal web search prompt?

Usually yes for structured research. A normal prompt often leaves engine choice and query design implicit. The multi-search-engine skill makes those choices explicit, which improves coverage and repeatability, especially for multilingual research, site-restricted searches, and time-bounded fact finding.

Is this beginner-friendly?

Yes, if you already understand basic search operators or are willing to copy the examples. The skill is simple because it mostly exposes search URL templates and query patterns. Beginners may still need to learn when to use quotes, site:, filetype:, or exclusions to avoid noisy results.

When is this a poor fit?

Do not rely on this multi-search-engine skill if you need guaranteed stable scraping, official API SLAs, or automatic result aggregation. It is also not the right tool for closed databases, login-only content, or tasks where direct source extraction matters more than discovery.

Which engines should I try first?

  • General English research: Google, DuckDuckGo, Brave.
  • Mixed global and China-focused discovery: Bing INT, Baidu, Sogou, WeChat.
  • Documents and official publications: start with Google plus site: and filetype:pdf.
  • Computational facts: WolframAlpha.

How to Improve multi-search-engine skill

Give multi-search-engine sharper constraints

Better outputs come from better search framing. Specify geography, date range, content type, and trust preference. “Find startup funding news” is weak. “Use multi-search-engine to find venture funding announcements for robotics startups in Japan since Jan 2025 from company blogs, TechCrunch-like outlets, and official filings” is much stronger.

Use operator-driven query sets, not single searches

The most common failure mode is stopping after one broad query. Instead, ask for a query pack:

  • exact-match query with quotes
  • site: query for known domains
  • filetype:pdf query for reports
  • exclusion query to remove noise
  • time-filtered query for recency

This is where the skill’s reference material adds real value beyond a repo skim.
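A query pack like the one listed above can be generated mechanically. In this sketch the domain and exclusion values are placeholders, and the `after:` recency filter assumes Google-style date syntax:

```python
def query_pack(base, domains, exclude_terms, year=None):
    """Expand one base query into the operator variants listed above."""
    pack = [f'"{base}"']                                   # exact-match with quotes
    pack += [f"{base} site:{d}" for d in domains]          # known-domain variants
    pack.append(f"{base} filetype:pdf")                    # report-oriented variant
    pack.append(base + "".join(f" -{t}" for t in exclude_terms))  # noise exclusions
    if year:
        pack.append(f"{base} after:{year}-01-01")          # recency filter (Google syntax)
    return pack
```

Running the pack across two or three engines and comparing overlaps is where the cross-engine coverage pays off.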

Handle common quality problems

If results are thin, switch engines before rewriting the whole task. If results are noisy, add quotes, exclusions, and domain restrictions. If the topic is regional, use a region-appropriate engine and language. If the task is analytical rather than document-based, route part of it to WolframAlpha instead of forcing everything through standard search.

Iterate after the first pass

After the first multi-search-engine usage round, ask the agent to list:

  • which engines produced unique sources
  • where results were repetitive
  • what new keywords appeared
  • what evidence is still missing

Then run a second pass using discovered terminology, organization names, and file types. That second iteration is usually where this skill becomes more valuable than a generic browsing prompt.

Ratings & Reviews

No ratings yet