collecting-open-source-intelligence
by mukul975

The collecting-open-source-intelligence skill helps analysts gather and synthesize passive OSINT for Threat Intelligence, including hostile infrastructure, phishing support systems, exposed services, and public indicators. It is designed for structured enrichment, cross-source correlation, and report-ready output using Shodan, crt.sh, WHOIS/RDAP, and GitHub exposure checks.
This skill scores 78/100, which means it is a solid listing candidate for directory users who want a real OSINT collection workflow rather than a generic prompt. The repository gives enough structure, tooling, and trigger guidance to help an agent recognize when to use it and execute with less guesswork, though users should still expect some setup and authorization caveats.
- Clear activation guidance for OSINT, threat actor infrastructure, phishing investigation, and authorized reconnaissance use cases
- Operational depth: SKILL.md plus an API reference and Python agent script outline concrete sources and functions such as Shodan, crt.sh, RDAP WHOIS, SecurityTrails, and GitHub code search
- Good trust signals: valid frontmatter, no placeholder markers, explicit passive-only warning, and repo-backed script/reference files
- No install command in SKILL.md, so users may need to infer setup and dependency steps from the reference files
- The workflow assumes external APIs and tools like Shodan and Maltego, which may limit immediate use for users without keys/licenses
Overview of collecting-open-source-intelligence skill
What this skill does
The collecting-open-source-intelligence skill helps you gather and synthesize passive OSINT for threat intelligence work: hostile infrastructure, phishing support systems, exposed services, and related public indicators. It is best for analysts who need a structured way to enrich an investigation, not for people looking for a generic web search prompt.
Who should install it
Use this collecting-open-source-intelligence skill if you work on CTI, incident response, authorized red team recon, or external attack surface review. It fits readers who want practical collection steps for Shodan, crt.sh, WHOIS/RDAP, GitHub exposure, and similar public sources.
Why it is useful
Its main value is workflow guidance: it pushes you toward passive collection, cross-source correlation, and report-ready output. That makes it more useful than a one-off collecting-open-source-intelligence usage prompt when you need consistent evidence gathering for threat actor or infrastructure analysis.
How to Use collecting-open-source-intelligence skill
Install it and load the right context
Install with npx skills add mukul975/Anthropic-Cybersecurity-Skills --skill collecting-open-source-intelligence. Then read SKILL.md first, followed by references/api-reference.md and scripts/agent.py to understand the intended data flow and required inputs before you ask the model to act.
Turn a vague goal into a usable prompt
For best collecting-open-source-intelligence usage, specify the target, the authorization boundary, and the output you want. Strong input looks like: “Collect passive OSINT on example.com for a threat intelligence brief. Focus on subdomains, certificate data, Shodan exposure, and GitHub references. Return a concise evidence table and key assessment points.” Weak input like “Investigate this domain” leaves too much room for guesswork.
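The strong-vs-weak input pattern above can be sketched as a small helper that forces you to fill in each boundary explicitly. The function name, field names, and prompt template below are illustrative assumptions, not something the skill defines:

```python
# Sketch: assemble a well-scoped OSINT prompt from explicit fields.
# The helper name and template are illustrative, not defined by the skill.

def build_osint_prompt(target: str, authorization: str, deliverable: str,
                       focus_areas: list) -> str:
    """Turn a vague goal into a bounded, report-oriented request."""
    focus = ", ".join(focus_areas)
    return (
        f"Collect passive OSINT on {target} for {deliverable}. "
        f"Authorization boundary: {authorization}. "
        f"Focus on {focus}. "
        "Return a concise evidence table and key assessment points."
    )

prompt = build_osint_prompt(
    target="example.com",
    authorization="passive sources only, no active scanning",
    deliverable="a threat intelligence brief",
    focus_areas=["subdomains", "certificate data", "Shodan exposure",
                 "GitHub references"],
)
print(prompt)
```

If any field is hard to fill in, that gap is usually the guesswork the weak prompt would have left to the model.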
What to provide up front
Give the skill the domain, IP, campaign name, actor alias, or infrastructure set you want analyzed, plus any known indicators. If you already know the intended audience, say whether the output is for triage, CTI enrichment, or executive reporting; that changes how much detail and attribution the skill should emphasize.
Work in a passive-first sequence
The repo’s workflow supports passive collection first, then correlation. Start with certificate transparency, RDAP/WHOIS, Shodan, and GitHub exposure checks, then combine findings into a short assessment. Avoid asking it to “scan” unless your scope explicitly allows active recon, because that changes the legal and operational profile of the task.
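The first-pass sources in that sequence can be sketched as a URL builder. The crt.sh JSON output and the rdap.org aggregator are real public endpoints; the Shodan and GitHub URLs assume you hold the relevant API keys, and nothing here fires a request, so the sketch itself stays passive:

```python
from urllib.parse import quote

# Build passive lookup URLs for the workflow's first-pass sources.
# crt.sh JSON output and the rdap.org aggregator are public endpoints;
# the Shodan and GitHub API lookups assume you have API credentials.

def passive_lookups(domain: str) -> dict:
    q = quote(domain)
    return {
        "cert_transparency": f"https://crt.sh/?q=%25.{q}&output=json",
        "rdap_whois": f"https://rdap.org/domain/{q}",
        "shodan_search": (
            f"https://api.shodan.io/shodan/host/search?query=hostname:{q}"
        ),
        "github_code": f"https://api.github.com/search/code?q=%22{q}%22",
    }

urls = passive_lookups("example.com")
for source, url in urls.items():
    print(f"{source}: {url}")
```

Running the lookups in this order (certificates and registration data before exposure checks) keeps the early steps entirely within well-established passive sources.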
collecting-open-source-intelligence skill FAQ
Is this only for threat intelligence?
No. The collecting-open-source-intelligence skill is strongest for Threat Intelligence, but it also helps with authorized pre-engagement recon and external exposure review. If your goal is product marketing, general brand research, or journalism, it is usually the wrong tool.
Do I need tools like Shodan or Maltego installed?
The skill is designed around those ecosystems, but you do not need every tool to use the guidance well. The more important question is whether you can access the data sources the workflow depends on, especially Shodan, GitHub, and certificate transparency logs.
How is this different from a normal prompt?
A normal prompt usually asks for one answer. This collecting-open-source-intelligence guide is better when you need a repeatable collection process, source selection, and output structure. That reduces missed indicators and makes the result easier to reuse in a report.
Is it beginner-friendly?
Yes, if you already understand that OSINT collection must stay passive unless your authorization says otherwise. Beginners benefit most when they start with one domain or one campaign and let the skill structure the sources and summary for them.
How to Improve collecting-open-source-intelligence skill
State the analytical objective clearly
The biggest quality gain comes from naming the decision you are supporting. “Find infrastructure tied to a phishing wave” will produce different output than “enrich a CTI profile for reporting.” The more explicit the objective, the better the collecting-open-source-intelligence skill can prioritize sources and significance.
Include constraints and scope limits
Say what not to do: no active scanning, no dark web follow-up, no speculative attribution, or no collection outside a named domain set. This prevents the model from drifting beyond the safe or useful boundary and keeps the collecting-open-source-intelligence output aligned with your case.
Ask for evidence, not just conclusions
Useful outputs cite where each observation came from and separate confirmed indicators from inferred links. If the first draft is too broad, ask for a tighter evidence table, a source-by-source confidence note, or a shorter “what changed my assessment” summary.
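A minimal way to keep evidence separable is to record each observation with its source and a confirmed/inferred status. The records below are made up for illustration:

```python
# Sketch: tie each observation to its source and confidence so that
# confirmed indicators and inferred links stay separable in the report.
# All observation values here are fabricated examples.

evidence = [
    {"observation": "mail.example.com seen in CT logs",
     "source": "crt.sh", "status": "confirmed"},
    {"observation": "same TLS cert reused on 203.0.113.10",
     "source": "Shodan", "status": "confirmed"},
    {"observation": "IP likely tied to phishing kit hosting",
     "source": "analyst inference", "status": "inferred"},
]

confirmed = [e for e in evidence if e["status"] == "confirmed"]
inferred = [e for e in evidence if e["status"] == "inferred"]

for e in evidence:
    print(f'{e["status"]:>9} | {e["source"]:<17} | {e["observation"]}')
```

Asking the model to fill a structure like this, rather than free prose, is one way to get the tighter evidence table described above.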
Iterate with better seed data
The fastest way to improve results is to add more concrete starting points: known domains, IPs, certificate subjects, ASN hints, usernames, or repo names. For collecting-open-source-intelligence for Threat Intelligence, even a small seed set can produce a much stronger correlation pass than a generic target name alone.
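The correlation pass a seed set enables can be sketched as a simple overlap count: each source contributes a set of indicators, and anything seen in two or more sources becomes a stronger pivot. The indicator values are invented for illustration:

```python
from collections import Counter

# Sketch: a minimal cross-source correlation pass. Indicators seen in
# more than one source are better pivots than single-source hits.
# All indicator values below are fabricated examples.

findings_by_source = {
    "crt.sh": {"login.example.com", "mail.example.com"},
    "shodan": {"203.0.113.10", "login.example.com"},
    "github": {"203.0.113.10", "deploy.example.com"},
}

counts = Counter(
    indicator
    for indicators in findings_by_source.values()
    for indicator in indicators
)
corroborated = sorted(i for i, n in counts.items() if n >= 2)
print(corroborated)  # indicators seen in more than one source
```

Even two or three seed indicators sharply increase the chance of overlaps like these, which is why a concrete seed set beats a bare target name.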
