requesthunt
by ReScienceLab

requesthunt helps you collect and analyze real user feedback from Reddit, X, and GitHub for demand research and competitive analysis. Set a `REQUESTHUNT_API_KEY`, run the Python scripts, scrape topics, search requests, and turn pain points, complaints, and feature requests into evidence-backed reports.
This skill scores 78/100, which means it is a solid directory listing candidate for agents that need structured user-demand research from real feedback sources. Repository evidence shows a real workflow with prerequisite setup, runnable Python scripts, and example outputs, so users can make a credible install decision even though installation/runtime assumptions are still somewhat implicit.
- Strong triggerability: the frontmatter clearly says to use it for demand research, feature requests, complaints, and RequestHunt queries across Reddit, X, and GitHub.
- Operationally concrete: SKILL.md defines a step-by-step research workflow and includes runnable commands like `get_usage.py`, `scrape_topic.py`, `search_requests.py`, and `list_requests.py`.
- Good install-decision evidence: the repo includes two substantial examples, including a full conversation and a sample research report showing the intended output quality.
- Setup clarity is incomplete: it requires a `REQUESTHUNT_API_KEY` in `~/.zshrc`, but there is no explicit install command or fuller environment/dependency guidance beyond running `python3` scripts.
- Some workflow details may still require guesswork, since the skill emphasizes collection/reporting flow but provides limited practical guidance for handling failures, platform quirks, or report customization edge cases.
Overview of requesthunt skill
What requesthunt does well
The requesthunt skill helps you turn vague market questions into evidence-backed demand research using real user feedback from Reddit, X, and GitHub. It is best for people doing product planning, feature prioritization, and competitive analysis when they need source-grounded pain points instead of opinionated brainstorming.
Who should install requesthunt
This requesthunt skill is a strong fit for founders, PMs, growth researchers, and AI agents that need to answer questions like:
- What complaints keep showing up across competitors?
- Which feature requests have real user pull?
- What pain points are most urgent in a category?
- What should we compare across tools before building?
If you already know your target market but need outside-in evidence, requesthunt is more useful than a generic research prompt.
The real job-to-be-done
Users rarely want “social listening” in the abstract. They want a usable report: recurring requests, representative quotes, platform spread, and concrete signals for roadmap or competitor positioning. requesthunt is built around that workflow: define scope, collect data, inspect requests, and synthesize findings.
What makes requesthunt different from plain prompting
The main differentiator is access to a repeatable collection workflow backed by API-driven scripts, not just an LLM guessing what users might want. The skill includes focused command-line tools for:
- checking API usage
- discovering topics
- triggering realtime scraping
- searching requests with expansion
- listing request records for review
That makes requesthunt usage more auditable than asking a model to “research user pain points” from memory.
Important adoption constraints
Before installing requesthunt, you need a `REQUESTHUNT_API_KEY` and a Python-capable environment. This skill also depends on the quality of your scoping. If your topic is too broad, your output will be noisy. If your topic is too narrow, you may under-sample demand.
How to Use requesthunt skill
Install context and prerequisites
The repository does not expose a one-line package installer inside SKILL.md; instead, the practical setup is environment plus scripts. You need:
- access to the `skills/requesthunt` folder
- `python3`
- a RequestHunt API key from https://requesthunt.com/settings/api
Set the key in your shell config:
```shell
export REQUESTHUNT_API_KEY="your_api_key"
```
Then verify the connection:
```shell
cd skills/requesthunt
python3 scripts/get_usage.py
```
If this fails, fix auth first before trying any research workflow.
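If you plan to run the skill from larger scripts, the same auth check can be automated. The sketch below is a hypothetical preflight helper, not part of the skill: `key_is_set` only confirms the environment variable is non-empty; a working key still needs the `get_usage.py` check above.

```python
import os

def key_is_set(env=None):
    """Hypothetical preflight: True if REQUESTHUNT_API_KEY is set and non-empty."""
    env = os.environ if env is None else env
    return bool(env.get("REQUESTHUNT_API_KEY", "").strip())

# Demo on explicit environment dicts; real use calls key_is_set() to read os.environ.
assert key_is_set({"REQUESTHUNT_API_KEY": "example_key"})
assert not key_is_set({})
# On success, run `python3 scripts/get_usage.py` to verify the key against the API.
```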
Files to read first
For a fast requesthunt guide, start here in order:
- `SKILL.md`
- `examples/calendar-app-research.md`
- `examples/scheduling-tools-research-report.md`
- `scripts/get_usage.py`
- `scripts/scrape_topic.py`
- `scripts/search_requests.py`
- `scripts/list_requests.py`
Why this order matters: the examples show the expected conversation and report shape, while the scripts tell you what inputs the API actually accepts.
Inputs requesthunt needs from you
The skill works best when you provide five things up front:
- research goal
- target products or competitors
- platform preference
- time recency preference
- report purpose
A weak input is: “research calendar apps.”
A strong input is: “Analyze scheduling and booking tools, especially Cal.com and Calendly, across Reddit, X, and GitHub. Focus on user pain points, feature gaps, and complaints from the last 12 months for competitive analysis.”
How to turn a rough goal into a strong requesthunt prompt
Use a prompt structure like this:
```
Use requesthunt to research [category].
Focus on [competitors or adjacent products].
Prioritize [pain points / feature requests / complaints / unmet needs].
Use [reddit, x, github].
Bias toward [recent feedback / broad history].
Deliver a report with recurring themes, representative quotes, platform distribution, and implications for roadmap or positioning.
```
This improves output quality because it constrains the search space and gives the agent a synthesis target, not just a scraping task.
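If you run repeated research sessions, the template can also be filled programmatically. This is a minimal sketch; the `build_prompt` helper and its parameters are illustrative, not part of the skill.

```python
def build_prompt(category, competitors, focus, platforms, recency):
    """Hypothetical helper: fill the scoping template from structured inputs."""
    return (
        f"Use requesthunt to research {category}.\n"
        f"Focus on {', '.join(competitors)}.\n"
        f"Prioritize {focus}.\n"
        f"Use {', '.join(platforms)}.\n"
        f"Bias toward {recency}.\n"
        "Deliver a report with recurring themes, representative quotes, "
        "platform distribution, and implications for roadmap or positioning."
    )

prompt = build_prompt(
    "scheduling and booking tools",
    ["Cal.com", "Calendly"],
    "pain points and feature gaps",
    ["reddit", "x", "github"],
    "recent feedback",
)
```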
Recommended requesthunt workflow
A practical requesthunt usage pattern is:
- Check API usage
- Define scope tightly
- Trigger a scrape for the main topic
- Search specific sub-problems with expansion
- List requests for inspection
- Cluster themes manually or with the model
- Produce the report with citations or quotes
This sequence reduces the common failure mode where the final report sounds polished but is built on thin data.
Core commands you will actually use
Typical commands from the skill:
```shell
python3 scripts/get_usage.py
python3 scripts/get_topics.py
python3 scripts/scrape_topic.py "ai-coding-assistant" --platforms reddit,x,github
python3 scripts/search_requests.py "code completion" --expand --limit 50
python3 scripts/list_requests.py --limit 20
```
In practice, use a broad topic for scraping, then narrower phrases for search.
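That broad-then-narrow pattern can be scripted. The sketch below only builds the command lines; `build_cmd` is a hypothetical helper, and the flags mirror the commands shown here, so check each script's actual arguments before running.

```python
import sys

TOPIC = "scheduling-tools"                 # one broad topic for the scrape
QUERIES = ["rescheduling", "buffer time"]  # narrower phrases for search passes

def build_cmd(script, *args):
    """Hypothetical helper: build the python3 invocation for one skill script."""
    return [sys.executable, f"scripts/{script}", *args]

commands = [build_cmd("scrape_topic.py", TOPIC, "--platforms", "reddit,x,github")]
commands += [build_cmd("search_requests.py", q, "--expand", "--limit", "50") for q in QUERIES]
# Each entry can be passed to subprocess.run(cmd, check=True) once the key is set.
```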
Best workflow for Competitive Analysis
For competitive analysis with requesthunt, do not search only by competitor name. Combine:
- category term
- competitor names
- job-to-be-done phrases
- pain-point phrases
Example query plan:
- `scheduling-tools`
- `Calendly`
- `Cal.com`
- `round robin scheduling`
- `rescheduling`
- `buffer time`
- `availability rules`
This captures both branded complaints and unmet needs users describe without naming a vendor.
How to choose topics and search terms
Good topics are market-shaped, not feature-shaped. Start with categories such as:
- `ai-coding-assistant`
- `scheduling-tools`
- `project-management-tools`
Then search supporting phrases users actually complain about, like:
- `code completion accuracy`
- `calendar booking conflicts`
- `kanban dependencies`
The included `scripts/get_topics.py` can help you see available topics before inventing your own taxonomy.
What the example files tell you
`examples/calendar-app-research.md` is useful if you want to see the clarification-first conversation flow. `examples/scheduling-tools-research-report.md` is more important for install decisions because it shows the expected endpoint: a report with prioritized pain points, examples, and actionable synthesis.
If that report format is close to what you need, the skill is likely a fit.
Practical quality tips that change output
Three tips matter more than anything else:
- Ask for a specific report purpose: roadmap, market map, or competitor teardown.
- Separate “topic scrape” from “pain-point search” instead of relying on one query.
- Review raw requests before summarizing; otherwise you may overfit to catchy but low-frequency issues.
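The third tip can be made mechanical: tally how often each theme recurs before you summarize, so one catchy quote cannot dominate the report. A minimal sketch, assuming you have already tagged raw requests with theme labels during review (the sample data is invented):

```python
from collections import Counter

# Invented sample: (theme label, raw request text) pairs produced during review.
tagged = [
    ("double-booking", "Calendly keeps double-booking my team"),
    ("buffer time", "no way to add buffer time between calls"),
    ("double-booking", "got double-booked twice this week"),
]

# Count recurrences per theme, then rank by frequency before writing the report.
theme_counts = Counter(theme for theme, _ in tagged)
ranked = theme_counts.most_common()
```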
Common setup and execution blockers
Most adoption issues are simple:
- missing `REQUESTHUNT_API_KEY`
- starting with too broad a topic
- skipping platform selection
- assuming scrape output alone is enough for final synthesis
- not checking remaining API quota first
If you expect high-volume iteration, `scripts/get_usage.py` should be part of your normal preflight.
requesthunt skill FAQ
Is requesthunt better than a normal research prompt?
For source-backed demand research, yes. A normal prompt can help structure thinking, but requesthunt adds a collection layer tied to real feedback sources. That matters when you need evidence, not just plausible hypotheses.
Is the requesthunt skill beginner-friendly?
Moderately. The workflow is simple, but you do need comfort with environment variables and running Python scripts. If command-line setup feels heavy, the skill may still be worth it if you repeatedly do market or product research.
When should I not use requesthunt?
Do not use the requesthunt skill when you need:
- first-party analytics
- statistically representative survey research
- deep financial benchmarking
- private customer support data analysis
It is strongest for public demand signals and qualitative pattern discovery.
Does requesthunt only work for product teams?
No. It also fits founders validating ideas, agencies doing market scans, and analysts comparing pain points across categories. But the clearest fit is still product and competitive research.
Can requesthunt replace customer interviews?
No. It is better seen as a fast external signal layer. Use it to identify themes worth validating, not as your only source of truth.
What platforms does requesthunt cover?
Based on the skill materials, it targets Reddit, X, and GitHub. That mix is useful when you want both broad discussion and product-adjacent request threads.
Is requesthunt useful for one-off projects?
Yes, if the decision is meaningful enough to justify setup. For a one-time lightweight brainstorm, a normal prompt may be faster. For anything where bad prioritization is costly, installing requesthunt is easier to justify.
How to Improve requesthunt skill
Give requesthunt narrower research frames
The fastest way to improve requesthunt results is to reduce ambiguity. “Research AI tools” is weak. “Compare user complaints about AI coding assistants, especially code completion, context retention, and pricing friction” is much stronger.
Separate discovery from synthesis
Do one pass to collect and inspect, then a second pass to synthesize. Users often compress both into one instruction and get generic summaries. Better sequence:
- collect topic data
- inspect requests
- identify themes
- write conclusions
Use competitor and problem terms together
A common failure mode in competitive analysis with requesthunt is over-indexing on brand mentions. Improve recall by pairing vendor names with user-task phrases and frustration phrases.
Ask for evidence thresholds
If you want a more trustworthy report, tell the agent to distinguish:
- repeated themes
- isolated anecdotes
- high-signal quotes
- uncertain findings
That simple instruction sharply improves decision quality.
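One way to make those thresholds concrete is a simple labeling rule applied to theme counts. A sketch under assumed cutoffs; the `classify` helper and its thresholds are illustrative, not part of the skill, and high-signal quotes still need human judgment rather than counting.

```python
def classify(count, min_repeat=3):
    """Label a theme by recurrence; thresholds are illustrative, tune per project."""
    if count >= min_repeat:
        return "repeated theme"
    if count == 2:
        return "uncertain finding"
    return "isolated anecdote"
```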
Review the scripts before extending the workflow
If you want better requesthunt usage, inspect the script arguments rather than guessing from the prose docs. The script files are the best source for supported parameters and expected behavior.
Iterate after the first report
Treat the first report as a map, not the verdict. Then refine:
- add missing competitors
- rerun with tighter subtopics
- switch platform emphasis
- ask for only recent signals
- dig into one high-priority complaint cluster
Improve output formatting for stakeholders
Ask the agent to produce sections that decision-makers can act on:
- top pain points
- evidence table
- representative quotes
- implications for roadmap
- opportunities by competitor weakness
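Those sections can be pre-generated as a skeleton so every report lands in the same shape for stakeholders. A minimal sketch; the markdown skeleton format is an assumption, not prescribed by the skill.

```python
SECTIONS = [
    "Top pain points",
    "Evidence table",
    "Representative quotes",
    "Implications for roadmap",
    "Opportunities by competitor weakness",
]

def report_skeleton(title):
    """Hypothetical helper: emit a markdown report outline to fill in."""
    lines = [f"# {title}", ""]
    for section in SECTIONS:
        lines += [f"## {section}", "", "_TODO_", ""]
    return "\n".join(lines)
```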
This turns requesthunt output into something usable in planning, not just interesting reading.
Watch for false confidence
The main quality risk with requesthunt is not lack of data, but overconfident synthesis from partial data. If the raw evidence looks thin or skewed to one platform, say so explicitly in the prompt and in the final report.
