
customer-research

by coreyhaines31

customer-research helps agents run structured customer research with two modes: analyze existing assets or find public-source signal. Use it to extract themes, quotes, JTBD, pain points, triggers, and evidence for UX Research, product, and messaging decisions.

Stars: 17.3K
Favorites: 0
Comments: 0
Added: Mar 29, 2026
Category: UX Research
Install Command
npx skills add https://github.com/coreyhaines31/marketingskills --skill customer-research
Curation Score

This skill scores 82/100, which means it is a solid directory listing candidate: agents get strong trigger coverage, a real research workflow, and enough detail for users to judge fit, though execution still depends mostly on written guidance rather than tools or packaged assets.

Strengths
  • Very strong triggerability: the description names many concrete entry phrases such as ICP research, transcript analysis, VOC, JTBD, review mining, and churn/conversion research.
  • Operationally useful structure: it distinguishes two modes (analyze existing assets vs. gather new research), tells the agent to check product-marketing context first, and evals confirm expected intake and analysis steps.
  • Good practical leverage for online research: the reference guide includes source-specific playbooks like Reddit discovery methods, search operators, and what signal to extract from posts.
Cautions
  • No scripts or structured templates are bundled with the skill itself, so adoption relies on reading and following the markdown workflow manually.
  • Only one reference file is bundled, which limits progressive disclosure for other sources and edge cases beyond the examples shown.
Overview


What the customer-research skill does

The customer-research skill helps an agent run structured customer research instead of giving generic brainstorming. It supports two practical jobs: analyzing research you already have, and finding fresh customer signal from public sources when you do not. That makes it useful for UX research, product marketing, positioning, messaging, ICP work, and voice-of-customer synthesis.

Who should install customer-research

This customer-research skill is best for teams that need evidence before making product, UX, or messaging decisions. Good fits include UX researchers, founders, PMs, product marketers, growth teams, and agencies working from interviews, surveys, tickets, reviews, or community posts.

Best-fit use cases

Use customer-research when you need to:

  • analyze interview transcripts or survey responses
  • turn messy customer inputs into themes and quotes
  • find jobs to be done, pain points, triggers, desired outcomes, and buying language
  • do early-stage ICP or market understanding without direct interviews yet
  • mine Reddit, G2, Capterra, forums, and niche communities for recurring patterns

Why this is better than a generic prompt

The main differentiator is workflow discipline. The skill pushes the agent to check for existing product-marketing context first, clarify the research goal before analysis, separate “analyze existing assets” from “go find research,” and extract specific fields rather than dumping a vague summary. The included evals also show what good behavior should look like, which reduces guesswork.

What matters before you adopt it

This is not a data collection tool or scraping package. The value is in research framing, source selection, extraction structure, and synthesis quality. If you want automated pipelines, dashboards, or statistical survey tooling, this customer-research guide is not the right fit on its own. If you want better research prompts and more consistent outputs from an AI agent, it is a strong candidate.

How to Use customer-research skill

Install context for customer-research

Install from the repository with:

npx skills add https://github.com/coreyhaines31/marketingskills --skill customer-research

Then open the skill folder and read:

  • skills/customer-research/SKILL.md
  • skills/customer-research/evals/evals.json
  • skills/customer-research/references/source-guides.md

If you only read one file first, start with SKILL.md. If you want to understand output quality fast, read evals/evals.json next.

Start by checking product context

A practical requirement in the skill is to check for .agents/product-marketing-context.md or .claude/product-marketing-context.md before asking questions. This matters because the quality of customer-research usage improves a lot when the agent already knows the product, buyer, category, and constraints.

If that file exists, point the agent to it or paste the equivalent context into your prompt.

Know the two operating modes

The customer-research skill works best when you explicitly choose one mode:

  1. Analyze existing assets
    Use this for transcripts, tickets, surveys, reviews, call notes, and support logs.

  2. Go find research
    Use this when you lack direct customer material and need public-source research from Reddit, review sites, forums, or competitor-adjacent discussions.

Stating the mode upfront prevents the agent from mixing synthesis with source discovery.

What inputs produce good output

For strong customer-research usage, give the agent:

  • the product or problem category
  • target user or ICP
  • research goal
  • available sources
  • desired deliverable
  • constraints such as time, geography, market, or segment

A weak prompt is:

  • “Help me research customers.”

A stronger prompt is:

  • “Use the customer-research skill in analyze-existing-assets mode. I have 18 interview transcripts from heads of support at B2B SaaS companies. Goal: identify recurring onboarding pain points, switching triggers, and language we can use on our website. Deliverable: prioritized themes, representative quotes, and a short implications section for UX Research and messaging.”

Turn a rough goal into a complete prompt

A reliable prompt pattern for this customer-research skill is:

  • context: what product or audience is involved
  • objective: what decision the research should inform
  • mode: analyze assets or find research
  • source list: what data exists
  • extraction fields: what to pull out
  • output format: what final artifact you want

Example:

“Use the customer-research skill for UX Research. Product: user onboarding software for mid-market SaaS teams. Audience: onboarding managers and customer success leaders. Mode: analyze existing assets. Sources: 12 transcripts, 40 churn survey responses, 85 support tickets. Extract: jobs to be done, pain points, trigger events, desired outcomes, alternatives considered, exact language, and high-signal quotes. Output: clustered themes with frequency and intensity, then 5 UX opportunities.”

What the skill tends to extract well

Based on the skill and evals, the agent should look for:

  • jobs to be done
  • pain points and friction
  • trigger events
  • desired outcomes
  • language customers naturally use
  • alternatives or competitors considered
  • theme clusters
  • frequency and intensity
  • money quotes for evidence

That structure is useful because it bridges raw research and downstream decisions like UX changes, positioning, and messaging.
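If you want the agent to return these fields in a machine-checkable shape, you can hand it a record structure like the one below. This is an illustrative sketch of the field set listed above, not something the skill ships:

```python
from dataclasses import dataclass, field

@dataclass
class Extraction:
    """One analyzed asset (transcript, review, ticket) in the skill's field set."""
    source: str                                              # e.g. "interview-07"
    jobs_to_be_done: list[str] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)
    trigger_events: list[str] = field(default_factory=list)
    desired_outcomes: list[str] = field(default_factory=list)
    customer_language: list[str] = field(default_factory=list)
    alternatives_considered: list[str] = field(default_factory=list)
    money_quotes: list[str] = field(default_factory=list)

def cluster_pain_points(extractions: list[Extraction]) -> dict[str, int]:
    """Count how often each pain point recurs across assets,
    a rough frequency signal for theme clustering."""
    counts: dict[str, int] = {}
    for ex in extractions:
        for pain in ex.pain_points:
            counts[pain] = counts.get(pain, 0) + 1
    return counts
```

Even if you never run code like this, asking the agent to fill these exact fields per asset is what makes frequency counts and theme clusters trustworthy later.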

How to use customer-research for UX Research

For UX Research, avoid asking only for “insights.” Ask for:

  • recurring task breakdowns
  • friction points in current workflows
  • moments of confusion or delay
  • unmet expectations
  • workaround behaviors
  • feature-selection criteria
  • evidence-backed opportunity areas

This keeps the customer-research skill grounded in user behavior rather than drifting into marketing-only summaries.

How to use public-source research well

In “go find research” mode, the repository’s reference guide points to practical sources like Reddit, G2, Capterra, forums, and niche communities. The strongest approach is not “search our brand name,” but “search where the ICP discusses the problem.”

Useful source types include:

  • problem-focused Reddit threads
  • competitor comparison posts
  • review-site complaints and praise
  • forum posts about workflows and workarounds
  • “what tool do you use for X?” discussions

Repository files worth reading before first real use

Read these in order:

  1. SKILL.md for the workflow
  2. evals/evals.json for expected prompting behavior and output shape
  3. references/source-guides.md for source-specific tactics, especially Reddit research

The evals are especially useful because they reveal non-obvious expectations, like asking for the user’s goal before analysis and suggesting frequency plus intensity scoring.

Suggested workflow in practice

A good customer-research guide for real use looks like this:

  1. provide product and audience context
  2. state the research decision you need to make
  3. choose mode 1 or mode 2
  4. give the available materials or target sources
  5. ask the agent to extract structured fields
  6. review the first synthesis for missing segments
  7. run a second pass focused on contradictions, edge cases, and best quotes
  8. turn the findings into a specific UX, product, or messaging artifact

Practical output formats to ask for

Choose a deliverable that matches your next step:

  • theme table with quotes
  • JTBD summary
  • persona inputs grounded in evidence
  • comparison of segments
  • top pain points ranked by frequency and intensity
  • source map of communities and review sites
  • implication memo for product or UX Research

Specific deliverables improve the first run of customer-research because they make the initial output immediately actionable.

customer-research skill FAQ

Is customer-research good for beginners

Yes, if you already know what decision the research should support. The skill gives more structure than a normal prompt, but beginners still need to supply product context and choose a useful deliverable. Without that, the output can stay broad.

When should I use customer-research instead of a normal prompt

Use the customer-research skill when you want repeatable extraction and synthesis, especially across many assets. A generic prompt may summarize content, but this skill is more likely to ask about goals, use a research frame, and produce clusters, quotes, and evidence instead of loose observations.

Is this only for marketing teams

No. Despite living in a marketing skills repository, customer-research is also useful for UX Research, product discovery, support analysis, and early market understanding. The underlying methods map well to any team that needs to understand user pain, triggers, desired outcomes, and vocabulary.

What are the boundaries of the customer-research skill

It does not replace primary research operations, participant recruitment, analytics instrumentation, or formal quant methods. It is strongest at framing, source discovery, qualitative analysis, and synthesis.

Can customer-research work without interview transcripts

Yes. That is one of its better adoption points. The skill explicitly supports a mode for finding research from online sources, which is helpful for early-stage teams or teams entering a new segment without direct access to customers yet.

When is customer-research a poor fit

Skip it when:

  • you need statistically valid survey analysis
  • you need legal or compliance review of research methods
  • you need automated scraping or data pipelines
  • you only want ad hoc copy ideas without evidence gathering

Does the repository include source-specific guidance

Yes. references/source-guides.md includes concrete public-source research guidance, especially around Reddit discovery, search patterns, and what kinds of posts reveal pain, alternatives, and switching triggers.

How to Improve customer-research skill

Give the skill a decision, not just a topic

The biggest quality upgrade is telling the agent what decision the research should inform. “Research our customers” is weak. “Find onboarding friction we should prioritize in the next UX sprint” is much stronger. Better decisions lead to better extraction and synthesis.

Provide stronger source framing

Tell the agent what each source represents:

  • win interviews
  • churn interviews
  • support tickets from new users
  • G2 reviews from SMB buyers
  • Reddit posts from practitioners, not buyers

This improves clustering because the agent can separate acquisition, onboarding, retention, and switching signals.

Ask for evidence-backed segmentation

A common failure mode in customer-research usage is collapsing different users into one blended persona. Improve results by asking for:

  • segment differences
  • contradictions between sources
  • minority but high-severity pain points
  • differences between buyers, admins, and daily users

Require frequency and intensity, not just themes

Themes alone are often too soft. Ask the agent to score or at least distinguish:

  • common but low-severity issues
  • less common but high-intensity issues
  • one-off anecdotes that should not drive decisions

This is one of the clearest practical patterns surfaced by the evals.
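The frequency-versus-intensity distinction can be made concrete with a small triage rule. The thresholds and labels below are illustrative assumptions, not values from the skill or its evals:

```python
def triage_theme(frequency: int, intensity: int,
                 freq_threshold: int = 5, intensity_threshold: int = 4) -> str:
    """Classify a theme by how often it appears (count across assets)
    and how severe it sounds (a 1-5 rating). Thresholds are illustrative;
    tune them to your asset count."""
    common = frequency >= freq_threshold
    severe = intensity >= intensity_threshold
    if common and severe:
        return "prioritize"    # frequent and painful
    if severe:
        return "investigate"   # rare but high-intensity
    if common:
        return "monitor"       # common but low-severity
    return "anecdote"          # one-off; should not drive decisions
```

Asking the agent to apply a rule like this, even informally, prevents a single vivid quote from outweighing a pattern seen across twenty assets.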

Ask for exact language and money quotes

If you want outputs that are useful beyond a research memo, request verbatim phrases and short supporting quotes. This makes the customer-research skill more valuable for UX Research synthesis, stakeholder readouts, and later messaging work.

Improve public-source research with better search seeds

For mode 2, do not start only with your product category. Seed searches with:

  • problem statements
  • job titles
  • competitor names
  • “alternative,” “switch,” “recommend,” and “frustrated with” language

The repository reference guide shows why this works: problem-led searches surface real workflow pain faster than brand-led searches.
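Seeding can be mechanical. A sketch of combining problem statements and competitor names with the switching-intent modifiers listed above (the function and example inputs are hypothetical):

```python
from itertools import product

def build_search_seeds(problems: list[str], competitors: list[str]) -> list[str]:
    """Combine problem statements and competitor names with switching-intent
    modifiers to produce problem-led, not brand-led, search queries."""
    modifiers = ["alternative", "switch", "recommend", "frustrated with"]
    seeds = [f"{p} {m}" for p, m in product(problems, modifiers)]
    # Competitor names only pair with the strongest switching signals.
    seeds += [f"{c} {m}" for c, m in product(competitors, ["alternative", "switch"])]
    return seeds
```

Feeding seeds like these into Reddit, G2, or forum search is closer to what the reference guide recommends than searching your own brand name.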

Iterate after the first pass

The first output should usually be a map, not the final answer. Then ask follow-ups like:

  • “Which themes are strongest for new users versus experienced users?”
  • “What contradictions appear between interviews and reviews?”
  • “Pull 10 quotes that show urgency, not just dissatisfaction.”
  • “Which findings are most actionable for UX Research in the onboarding flow?”

Watch for common failure modes

The customer-research skill will underperform when:

  • source quality is mixed but unlabeled
  • the prompt asks for too many outcomes at once
  • the audience segment is vague
  • public sources are used as if they were direct customer interviews
  • synthesis is requested before enough raw evidence is supplied

Build a reusable prompt template

If you expect repeated customer-research usage, create a standard template with:

  • product summary
  • ICP and non-ICP
  • research question
  • mode
  • source inventory
  • extraction fields
  • output format
  • constraints
  • what decision the result should support

That turns the skill from a one-off prompt helper into a repeatable research workflow.
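The template above can be frozen into a small helper so every run supplies the same fields. This is a sketch under my own naming, not part of the skill:

```python
def build_research_prompt(
    product: str,
    icp: str,
    research_question: str,
    mode: str,                 # "analyze existing assets" or "go find research"
    sources: list[str],
    extraction_fields: list[str],
    output_format: str,
    constraints: str,
    decision: str,
) -> str:
    """Assemble a complete customer-research prompt from the standard fields."""
    return (
        f"Use the customer-research skill in {mode} mode.\n"
        f"Product: {product}\n"
        f"ICP: {icp}\n"
        f"Research question: {research_question}\n"
        f"Sources: {', '.join(sources)}\n"
        f"Extract: {', '.join(extraction_fields)}\n"
        f"Output: {output_format}\n"
        f"Constraints: {constraints}\n"
        f"Decision this should support: {decision}\n"
    )
```

Keeping the fields explicit like this means a missing input (no sources, no decision) is visible before the run, not after a vague output.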

Ratings & Reviews

No ratings yet