identify-assumptions-existing
by phuryn
identify-assumptions-existing helps you stress-test a feature idea in an existing product by surfacing risky assumptions across Value, Usability, Viability, and Feasibility. It uses PM, designer, and engineer perspectives plus a devil’s advocate lens for Strategic Planning and pre-build risk review.
This skill scores 78/100, which means it is a solid listing candidate for directory users who want structured assumption-risk analysis for an existing product. It has a clear use case, readable trigger language, and a defined workflow that should help agents execute it with less guesswork than a generic prompt, though it still lacks supporting assets and deeper operational examples.
Strengths:
- Clear trigger and scope for stress-testing feature ideas in an existing product
- Concrete workflow across Value, Usability, Viability, and Feasibility with confidence and test suggestions
- No placeholder markers; body content is substantial and specific enough for agent use
Limitations:
- No support files, references, or examples, so users must rely mostly on the SKILL.md text
- No bundled examples or auxiliary resources beyond the install command, which limits onboarding and edge-case guidance
Overview of identify-assumptions-existing skill
identify-assumptions-existing is a product-discovery skill for stress-testing a feature idea in an existing product before you commit to design or build work. It helps you surface the risky assumptions behind a proposal across Value, Usability, Viability, and Feasibility, using a built-in devil’s advocate lens.
This skill is best for product managers, designers, engineers, and founders who need a fast assumption map, not a polished strategy deck. If you are deciding whether a feature is worth pursuing, or trying to find the hidden failure points in an already promising idea, the identify-assumptions-existing skill is a strong fit.
The main value is decision quality: it pushes the conversation from “sounds good” to “what has to be true for this to work?” That makes it especially useful for Strategic Planning, roadmap triage, and pre-research risk review.
What identify-assumptions-existing is for
Use the identify-assumptions-existing skill when you already have a feature idea and need to pressure-test it against real-world constraints. It is designed to reveal where the idea may break in the market, in the user experience, in the business, or in implementation.
Who should install it
Install identify-assumptions-existing if you regularly turn rough product ideas into testable assumptions. It is most useful for teams that want a repeatable way to challenge feature proposals before they become tickets, specs, or experiments.
What makes it different
Unlike a generic brainstorming prompt, identify-assumptions-existing asks the model to think from three roles: PM, designer, and engineer. That cross-functional framing helps you see blind spots faster and produces more actionable tests for each assumption.
How to Use identify-assumptions-existing skill
Install and trigger it
Install identify-assumptions-existing with the command from the source repo:
npx skills add phuryn/pm-skills --skill identify-assumptions-existing
Then invoke the skill with a feature idea for an existing product. The more concrete your input, the better the assumption list will be.
Give the skill the right input
The identify-assumptions-existing usage pattern works best when you include:
- product or feature name
- target user segment
- desired outcome
- the feature idea itself
- any constraints, such as platform, timeline, compliance, or dependencies
A weak prompt is: “Analyze my feature.”
A stronger prompt is: “Stress-test a dashboard export feature for SMB finance admins in our B2B app. Goal: reduce support tickets. Constraints: web only, two engineers, no new data warehouse.”
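If you build these prompts often, the recommended input fields can be assembled programmatically so nothing gets dropped. A minimal sketch, with the function and field names being our own assumptions rather than part of the skill:

```python
def build_stress_test_prompt(product, segment, outcome, idea, constraints):
    """Assemble the context fields this skill works best with into one prompt."""
    lines = [
        f"Stress-test this feature idea for {product}.",
        f"Target users: {segment}.",
        f"Desired outcome: {outcome}.",
        f"Feature idea: {idea}.",
        f"Constraints: {', '.join(constraints)}.",
        "Group risky assumptions by Value, Usability, Viability, and Feasibility.",
    ]
    return "\n".join(lines)

prompt = build_stress_test_prompt(
    product="our B2B finance app",
    segment="SMB finance admins",
    outcome="reduce support tickets",
    idea="dashboard export",
    constraints=["web only", "two engineers", "no new data warehouse"],
)
print(prompt)
```

The point is less the code than the checklist it enforces: every field above maps to one of the bullets in the list, so a missing argument is a missing piece of context.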
Read the source in the right order
For an identify-assumptions-existing guide, start with SKILL.md first. If the repository expands later, inspect README.md, AGENTS.md, metadata.json, and any rules/, resources/, references/, or scripts/ folders for extra context. In this repo, SKILL.md is the main source of truth.
Workflow that produces better output
A practical identify-assumptions-existing usage workflow is:
- Describe the product context and feature idea.
- Ask for assumptions grouped by Value, Usability, Viability, and Feasibility.
- Request confidence levels and a test for each assumption.
- Use the output to choose what to validate first.
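The grouped output this workflow requests can be captured in a simple structure for triage. This is a hypothetical sketch, not the skill's actual output format; the field names and the confidence-based ranking heuristic are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    # Hypothetical representation of one assumption from the skill's output.
    category: str      # "Value", "Usability", "Viability", or "Feasibility"
    statement: str     # what has to be true for the feature to work
    confidence: str    # "high", "medium", or "low"
    test: str          # lightweight way to validate it

def riskiest_first(assumptions):
    """Sort so low-confidence assumptions surface first for validation."""
    order = {"low": 0, "medium": 1, "high": 2}
    return sorted(assumptions, key=lambda a: order[a.confidence])

items = [
    Assumption("Value", "SMB admins export dashboards weekly", "medium",
               "review usage logs for export-adjacent actions"),
    Assumption("Feasibility", "Existing API can stream large exports", "low",
               "spike a 100k-row export against staging"),
]

for a in riskiest_first(items):
    print(f"[{a.confidence}] {a.category}: {a.statement} -> test: {a.test}")
```

Sorting by confidence operationalizes the last step of the workflow: the least certain assumptions are the ones to validate first.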
If you are using it for Strategic Planning, include market segment, business goal, and launch constraints so the skill can separate strategic risk from UX or engineering risk.
identify-assumptions-existing skill FAQ
Is identify-assumptions-existing only for existing products?
Yes, that is the intended fit. The skill is tuned to stress-test a feature idea in an existing product, not to do open-ended concept ideation from scratch.
How is this different from a normal prompt?
A normal prompt may list pros and cons. The identify-assumptions-existing skill pushes deeper by organizing risk across four categories and asking what could go wrong, how confident you are, and how to test it. That makes the output easier to act on.
Is identify-assumptions-existing beginner friendly?
Yes, if you can describe the product, audience, and feature in plain language. You do not need a formal assumption-mapping framework to use it well, but you do need enough context for the model to judge tradeoffs realistically.
When should I not use it?
Do not use identify-assumptions-existing if you need detailed UX copy, implementation code, or a final launch plan. It is a risk-identification skill, so it works best upstream of build decisions.
How to Improve identify-assumptions-existing skill
Provide sharper context
The biggest quality lever for identify-assumptions-existing is specificity about the user and the business goal. If you say only “add AI search,” the skill has to guess too much. If you say “add AI search for support agents to reduce time-to-answer on repeated tickets,” the assumptions become much more useful.
Ask for tests, not just concerns
The source instructs the skill to include what could go wrong and how to test it, so do not stop at risks. Ask for lightweight validation ideas such as interviews, prototype tests, log review, or an internal dogfood check. That turns the output into a planning artifact instead of a critique.
Separate product risk from delivery risk
The most useful identify-assumptions-existing skill outputs usually distinguish between user value, adoption friction, business constraints, and technical feasibility. If your prompt mixes all of that into one vague request, the answer will be less decision-ready. Give constraints explicitly so the skill can rank the most dangerous assumptions first.
Iterate after the first pass
Use the first result to narrow the scope, then rerun the skill with a more focused feature slice. For example, if the first pass shows usability and integration risk, ask again for only the onboarding flow or only the data-sync dependency. That is often the fastest way to sharpen a Strategic Planning discussion.
