problem-framing-canvas
by deanpeters

The problem-framing-canvas skill guides teams through MITRE's Problem Framing Canvas to clarify messy requests before jumping to solutions. Use it for decision support when stakeholders disagree, assumptions are unclear, or you need a sharper problem statement and "How Might We" question.
This skill scores 84/100, which means it is a solid directory listing candidate for users who want a structured problem-framing workflow rather than a generic brainstorming prompt. The repository gives enough operational detail for an agent to trigger the skill correctly and follow a recognizable process, though it is lighter on ecosystem support and install-oriented metadata than a more mature package.
- Strong triggerability: the frontmatter explicitly says to use it when a clearer problem statement is needed before solutions, with concrete scenarios like onboarding drop-off and stakeholder requests.
- Operationally clear workflow: the skill body defines three phases (Look Inward, Look Outward, Reframe), and the included template and sample show the expected outputs and question flow.
- Good install decision value: the long body, valid frontmatter, and example file provide enough substance for users to judge fit without guessing how the skill should be used.
- No install command or supporting metadata files are present, so adoption may require manual setup and the repository gives less packaging guidance than directory users may expect.
- The evidence shows a content-rich methodology, but not scripts or reference sources, so trust depends more on the authored workflow than on external validation or automation.
Overview of problem-framing-canvas skill
What problem-framing-canvas does
The problem-framing-canvas skill guides teams through MITRE’s Problem Framing Canvas so they can define the problem before they rush into solutions. It is most useful when the ask is vague, stakeholders disagree, or you suspect the team is optimizing for the wrong outcome.
Who should use it
Use this problem-framing-canvas skill if you are a PM, designer, researcher, or facilitator who needs a structured way to turn a messy request into a shared problem statement. It is especially helpful for decision support, when a team needs evidence for what to do next, not just more ideas.
Why it is different
The skill is interactive and opinionated in a useful way: it walks you through three phases, Look Inward, Look Outward, and Reframe. That makes it better than a generic prompt when you need bias checks, stakeholder coverage, and a final "How Might We" framing that can feed workshops, roadmaps, or discovery plans.
How to Use problem-framing-canvas skill
Install and locate the core files
To install, use the repo path shown in the skill directory and start with skills/problem-framing-canvas/SKILL.md. Then read template.md for the output shape, and examples/sample.md to see what a strong completed canvas looks like.
Give the skill a real problem, not a topic
The skill works best when you provide a concrete symptom, context, and decision pressure. Strong input sounds like: "Reduce onboarding drop-off for first-time admins in enterprise SaaS, where support tickets show confusion at step 2." Weak input is just: "Improve onboarding."
Use the three-phase workflow
Good input helps the skill move through its three phases:
- Look Inward: what your team assumes, prefers, or may be overlooking
- Look Outward: who feels the problem, when it happens, and who is excluded
- Reframe: a sharper problem statement plus a usable HMW question
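The expected output shape lives in template.md; a minimal sketch of what a completed three-phase canvas might look like (section names and content are illustrative, not the template's exact headings) could be:

```markdown
## Look Inward
- Assumptions: we believe drop-off is caused by the complexity of step 2.
- Biases to check: the team prefers a UI fix over a process fix.

## Look Outward
- Who feels it: first-time enterprise admins during initial setup.
- When it happens: in the first session, at configuration step 2.
- Who is excluded: admins migrating from a competitor's tool.

## Reframe
- Problem statement: first-time admins abandon setup at step 2 because
  the required inputs are unclear, driving support tickets and churn.
- How Might We: how might we help first-time admins complete step 2
  without filing a support ticket?
```

Roughing out a skeleton like this by hand first gives the skill a concrete baseline to sharpen, rather than a blank page to fill generically.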
Prompt format that tends to work
Ask for the canvas output in one pass and include any constraints that matter, such as audience, business goal, or known evidence. For example: “Use problem-framing-canvas to frame our churn issue for new SMB customers, assuming we only have support logs and product analytics.” That gives the skill enough signal to stay grounded instead of generic.
problem-framing-canvas skill FAQ
Is this better than a normal prompt?
Usually yes, if your problem is ambiguous. A plain prompt may generate ideas too early, while problem-framing-canvas forces problem definition first, which is the main value when teams are stuck debating symptoms.
When should I not use it?
Do not use problem-framing-canvas if the problem is already well defined and you need execution planning, copywriting, or solution ideation. It is a framing tool, not a delivery plan or prioritization framework.
Is it beginner-friendly?
Yes. The canvas structure is simple, but the quality depends on your inputs. Beginners get the most value when they bring one concrete problem, a target user, and at least one known constraint or piece of evidence.
How does it fit with other product workflows?
It fits best before discovery synthesis, roadmap discussion, or solution brainstorming. Use it when you need shared language for the problem, then move into research questions, experiment design, or ideation after the frame is clear.
How to Improve problem-framing-canvas skill
Provide evidence, not just opinions
The biggest quality jump comes from concrete inputs: support themes, funnel drop-off, customer quotes, lost deal reasons, or observed behavior. problem-framing-canvas gets sharper when it can distinguish symptoms from root causes.
Name the boundary conditions
If the team has constraints, say them up front: target segment, timeline, platform, legal limits, or business goal. Those details help the skill avoid broad "solve everything" framing and produce an HMW question you can actually use.
Watch for the common failure mode
The most common miss is a solution-shaped problem statement, such as “We need a better dashboard” or “We need AI automation.” Improve the problem-framing-canvas skill output by restating the user pain and context first, then asking what is truly blocked or misunderstood.
Iterate with one tighter follow-up
After the first canvas, push for a narrower version if the statement is still broad. A useful follow-up is: “Reframe this for first-time users only,” or “Rewrite the problem using only evidence we can verify this quarter.” That refinement usually improves decision quality more than asking for more ideas.
