
security-threat-model

by openai

Repository-grounded skill for AppSec threat modeling. It maps trust boundaries, assets, attacker goals, abuse paths, and mitigations into a concise Markdown threat model. Use it when you need to threat model a specific repo or path, not when you want a generic architecture review or code check.

Added: May 8, 2026
Category: Threat Modeling
Install Command
npx skills add openai/skills --skill security-threat-model
Curation Score

This skill scores 88/100, which means it is a solid listing candidate for directory users who want repo-grounded AppSec threat modeling. The repository gives a clear trigger, a concrete output contract, and enough workflow guidance to reduce guesswork versus a generic prompt, though users should still expect some manual assembly from repo evidence and prompts.

Strengths
  • Explicit trigger rules narrow when the skill should be used, reducing misuse for non-security tasks.
  • Strong workflow guidance for repo-grounded threat modeling, including scope extraction, trust boundaries, attacker goals, and prioritized abuse paths.
  • Useful references/prompt templates and a default agent prompt provide practical leverage and a clearer execution path.
Cautions
  • No install command in SKILL.md, so adoption may require manual setup or extra wiring.
  • The repo depends on external repository summaries and prompt-template usage, so execution can still require user-provided context and some synthesis.
Overview

Overview of security-threat-model skill

What security-threat-model does

The security-threat-model skill turns a repository or path into a grounded AppSec threat model, focused on trust boundaries, assets, attacker goals, abuse paths, and mitigations. It is the right tool when you need a decision-ready security view of a real codebase, not a generic checklist.

Who should use it

Use this skill if you are shipping software, reviewing a risky feature, or preparing an AppSec review and need a concise Markdown output you can share with engineers. It fits best when you already know the repo, service, or subsystem in scope and want the model tied to actual implementation details.

When it is a good fit

This skill is strongest when the user asks to threat model a specific repository, folder, service, CLI, or workflow. It is designed to surface realistic attacker paths, rank impact, and make assumptions explicit so you can separate confirmed architecture from inferred behavior.

Where it is not the right tool

Do not use it for general architecture summaries, routine code review, or non-security design work. If the request does not involve abuse cases, attack surface, or AppSec risk, installing it will add more process than value.

How to Use security-threat-model skill

Install and inspect the repo

Run npx skills add openai/skills --skill security-threat-model to install the skill. After installation, read SKILL.md first, then open references/prompt-template.md and references/security-controls-and-assets.md to understand the expected output shape and the asset/control vocabulary the skill uses.

Give it the right input

Strong prompts name the repo, the in-scope path, the runtime shape, and any deployment facts you already know. For example: “Threat model services/api in this monorepo; it is internet-facing, uses JWT auth, stores user uploads, and calls a payment provider.” That is better than “review this code” because the skill needs scope, exposure, and trust assumptions to build a useful model.

How to invoke it well

The usage pattern is to ask for a repository-grounded threat model with evidence-backed claims, prioritized threats, and explicit open questions. If you have a repo summary, include it; if not, ask the skill to derive one and mark unknowns. A good prompt also tells it whether to emphasize runtime behavior, APIs, data handling, or supply-chain risk.
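As an illustration, a well-scoped invocation might look like the sketch below. All service names, paths, and deployment details here are hypothetical; substitute your own:

```text
Threat model the services/api directory of this monorepo using the
security-threat-model skill.

Scope: services/api only; ignore tests and build tooling.
Exposure: internet-facing REST API behind a load balancer.
Auth: JWT bearer tokens issued by services/auth.
Data: user uploads stored in object storage; outbound calls to a
third-party payment provider.

Produce a repository-grounded threat model with evidence-backed claims,
threats prioritized by likelihood and impact, and an explicit list of
assumptions and open questions.
```

The point is not the exact wording but that scope, exposure, auth, and data handling are all stated up front, so the model does not have to guess them.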

Best workflow and files to read

Start with SKILL.md to understand the workflow, then inspect references/prompt-template.md for the exact threat-model contract and references/security-controls-and-assets.md for the asset/control checklist. If the repo includes agents/openai.yaml or other support files, use them to align the output with the project’s preferred interface and wording.

security-threat-model skill FAQ

Is this only for AppSec teams?

No. It is useful for developers, platform engineers, security engineers, and reviewers who need a practical threat model before launch or during a design change. The output is still technical, but it is written to help implementation decisions.

How is this different from a normal prompt?

A normal prompt often produces a generic list of risks. The security-threat-model skill expects repo evidence, separates runtime code from tests and tooling, and pushes for concrete abuse paths instead of broad vulnerability trivia. That makes it the better choice when the goal is a defensible review rather than a brainstorming exercise.

Can beginners use it?

Yes, if they can describe the system and share a repo path or summary. Beginners get the best results when they name what the system does, who uses it, what data it handles, and whether it is exposed to the internet or other tenants.

When should I not install it?

Skip it if you only need a high-level product explanation, a code walkthrough, or a quick architecture diagram. It is also a poor fit if you cannot provide enough repository context for grounded security analysis.

How to Improve security-threat-model skill

Give stronger scope and evidence

The biggest quality jump comes from precise scope: exact paths, entrypoints, data stores, external services, and deployment context. If you can provide a repo summary, architecture notes, or a list of user-facing endpoints, the security-threat-model skill can anchor threats to real components instead of guessing.

State attacker goals and boundaries

Tell it what you most want protected: customer data, auth tokens, internal admin actions, availability, or tenant isolation. Also call out trust boundaries such as browser-to-API, worker-to-queue, or service-to-third-party, because those boundaries drive the most useful abuse-path analysis.

Ask for prioritized, actionable output

If you want better results, request prioritization by likelihood and impact, plus mitigations that map to specific boundaries or components. That helps the skill produce a threat model engineers can act on, rather than a list of abstract concerns.
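To make that concrete, the requested output might be shaped like the Markdown sketch below. The actual contract is defined in references/prompt-template.md; this is only an illustrative shape, and every component name in it is hypothetical:

```markdown
# Threat Model: services/api (example)

## Scope and Assumptions
- In scope: services/api runtime code only
- Assumption (unverified): JWT validation happens in auth middleware

## Trust Boundaries
- Browser -> API (internet-facing)
- API -> payment provider (third party)

## Prioritized Threats
| # | Abuse path                                   | Likelihood | Impact | Mitigation                              |
|---|----------------------------------------------|------------|--------|-----------------------------------------|
| 1 | Forged JWT accepted (audience check missing) | Medium     | High   | Enforce aud/iss validation at the edge  |
| 2 | Malicious upload served to other users       | Medium     | Medium | Scan and content-type-pin uploads       |

## Open Questions
- Is upload scanning enforced before the object-store write?
```

Asking for this shape up front keeps the output prioritized and boundary-mapped instead of a flat list of concerns.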

Iterate with missing details

After the first pass, feed back the unknowns that matter most, such as auth design, upload handling, background jobs, or multi-tenant assumptions. Iterating on those gaps usually improves the second output more than asking for more threats, because the model becomes less speculative and more implementation-ready.

Ratings & Reviews

No ratings yet