
pua-en is a GitHub skill for escalating stalled AI work with structured troubleshooting, stronger initiative, and clear trigger rules. Use it after repeated failures, passive investigation, or debugging dead ends. Review SKILL.md, install from tanweai/pua, and apply it to code, config, deployment, API, and research tasks when normal prompting is not enough.

Added: Mar 31, 2026
Category: Debugging
Install Command
npx skills add tanweai/pua --skill pua-en
Curation Score

This skill scores 68/100, which means it is listable for directory users as a real, reusable prompting aid, but it is better suited as a behavioral escalation framework than as a tightly operational skill. The repository gives strong trigger guidance and substantial written content, so an agent can recognize when to invoke it after repeated failure or passivity. However, install-decision clarity is limited by the lack of support files, executable workflow artifacts, or a concise quick-start that shows exactly how the skill changes behavior in practice.

Strengths
  • Very clear trigger conditions in frontmatter, especially for repeated failure, passivity, and user frustration signals.
  • Substantial non-placeholder documentation with structured sections and code fences, indicating real workflow intent rather than a stub.
  • Broad applicability across coding, debugging, research, writing, deployment, and API work makes it a reusable recovery/escalation skill.
Cautions
  • Mostly rhetoric and process guidance; no scripts, resources, rules files, or install command to reduce execution guesswork.
  • Broad 'all task types' positioning may make invocation feel subjective without concrete examples of before/after behavior.
Overview

Overview of pua-en skill

What pua-en is for

The pua-en skill is a pressure-and-process prompt for moments when an AI agent is stalling, giving up too early, or repeating weak attempts without doing real investigation. It is built around a blunt “performance improvement plan” framing, but the practical value is not the rhetoric alone: it pushes exhaustive troubleshooting, stronger initiative, and a more systematic debugging loop.

Best-fit users and jobs-to-be-done

The skill best fits anyone who has watched an agent:

  • fail the same task multiple times,
  • blame the environment without checking it,
  • stop at “I can’t,”
  • avoid reading source material, logs, configs, or docs,
  • or respond passively when the task clearly needs active investigation.

It is especially relevant for debugging, config failures, deployment issues, API integration problems, and “figure it out” moments where ordinary prompting has not changed the agent’s behavior.

What makes pua-en different from a normal retry

A normal retry prompt often just asks the model to “try again.” pua-en adds a specific trigger condition and a stronger operating stance: do more checking, search more broadly, read more artifacts, verify before blaming constraints, and keep initiative high until real options are exhausted. That makes it more useful when the core problem is not knowledge alone, but weak effort quality.

When pua-en is a poor fit

Do not reach for pua-en on the first failed attempt, and do not use it when a known fix is already in progress. If the task is simple, routine, or already moving forward with a good plan, the skill can add unnecessary intensity instead of better output.

How to Use pua-en skill

Install context for pua-en

The repository exposes the skill at skills/pua-en in tanweai/pua. If your skill runner supports GitHub-hosted skills, use your standard add flow against that repo and select pua-en. A common pattern is:

npx skills add tanweai/pua --skill pua-en

If your environment uses a different loader, the important install decision is simple: this skill is self-contained and the main file to inspect is SKILL.md.

Read this file first

For install review and adoption, start with:

  • skills/pua-en/SKILL.md

This repository snapshot shows no extra rules/, resources/, or helper scripts for this skill, so nearly all of the operational logic lives in that one file. That is good for quick evaluation, but it also means your results depend heavily on how well you trigger and frame the skill.

Know the trigger conditions before invoking it

Use pua-en when one or more of these are true:

  • the agent has already failed twice,
  • it is stuck making small variations of the same attempt,
  • it is drifting toward “manual workaround” without verifying alternatives,
  • it is not reading code, logs, config, docs, or error output proactively,
  • the user is explicitly frustrated and wants the agent to push harder.

Avoid triggering it on first contact with a problem. The skill is designed as an escalation layer, not a default tone for every task.
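The trigger conditions above can be sketched as a simple decision function. This is purely illustrative: the `TaskState` fields and the function name are hypothetical bookkeeping, not anything defined by the skill itself.

```python
# Illustrative sketch of the escalation triggers listed above.
# TaskState and should_invoke_pua_en are hypothetical names, not part of pua-en.
from dataclasses import dataclass


@dataclass
class TaskState:
    failed_attempts: int = 0
    attempts_are_near_duplicates: bool = False   # small variations of the same fix
    proposing_manual_workaround: bool = False    # drifting without verifying alternatives
    read_evidence: bool = True                   # logs, code, config, docs, error output
    user_frustrated: bool = False


def should_invoke_pua_en(state: TaskState) -> bool:
    """Return True when at least one escalation trigger is met."""
    triggers = [
        state.failed_attempts >= 2,
        state.attempts_are_near_duplicates,
        state.proposing_manual_workaround,
        not state.read_evidence,
        state.user_frustrated,
    ]
    return any(triggers)
```

Note that a fresh `TaskState` trips no trigger, which matches the guidance to never escalate on first contact with a problem.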

What input pua-en needs to work well

The skill performs best when you provide the actual working surface, not just a vague complaint. Strong inputs include:

  • the goal,
  • what has already been tried,
  • current errors or symptoms,
  • relevant files, logs, stack traces, or command output,
  • constraints such as access limits, runtime, deployment target, or tools available.

Weak input: “Deployment is broken. Fix it.”

Stronger input: “Our docker compose up fails after the API container starts. Error: ECONNREFUSED to Postgres. I already confirmed the DB container is healthy. Here is docker-compose.yml, the app .env, and the startup logs.”

The second version gives pua-en something to investigate systematically instead of forcing it to guess.

Turn a rough request into a better pua-en prompt

A practical pua-en prompt usually has four parts:

  1. state the outcome,
  2. state failed attempts,
  3. provide evidence,
  4. require active verification before conclusions.

Example:

Use pua-en. We have already tried two fixes and are still stuck. Do not suggest manual workarounds until you inspect the likely causes. Read the error output and config below, list concrete hypotheses, test them against the evidence, and propose the next highest-confidence fix.

This matters because the skill is strongest when paired with visible evidence and explicit expectations for initiative.
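The four-part structure can be assembled mechanically. The helper below is a hypothetical sketch (nothing here is defined by the skill); it simply mirrors the outcome / attempts / evidence / verification layout described above.

```python
# Hypothetical helper: assemble the four-part escalation prompt.
def build_pua_prompt(outcome: str, attempts: list[str], evidence: str) -> str:
    """Build a pua-en prompt: goal, failed attempts, evidence, verification rule."""
    tried = "\n".join(f"- {a}" for a in attempts)
    return (
        "Use pua-en.\n"
        f"Goal: {outcome}\n"
        f"Already tried (do not repeat):\n{tried}\n"
        f"Evidence:\n{evidence}\n"
        "Before concluding, inspect the evidence, list concrete hypotheses, "
        "test them, and propose the next highest-confidence fix."
    )
```

Feeding it the Docker example from earlier would yield a prompt that carries the error text and ruled-out causes along with the escalation framing.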

Best debugging workflow with pua-en

A good workflow is:

  1. let the agent attempt normally,
  2. detect repeated failure or passivity,
  3. invoke pua-en,
  4. make the agent restate the problem, evidence, and hypotheses,
  5. require it to check source artifacts before concluding,
  6. review whether the next step is genuinely new, not a reworded repeat.

The gain from pua-en comes from changing behavior under pressure, not from blindly pasting the same prompt after every error.

What the skill is actually trying to enforce

From the source, the core themes are:

  • exhaustive option search,
  • stronger proactivity,
  • structured troubleshooting,
  • refusal to give up early,
  • explicit self-checking after task work.

In practice, that means you should expect the agent to inspect more evidence, propose more than one plausible cause, and avoid premature claims that something is impossible.

Practical tips that improve output quality

To get better results from pua-en:

  • include exact error text instead of paraphrasing,
  • include the current file or config snippet, not only a summary,
  • tell the model what has already been ruled out,
  • ask for ranked hypotheses, not a single guess,
  • ask it to explain why each next step is higher-value than alternatives.

These inputs reduce fake confidence and make the skill’s “try harder” posture more productive.

Common adoption tradeoffs

The main tradeoff is tone. pua-en uses aggressive performance-culture rhetoric to push effort quality. Some teams will find that motivating; others will find it distracting or culturally mismatched. If your workflow values calm, neutral collaboration, install only if the underlying methodology is worth the tone.

The other tradeoff is scope: the skill is broad enough for coding, research, writing, ops, and API work, but its strongest use case is still stubborn troubleshooting rather than greenfield ideation.

How to evaluate pua-en quickly before team-wide use

A fast evaluation path:

  1. open SKILL.md,
  2. skim the trigger conditions in the description,
  3. inspect the “Three Non-Negotiables,”
  4. test it on one real stuck task,
  5. compare output against your normal escalation prompt.

If the model becomes more investigative, less passive, and less likely to give up without evidence, installing pua-en is probably justified.

pua-en skill FAQ

Is pua-en only for software debugging?

No. The source explicitly positions pua-en for code, config, research, writing, planning, ops, API integration, deployment, and similar work. Still, the highest-value fit is usually debugging-like situations where the real issue is low initiative or shallow investigation.

Is pua-en beginner-friendly?

Yes, with one caveat: beginners can use pua-en, but they still need to provide context. The skill cannot compensate for missing logs, absent requirements, or no reproducible symptoms. It helps the agent work harder and more systematically; it does not magically create evidence.

When should I not use pua-en?

Do not use pua-en:

  • on the first failed attempt,
  • when the agent is already executing a sound fix,
  • when the task is simple and not blocked,
  • when the rhetoric will create more friction than value.

If the issue is missing access, missing files, or unclear user requirements, solve that first.

How is pua-en different from just saying “try harder”?

“Try harder” gives pressure without method. pua-en combines pressure with a troubleshooting frame: inspect, verify, search, test hypotheses, and avoid passive waiting. That usually produces better output than a generic frustration prompt.

Does pua-en require extra repo files or scripts?

No major support files are surfaced for this skill in the repository preview. For adoption, assume SKILL.md is the authoritative source. That keeps setup simple, but it also means you should read the skill text directly rather than expecting external automation.

Can pua-en replace normal prompting?

No. pua-en is an escalation tool, not a default operating mode. Use your normal prompt first. Bring in this skill when the failure mode is repeated underperformance, not whenever you want a standard answer.

How to Improve pua-en skill

Give pua-en better evidence, not more emotion

The biggest quality lever is not harsher wording. It is better task material. If you want pua-en to produce stronger results, provide:

  • exact failure output,
  • the relevant file path or snippet,
  • prior attempts and their outcomes,
  • what success looks like,
  • hard constraints.

That turns the skill from motivational pressure into a useful investigative loop.

Ask for hypothesis-driven output

A strong improvement pattern is to require the model to produce:

  1. observed facts,
  2. candidate causes,
  3. tests or checks,
  4. recommended next action.

This matches what pua-en is trying to enforce and makes it easier to see whether the model is genuinely reasoning or just sounding determined.
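The four-part output contract can be checked mechanically. The sketch below uses hypothetical names (`Diagnosis`, `is_complete`) to show the idea: a response missing any of the four parts is sounding determined, not reasoning.

```python
# Sketch of the four-part output contract; all names here are illustrative.
from dataclasses import dataclass


@dataclass
class Diagnosis:
    facts: list[str]        # observed, verifiable facts
    causes: list[str]       # candidate causes, ideally ranked
    checks: list[str]       # tests that confirm or rule out each cause
    next_action: str        # single recommended next step


def is_complete(d: Diagnosis) -> bool:
    """A diagnosis missing any part fails the contract."""
    return bool(d.facts and d.causes and d.checks and d.next_action)
```

Asking the model to fill this exact structure, and rejecting responses that leave a part empty, is a cheap way to separate real investigation from rhetoric.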

Watch for repeated low-value retries

A common failure mode is fake persistence: the agent keeps generating new wording around the same idea. If that happens, tell it explicitly:

  • do not repeat prior fixes,
  • identify what new evidence would change the diagnosis,
  • inspect a different layer such as config, runtime, dependency, permissions, or environment.

This is one of the most practical ways to improve debugging outcomes with pua-en.

Add boundaries so the skill does not overrun the task

Because pua-en pushes exhaustive effort, it can drift into overly long investigations. Improve results by setting boundaries such as:

  • “give the top 3 hypotheses only,”
  • “prioritize checks that do not require production access,”
  • “propose the fastest verifiable fix first,”
  • “stop after one high-confidence plan.”

This preserves the skill’s initiative while keeping output decision-friendly.
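Boundaries like these can be bolted onto any base prompt. A minimal sketch, with a hypothetical helper name:

```python
# Illustrative: append scope boundaries to a base prompt so the
# exhaustive-search posture stays decision-friendly.
def with_boundaries(base_prompt: str, boundaries: list[str]) -> str:
    rules = "\n".join(f"- {b}" for b in boundaries)
    return f"{base_prompt}\n\nBoundaries:\n{rules}"
```

For example, `with_boundaries("Use pua-en. ...", ["give the top 3 hypotheses only", "stop after one high-confidence plan"])` keeps the investigation bounded without removing the escalation framing.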

Iterate after the first pua-en response

Do not judge the skill from one pass alone. A good second-round instruction is:

Reassess using the evidence we now have. Remove disproven hypotheses, keep only what remains plausible, and propose the next best step with justification.

That helps pua-en stay evidence-led instead of escalating into theatrical persistence.

Improve team adoption with a light wrapper

If the tone is too strong for your environment, keep the structure and soften the wrapper. The repository’s value is the insistence on initiative, verification, and exhaustive search. You can preserve those behaviors while adjusting presentation for your team’s style, as long as the operational expectations stay explicit.
