pua-ja
by tanweai
pua-ja is a Japanese-language escalation skill that pushes stalled agents to investigate harder, use tools before asking users, and verify results after repeated failures. Best for teams that want a trigger-based behavior layer for debugging, research, writing, and Context Engineering.
This skill scores 68/100: acceptable for listing because it gives agents a clear trigger pattern and a reusable behavioral framework for pushing past repeated failures, but directory users should treat it as a coaching/operating style prompt rather than a tightly specified workflow skill.
- Frontmatter description gives explicit trigger conditions, including repeated failure loops, premature 'cannot solve' responses, passivity, and user frustration cues.
- Substantial SKILL.md content defines concrete operating principles such as tool-first investigation, evidence-backed user questions, and proactive validation beyond the minimum task.
- Broad applicability across coding, debugging, research, writing, planning, operations, API integration, data analysis, and deployment increases reuse when an agent is stuck or underperforming.
- Repository evidence shows no support files, scripts, rules, or reference assets, so execution depends heavily on the prose being interpreted correctly by the agent.
- The skill appears more like a motivational/debugging methodology than a bounded task workflow, which may make outcomes inconsistent across agents and environments.
Overview of pua-ja skill
What pua-ja is for
pua-ja is a Japanese-language escalation skill for moments when an agent is stuck, getting passive, or about to give up too early. Its core job is not domain expertise by itself; it is to force a more relentless, evidence-driven recovery workflow across coding, debugging, research, writing, planning, and operations tasks.
Who should use pua-ja
The best fit for pua-ja is teams using AI agents in real work where weak default behavior is costly: repeated failed attempts, shallow debugging, premature “cannot solve,” or lazy handoff back to the user. It is especially relevant for pua-ja for Context Engineering because it changes agent behavior under pressure, not just output style.
What makes pua-ja different
Unlike a generic “try harder” prompt, the pua-ja skill has explicit trigger conditions and a concrete behavioral model:
- activate after repeated failure or looped retries
- block unsupported excuses
- require tool use before asking the user
- push end-to-end ownership instead of narrow task completion
That makes it useful as an intervention layer when a normal system prompt is not enough.
What users care about before installing
Most users evaluating pua-ja before installing care about four things:
- whether it improves persistence without creating noise
- whether it helps on any task type, not just code
- whether the aggressive tone is culturally or operationally acceptable
- whether it adds a usable workflow, not just motivational language
On those points, the repository is strong on activation criteria and operator mindset, but light on supporting files, scripts, or examples. Expect a behavior framework more than a turnkey toolkit.
When pua-ja is a good fit
Use pua-ja when your agent:
- has already failed twice
- keeps tweaking the same approach without broadening the search
- wants to blame the environment without proof
- asks the user for information it could investigate itself
- produces narrow fixes without validation
When pua-ja is not a good fit
Do not reach for pua-ja skill on the first normal miss, during a straightforward known fix, or when the main problem is missing permissions, unavailable tools, or unclear business requirements. In those cases, a clearer task brief or better environment access will matter more than escalation pressure.
How to Use pua-ja skill
Install context for pua-ja
If your skill runner supports GitHub-hosted skills, add pua-ja from the tanweai/pua repository and then load the skills/pua-ja entry. The baseline example commonly used for this repo family is:
npx skills add tanweai/pua --skill pua-ja
If your environment uses a different loader, the practical target is the same: make the contents of skills/pua-ja/SKILL.md available to the agent at runtime.
Read this file first
Start with:
skills/pua-ja/SKILL.md
This repository snapshot exposes only one meaningful file for this skill, so there is no large support tree to inspect first. That is good for quick adoption, but it also means your team should decide upfront how to operationalize triggers and tone.
Understand the trigger before using pua-ja
The most important adoption detail is when to invoke pua-ja skill. The source text is designed for escalation, not default use. Practical trigger cases:
- two or more failed attempts
- repeated micro-adjustments to the same approach
- the agent starts saying “impossible,” “manual work required,” or similar without exhausting available evidence
- the agent is passive: not searching, not reading files, not testing
- the user explicitly signals frustration
If none of those are true, keep pua-ja inactive.
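As a sketch of how those triggers might be operationalized in an agent harness, the check below encodes the failure-count, surrender-phrase, and frustration conditions. All names here are hypothetical; the skill itself ships only prose, not code:

```python
# Hypothetical trigger check for activating pua-ja in an agent loop.
# The phrase list and thresholds mirror the trigger cases listed above.

SURRENDER_PHRASES = ("impossible", "manual work required", "cannot solve")

def should_activate_puaja(failed_attempts: int,
                          last_reply: str,
                          user_frustrated: bool) -> bool:
    """Return True only when an escalation trigger from the skill applies."""
    if failed_attempts >= 2:          # two or more failed attempts
        return True
    reply = last_reply.lower()
    if any(p in reply for p in SURRENDER_PHRASES):
        return True                   # premature surrender language
    return user_frustrated            # explicit frustration signal

# Two failures is enough to escalate; a single clean attempt is not.
print(should_activate_puaja(2, "retrying...", False))   # True
print(should_activate_puaja(1, "working on it", False))  # False
```

Keeping the check this explicit also makes the "keep pua-ja inactive" default easy to enforce: the skill text is simply never loaded unless the function returns True.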
What input pua-ja needs
pua-ja works best when you provide:
- the concrete task goal
- what has already been tried
- current errors or symptoms
- available tools and permissions
- what “done” looks like
- constraints such as time, risk, or files the agent may modify
Without that context, the skill can push harder but still push in the wrong direction.
Turn a rough request into a strong pua-ja prompt
Weak:
- “Fix this.”
- “Try again.”
- “Work harder.”
Stronger:
- “Use pua-ja for this stalled debugging task. We already tried restarting the service and changing env vars. Read the repo, inspect logs, test assumptions, and do not ask me to verify something you can verify yourself. Only ask me for information if it is unavailable through tools. Success means the endpoint returns 200 locally and the root cause is explained.”
That prompt works because it gives the skill a target, prior attempts, tool expectation, and success condition.
Example pua-ja usage pattern
A practical pua-ja pattern for agent sessions is:
- summarize the current blockage
- list failed attempts
- instruct the agent to widen the search space
- require evidence before escalation to the user
- require verification before claiming completion
- ask for related-risk checks after the main fix
This mirrors the skill’s strongest value: replacing passive retries with systematic expansion and validation.
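The session pattern above can be sketched as a small prompt builder. This is an illustrative helper, not part of the skill; the function name and wording are assumptions:

```python
# Hypothetical builder that turns the six-step session pattern into a
# single escalation prompt: blockage summary, attempt list, then the
# widen/evidence/verify/related-risk instructions.

def build_escalation_prompt(blockage: str, failed_attempts: list[str]) -> str:
    attempts = "\n".join(f"- {a}" for a in failed_attempts)
    return (
        f"Current blockage: {blockage}\n"
        f"Failed attempts so far:\n{attempts}\n"
        "Widen the search space beyond the approaches above.\n"
        "Before asking me anything, state the evidence you gathered.\n"
        "Verify the result before claiming completion.\n"
        "After the fix, check for related risks in adjacent code."
    )

print(build_escalation_prompt(
    "endpoint returns 500",
    ["restarted the service", "changed env vars"],
))
```

Listing the failed attempts explicitly is what distinguishes this from a bare retry: it tells the agent which search directions are already exhausted.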
How pua-ja changes agent behavior
In practice, pua-ja skill should make the agent:
- inspect surrounding context, not just the visible error line
- search for similar patterns in nearby files
- test whether a fix generalizes
- verify the result with commands, tests, or output checks
- report what it investigated before asking the user anything
If your agent is already doing all of that, pua-ja may add tone more than net capability.
Best workflow for pua-ja for Context Engineering
For pua-ja for Context Engineering, the useful pattern is to frame it as a conditional escalation layer:
- keep a normal task prompt for baseline behavior
- add pua-ja only after failure thresholds are met
- pass the full attempt history into the escalation prompt
- explicitly ask for broader search, proof gathering, and self-verification
This avoids overusing an intense style while still benefiting from the skill when the session starts degrading.
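A minimal sketch of that conditional layering, assuming the SKILL.md contents have been loaded as a string (`puaja_text` and the threshold value are assumptions, not defined by the skill):

```python
# Hypothetical two-layer prompt: baseline behavior by default, the pua-ja
# text prepended only once the failure threshold is crossed, with the
# full attempt history passed into the escalation prompt.

FAILURE_THRESHOLD = 2  # assumed threshold; tune per team

def compose_prompt(task_prompt: str,
                   attempt_log: list[str],
                   puaja_text: str) -> str:
    if len(attempt_log) < FAILURE_THRESHOLD:
        return task_prompt  # normal task prompt for baseline behavior
    history = "\n".join(f"- {a}" for a in attempt_log)
    return (
        f"{puaja_text}\n\n"
        f"Task: {task_prompt}\n"
        f"Attempt history:\n{history}\n"
        "Broaden the search, gather proof, and self-verify before reporting."
    )
```

Because the skill text is injected only past the threshold, early turns keep their normal tone and the intense style appears exactly when the session starts degrading.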
Practical prompt clauses that improve output
Add clauses like these when using pua-ja:
- “State what you checked before asking me anything.”
- “Do not attribute the issue to the environment without evidence.”
- “After fixing the immediate problem, check for adjacent instances of the same pattern.”
- “Verify with an actual command, test, or output, not just reasoning.”
These clauses align tightly with the source material and materially improve results.
Misuse patterns to avoid
Common bad pua-ja usage patterns:
- activating it from the first attempt
- using it as a substitute for missing context
- combining it with prompts that forbid tool use
- treating aggressive tone as proof of rigor
- asking for speed while also requiring exhaustive investigation
The skill is most effective when pressure is paired with access, evidence, and a clear success definition.
pua-ja skill FAQ
Is pua-ja only for coding?
No. The source explicitly positions pua-ja across all task types, including debugging, research, writing, planning, operations, API integration, and data work. The common thread is stalled execution and low initiative, not programming specifically.
Is pua-ja beginner-friendly?
Partly. pua-ja skill is easy to load because it is a single-file skill, but it assumes you can judge when escalation is appropriate. Beginners may misuse it as a default mode and end up with more forceful but not better output.
How is pua-ja different from an ordinary prompt?
A normal prompt might say “be proactive.” pua-ja goes further by defining failure triggers, banning premature surrender, requiring self-service investigation, and pushing verification. That structure is the main reason to choose it over ad hoc prompting.
Does pua-ja replace domain-specific skills?
No. pua-ja works best as a behavioral overlay. If you need framework-specific knowledge, deployment expertise, or research methodology, pair it with domain skills or better task context.
When should I not install pua-ja?
Skip installing pua-ja if your main issue is tone sensitivity, compliance constraints around confrontational language, or lack of tool access. The skill helps least when the agent cannot actually inspect, test, or search.
Does pua-ja need extra repository files?
Not currently. Based on the available repository evidence, SKILL.md is the main artifact. That keeps adoption simple, but you should not expect bundled scripts, rules, or reference docs to operationalize the workflow for you.
How to Improve pua-ja skill
Give pua-ja better task state
The fastest way to improve pua-ja results is to provide a compact case file:
- objective
- observed failure
- attempts already made
- relevant files or URLs
- available tools
- verification command or acceptance test
This prevents the skill from spending effort rediscovering basics and increases the chance of useful escalation.
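The case file above is easy to keep as a small structure that renders into context. This is a hypothetical shape, not something the repository defines:

```python
from dataclasses import dataclass, field

# Hypothetical case-file record mirroring the six items listed above.
@dataclass
class CaseFile:
    objective: str
    observed_failure: str
    attempts: list = field(default_factory=list)
    relevant_paths: list = field(default_factory=list)
    tools: list = field(default_factory=list)
    verification: str = ""  # command or acceptance test that defines "done"

    def as_context(self) -> str:
        """Render the case file as a compact context block for the agent."""
        return "\n".join([
            f"Objective: {self.objective}",
            f"Observed failure: {self.observed_failure}",
            "Attempts: " + "; ".join(self.attempts),
            "Relevant files: " + ", ".join(self.relevant_paths),
            "Tools: " + ", ".join(self.tools),
            f"Verification: {self.verification}",
        ])
```

Filling every field before escalating is the point: an empty `attempts` list usually means pua-ja should not be active yet.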
Pass attempt history, not just the latest error
pua-ja skill is built for repeated failure. If you hide the prior attempts, the agent cannot tell whether it is in a true escalation state or just starting normal diagnosis. Include what was tried and why it failed.
Ask for evidence-backed user questions
One of the best ways to sharpen pua-ja usage is to require a standard before the agent asks you for help:
- what it investigated
- what evidence it found
- why the remaining question cannot be answered with tools
That reduces low-value interruptions.
Force broader search after repeated failure
A common failure mode is “same method, tiny variation.” Improve pua-ja by explicitly instructing:
- change diagnostic angle after two failed attempts
- inspect adjacent files and logs
- check for similar incidents elsewhere in the repo
- test an alternative hypothesis, not just a parameter tweak
Require verification, not declarations
Another common failure mode is claiming completion without proof. For better pua-ja outcomes, ask the agent to validate with something concrete:
- tests
- build output
- API response
- reproduced-and-resolved error
- file diff plus runtime check
Adapt the tone to your environment
The repository’s voice is intentionally harsh. If that is useful for your internal workflow, keep it. If not, preserve the operational rules of pua-ja for Context Engineering while softening the phrasing. The value is in trigger discipline and proactive behavior, not mandatory verbal intensity.
Pair pua-ja with explicit stop conditions
To prevent over-investigation, define boundaries:
- max timebox
- acceptable fallback
- when to escalate to a human
- what level of confidence is required
This makes pua-ja more deployable in production workflows.
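One way to sketch those boundaries is a small guard that the harness consults between attempts. The class and its limits are assumptions for illustration:

```python
import time

# Hypothetical stop-condition guard: bounds an escalated pua-ja session
# with a timebox and an attempt cap, after which the harness should fall
# back or escalate to a human instead of investigating further.
class StopConditions:
    def __init__(self, timebox_s: float, max_attempts: int):
        self.deadline = time.monotonic() + timebox_s
        self.max_attempts = max_attempts

    def should_stop(self, attempts: int) -> bool:
        """True once either the attempt cap or the timebox is exhausted."""
        return (attempts >= self.max_attempts
                or time.monotonic() >= self.deadline)

guard = StopConditions(timebox_s=1800, max_attempts=5)  # 30 min, 5 tries
print(guard.should_stop(0))  # False: investigation may continue
```

Checking the guard before each new attempt keeps "relentless" bounded: the skill supplies the pressure, the guard supplies the exit.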
Iterate after the first pua-ja output
If the first escalated response is still shallow, do not just say “go deeper.” Give a correction with direction:
- “You still have not shown what files you inspected.”
- “You proposed environment issues without proof.”
- “You fixed one instance but did not search for related occurrences.”
- “You claimed success without running verification.”
That kind of feedback is much more effective than generic dissatisfaction.
