debugger
by zhaono1

debugger is a structured debugging workflow for reproducing issues, isolating root causes, and verifying fixes, with checklists, references, and a debug-report script.
This skill scores 71/100, which means it is acceptable to list for directory users: it gives agents a clear debugging trigger and a reusable high-level workflow, but users should expect a fairly generic process rather than deeply opinionated execution guidance.
- Strong triggerability: SKILL.md explicitly activates on bugs, errors, unexpected behavior, and phrases like "debug this" or "help debug."
- Provides a structured debugging workflow with phases for reproduction, isolation, root-cause analysis, and fixing, plus references for checklist, error types, and patterns.
- Includes a practical support script (`scripts/debug_report.py`) that generates a debug report template, adding some reusable execution support beyond plain prompt text.
- Operational guidance stays broad and checklist-like; repository signals show limited constraints/practical detail, so agents may still need judgment similar to a generic debugging prompt.
- Install and adoption clarity is light: README says it is part of a collection, but SKILL.md has no install command and the included script example does not match the script's actual CLI flags.
Overview of debugger skill
What the debugger skill is for
The debugger skill is a structured debugging workflow for finding root causes faster than a generic “what’s wrong?” prompt. It is built for cases where code throws errors, behaves unexpectedly, regresses after changes, or fails only in certain environments. Instead of jumping straight to a fix, the debugger skill pushes a sequence that matters in real debugging work: reproduce, isolate, analyze, fix, and verify.
Who should install this debugger skill
This debugger skill fits developers, AI coding agents, and technical teams that want a repeatable process for debugging rather than ad hoc troubleshooting. It is especially useful if you often work from stack traces, logs, partial bug reports, or uncertain repro steps. It is less about deep framework-specific expertise and more about improving debugging discipline across projects.
What job it helps you get done
The real job-to-be-done is not “explain an error message.” It is to turn a vague failure into a clean investigation path: what changed, how to reproduce it, where to narrow scope, which evidence to gather, and how to verify the final fix. That makes this debugger install more valuable when a team is losing time to guesswork or repeatedly fixing symptoms instead of causes.
Why this debugger skill stands out
The useful differentiator is its operational shape. The repository includes:
- a phased debugging workflow in `SKILL.md`
- quick-reference debugging aids in `references/checklist.md`, `references/errors.md`, and `references/patterns.md`
- a practical report generator in `scripts/debug_report.py`
That combination makes the debugger skill better for live incident-style work than a plain prompt template. It gives you a process, a checklist, common failure categories, and a handoff artifact.
What it does not try to do
This is not a language-specific debugger, IDE extension, or tracing platform. It will not replace runtime tools, profilers, or framework docs. If your main need is interactive stepping, memory inspection, or protocol-level tracing, use those tools directly and treat this debugger guide as the reasoning layer around them.
How to Use debugger skill
Install context and repo path
The skill lives at `skills/debugger` inside `zhaono1/agent-playbook`. If you use a skill loader that supports GitHub sources, install from the repository and target the debugger skill. A common pattern is:
`npx skills add https://github.com/zhaono1/agent-playbook --skill debugger`
If your setup differs, the important part is loading the skills/debugger directory so the agent can access SKILL.md plus the supporting references/ and scripts/ files.
Read these files first
For fast adoption, read in this order:
1. `skills/debugger/SKILL.md`
2. `skills/debugger/references/checklist.md`
3. `skills/debugger/references/patterns.md`
4. `skills/debugger/references/errors.md`
5. `skills/debugger/scripts/debug_report.py`
This path mirrors actual debugger usage: workflow first, then investigation heuristics, then error categories, then documentation support.
How the debugger skill is triggered in practice
The repository is designed to activate when a user reports:
- an error or exception
- unexpected behavior
- “debug this”
- “why isn’t this working?”
In practice, the debugger skill works best when you explicitly frame the request as a debugging task and give evidence. Example:
“Use the debugger skill. This API returns 500 only in staging. Expected 200. Started after yesterday’s deploy. Here is the stack trace, the endpoint, and the last 3 commits.”
That prompt is much stronger than “fix this bug.”
What input the debugger skill needs
Good debugger usage depends on concrete inputs. Provide as many of these as you can:
- exact error text
- stack trace
- expected vs actual behavior
- reproducible steps
- recent code or config changes
- environment details
- relevant logs
- narrowed file or component scope
The skill’s workflow assumes evidence gathering, so missing repro steps or missing actual output will reduce output quality more than missing implementation detail.
Turn a rough request into a strong debugger prompt
Weak prompt:
“Why does this fail?”
Stronger prompt:
“Use the debugger skill to diagnose this failure. After upgrading dependencies, `npm test` fails in `auth.spec.ts` with `TypeError: Cannot read properties of undefined`. Expected tests to pass. Actual behavior: 6 failures on CI, 0 locally. Recent changes: lockfile update and config edit. Please help reproduce, isolate likely causes, rank hypotheses, and suggest the smallest safe fix.”
Why this works:
- names the debugging goal
- gives expected vs actual behavior
- includes environment mismatch
- includes recent changes
- asks for investigation before patching
Suggested debugger workflow
A practical debugger guide for real usage:
- Reproduce the issue exactly.
- Capture expected vs actual behavior.
- Check recent changes with `git log --oneline -10`.
- Gather logs or traces.
- Isolate with a minimal repro or binary search.
- Map the failure to an error category.
- Form root-cause hypotheses.
- Test the smallest likely fix.
- Verify with regression coverage.
This is mostly what the skill encodes, but following it explicitly helps when the agent starts proposing fixes too early.
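The "minimal repro or binary search" isolation step can be sketched generically. The helper below is hypothetical, not code from the repository: given an ordered list of changes (for example, recent commits) and a predicate that reports whether the bug is present at a given change, a binary search finds the first bad change in logarithmically many checks:

```python
def first_failing(changes, is_broken):
    """Binary search over an ordered list of changes for the first one
    where is_broken(change) is True. Assumes every change before the
    culprit passes and every change from the culprit onward fails."""
    lo, hi = 0, len(changes) - 1
    culprit = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_broken(changes[mid]):
            culprit = changes[mid]  # candidate; keep looking earlier
            hi = mid - 1
        else:
            lo = mid + 1
    return culprit

# Toy usage: pretend the bug was introduced by commit "c4".
commits = ["c1", "c2", "c3", "c4", "c5", "c6"]
print(first_failing(commits, lambda c: c >= "c4"))  # -> c4
```

In real usage the predicate would check out a commit and run the failing test (`git bisect` automates exactly this loop); the point is that isolation is a search problem, not a guessing problem.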
Use the reference files as decision aids
The support files are short, but they change output quality:
- `references/checklist.md` keeps the session honest: reproduce, isolate, root cause, fix, regression coverage.
- `references/patterns.md` is useful when the issue is broad or noisy; it suggests binary search, targeted logging, and minimal repro reduction.
- `references/errors.md` helps classify common failures like null access, race conditions, config mismatches, and data shape drift.
Use them when the first debugger output feels generic. They are better for sharpening the investigation path than for learning syntax.
Generate a reusable debug report
If you want a documented investigation artifact, run the report script (verify the exact flags against the script itself, since the documented example may not match its actual CLI):

`python skills/debugger/scripts/debug_report.py --name "Checkout timeout in staging" --owner payments`
This creates a markdown report template with sections for summary, environment, repro steps, logs, root cause, fix, regression tests, and follow-ups. For team debugging, this is one of the most practical parts of the repository because it converts ephemeral investigation into something reviewable.
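Because the documented flags may not match the script's actual CLI, treat the invocation above as illustrative. As a rough sketch of what such a report generator typically does (hypothetical code, not the repository's `debug_report.py`; the section names follow the list just described):

```python
from datetime import date

# Sections of the generated markdown skeleton (this sketch's assumption,
# mirroring the sections the article lists for the real script).
SECTIONS = [
    "Summary", "Environment", "Repro Steps", "Logs",
    "Root Cause", "Fix", "Regression Tests", "Follow-ups",
]

def build_report(name: str, owner: str) -> str:
    """Assemble a markdown debug-report skeleton for one investigation."""
    lines = [
        f"# Debug Report: {name}",
        f"- Owner: {owner}",
        f"- Date: {date.today().isoformat()}",
        "",
    ]
    for section in SECTIONS:
        lines += [f"## {section}", "", "_TODO_", ""]
    return "\n".join(lines)

print(build_report("Checkout timeout in staging", "payments"))
```

Even this minimal shape is useful: each `_TODO_` section forces the investigator to record evidence rather than just a conclusion.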
Best debugging use cases for the debugger skill
This debugger skill is most useful when:
- the bug is reproducible but not obvious
- logs exist but are noisy
- the failure started after a change
- the problem spans code, config, and environment
- you need a disciplined triage flow before editing code
It is less compelling for tiny syntax mistakes you can spot instantly or for domain-specific incidents that require proprietary operational context the agent cannot access.
Practical tips that improve debugger usage
Ask the skill to separate:
- facts
- hypotheses
- next checks
- proposed fix
- verification steps
That structure prevents premature certainty. Also ask it to rank likely causes and to say what evidence would falsify each one. This turns the debugger skill from “smart guesser” into a better investigation partner.
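If you want to hand the agent that structure directly, you can paste a response skeleton like this into the prompt (an illustrative template, not a file from the repository):

```
## Facts
- (observed evidence only)

## Hypotheses
- H1: ... (supporting evidence; what would falsify it)
- H2: ...

## Next checks
- cheapest check first

## Proposed fix
- smallest change that addresses the root cause

## Verification steps
- regression test or monitoring check
```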
debugger skill FAQ
Is this debugger skill better than a normal prompt?
Usually yes, when the issue is multi-step. A generic prompt often jumps from symptom to guessed fix. The debugger skill is better when you need systematic narrowing, evidence gathering, and verification. If the bug is trivial and fully visible in one snippet, a normal prompt may be enough.
Is the debugger install beginner-friendly?
Yes, because the core workflow is simple and concrete. Beginners benefit from the phased process and checklist. The main catch is that the skill assumes you can provide some evidence, such as logs, stack traces, or repro steps. Without those, any debugger guide becomes guess-heavy.
Can I use this debugger skill with any language or stack?
Mostly yes. The debugger skill is process-oriented, not tied to one language. Its error examples lean general rather than framework-specific. That makes it portable, but it also means you may need to add stack-specific details yourself for best results.
When should I not use this debugger skill?
Skip it when:
- you need interactive runtime debugging more than reasoning help
- the issue is purely operational and the agent cannot access the system
- the bug is a one-line typo already identified
- you need vendor-specific expertise that the repository does not contain
In those cases, use direct tooling or domain docs first.
Does it help with team handoff and incident follow-up?
Yes. The `debug_report.py` script is the strongest sign that this debugger skill was designed for more than one-off chats. It helps convert a debugging session into a reusable report with ownership, repro steps, root cause, fix, and follow-ups.
How to Improve debugger skill
Give the debugger skill evidence, not just symptoms
The fastest way to improve debugger output is to include raw evidence:
- exact command run
- full error text
- failing input
- environment where it breaks
- recent commit range
- what you already tried
“Here is the stack trace and the last good commit” is far better than “it’s broken after my changes.”
Force a minimal repro early
A common failure mode in debugger usage is investigating too much surface area. Ask the skill to help create the smallest reproducible case. This often removes noise from framework setup, unrelated services, or stale state and makes root causes appear faster.
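Minimal-repro reduction can itself be semi-automated. Here is a minimal sketch, loosely inspired by delta debugging and not part of the skill: greedily drop chunks of a failing input while a check confirms the bug still reproduces:

```python
def shrink(items, still_fails):
    """Greedy input reduction: repeatedly try dropping chunks of the
    failing input, keeping any smaller version that still reproduces
    the bug. A simplified take on delta debugging."""
    chunk = max(1, len(items) // 2)
    while chunk >= 1:
        i, reduced = 0, False
        while i < len(items):
            candidate = items[:i] + items[i + chunk:]
            if candidate and still_fails(candidate):
                items, reduced = candidate, True  # keep the smaller repro
            else:
                i += chunk  # this chunk is needed; move on
        if not reduced:
            chunk //= 2  # nothing removable at this size; go finer
    return items

# Toy bug: the failure needs lines "a" and "d" to be present together.
log = ["a", "b", "c", "d", "e", "f"]
print(shrink(log, lambda xs: "a" in xs and "d" in xs))  # -> ['a', 'd']
```

In practice `still_fails` would rerun the failing test against the candidate input; each kept reduction strips framework noise, unrelated services, or stale state from the repro.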
Ask for hypothesis ranking
When multiple causes are plausible, tell the debugger skill to rank them by likelihood and by ease of verification. That gives you a better investigation order. Example:
“List the top 3 root-cause hypotheses, what evidence supports each, and the next cheapest check to confirm or reject them.”
This is especially useful for flaky tests, integration failures, and config drift.
Separate root cause from fix quality
Another common issue is accepting the first fix that makes the symptom disappear. Use the debugger guide to ask:
- why this happened
- what condition allowed it
- what regression test should prove it stays fixed
That matters for recurring issues like null handling, race conditions, and mismatched config.
Improve the first output with repository context
If the bug is in your own codebase, provide:
- suspected files
- package or service boundary
- deploy timing
- config files involved
- whether the issue is local, CI, staging, or production only
The debugger skill is much better when it can connect evidence to system boundaries instead of reasoning from a pasted stack trace alone.
Use the references to sharpen weak answers
If the first answer feels generic, explicitly tell the agent to use:
- `references/checklist.md` for process completeness
- `references/patterns.md` for isolation methods
- `references/errors.md` for error-family matching
This is a practical way to improve debugger results without rewriting the whole prompt.
Iterate after the first debugging pass
Good debugger usage is iterative. After the first output:
- run one suggested check
- bring back the result
- ask the skill to update hypotheses
- only then edit code
This loop is where the debugger skill becomes more useful than a static debugger guide. It helps you converge instead of generating one large, speculative answer.
Add regression proof before closing
The repository checklist explicitly includes regression coverage, and that is the right place to end. Ask the debugger skill to propose the smallest test, assertion, or monitoring check that would catch the issue next time. A fix without verification is usually incomplete debugging, especially for intermittent or environment-dependent bugs.
