critique
by NeoLabHQ
critique is a report-only review skill that uses multiple specialized judges, debate, and consensus to assess completed work. It helps review code for correctness, quality, and missed issues before merging. Install critique from the NeoLabHQ context-engineering-kit and use it with file paths, commits, or context.
This skill scores 78/100, which means it is worth listing for Agent Skills Finder: it has enough concrete workflow content for agents to trigger and use with less guesswork than a generic prompt, though users should expect a report-only review workflow rather than automated remediation. The repository gives directory users a credible basis for an install decision because the SKILL.md includes a clear purpose, a default scope when no arguments are provided, and a detailed multi-phase critique process.
- Clear triggerability via optional file paths, commits, or context, with a default to recent changes.
- Substantial operational guidance: multi-agent debate, CoVe validation, and consensus-building workflow are spelled out.
- Low placeholder risk: valid frontmatter, no experimental/demo markers, and a large, structured SKILL.md body.
- No install command or supporting files, so adoption relies entirely on the SKILL.md being interpreted correctly.
- The skill is report-only; users needing automatic fixes or implementation steps will need another tool or prompt.
Overview of critique skill
What critique does
The critique skill is a review workflow for evaluating completed work with multiple specialized judges, debate, and consensus. It is designed for code review and other change review tasks where you want more than a single-pass opinion.
Who it is for
Use critique if you want an agent to assess correctness, quality, and missed issues before merging. It fits reviewers, maintainers, and builders who need a structured critique skill instead of a generic code-review prompt.
Why it matters
The main value is consistency under uncertainty: judges inspect the work independently, validate their own findings, and reconcile disagreements. That reduces shallow praise, blind spots, and one-dimensional feedback.
Where it fits best
This skill is strongest when the work already exists and the goal is to judge it, not to rewrite it. If you need implementation, refactoring, or auto-fixing, critique is the wrong tool.
How to Use critique skill
Install the critique skill
Install with npx skills add NeoLabHQ/context-engineering-kit --skill critique. The repo path is plugins/reflexion/skills/critique, so the skill is meant to be used inside that context-engineering kit rather than as a standalone utility.
Give the skill a clear review target
The critique usage pattern works best when you provide a concrete scope: changed files, a commit range, a PR link, or a specific concern. The built-in hint supports file paths, commits, or context, and defaults to recent changes only if you give nothing.
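One way to produce that concrete scope before invoking the skill is to let git enumerate the changed files for you. The sketch below builds a throwaway repository so it is self-contained; the file name and commit messages are invented for illustration:

```shell
# Build a throwaway repo so the example is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "base"
echo "export function login() {}" > auth.ts
git add auth.ts
git -c user.email=a@b -c user.name=demo commit -q -m "add auth"

# The concrete review target: files changed in the last commit.
git diff --name-only HEAD~1..HEAD   # → auth.ts
```

Pasting that file list (or the commit range itself) into the critique request removes any guesswork about what the judges should examine.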
Start with the right files
Read SKILL.md first, then inspect any nearby workflow or metadata files in the repo. In this plugin there are no scripts/, references/, resources/, or rules/ helpers, so the core operating instructions live in the skill file itself.
Write prompts that define review intent
A stronger request sounds like: “Critique src/auth.ts and src/session.ts for security, regression risk, and test coverage gaps.” A weaker request like “review this code” leaves the judges guessing which standards matter, which lowers the value of the resulting critique.
critique skill FAQ
Is critique only for code review?
No. It is broader than simple code review and can judge completed work, design decisions, or implementation quality. Still, it is best when the output should be a review report, not a patch.
How is critique different from an ordinary prompt?
A normal prompt usually produces one opinion. The critique skill adds a structured multi-agent process, independent validation, and consensus building, which is better when you care about reliability and competing interpretations.
Is critique beginner-friendly?
Yes, if the scope is specific. Beginners get better results when they name the exact files or change set and ask for the criteria that matter most, instead of expecting the skill to infer everything.
When should I not use critique?
Do not use it when you need edits applied automatically, when the task is trivial, or when the review scope is too vague to evaluate fairly. In those cases, a direct implementation prompt or targeted lint/test workflow is faster.
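For the trivial-change case, the “targeted lint/test workflow” mentioned above can be a single mechanical check rather than a multi-judge debate. A minimal sketch, with an invented file standing in for the change under review:

```shell
# A tiny file standing in for the change under review.
tmp=$(mktemp -d)
cat > "$tmp/session.py" <<'EOF'
def expire(ttl):
    return ttl <= 0
EOF

# Fast, mechanical check: does the file even compile?
python3 -m py_compile "$tmp/session.py" && echo "syntax ok"
```

If a check like this already answers the question, a full critique run adds latency without adding judgment.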
How to Improve critique skill
Provide stronger review criteria
The biggest quality jump comes from telling critique what to optimize for: correctness, security, performance, maintainability, API compatibility, or test completeness. Without that, the judges may over-index on obvious issues and miss the risks you actually care about.
Narrow the scope before asking
If the review target is large, split it into a commit, module, or feature slice. The critique skill works better when the input is bounded, because the debate can focus on real tradeoffs instead of summarizing the entire repository.
Include evidence the judges can check
Give relevant file paths, expected behavior, constraints, and any known failure modes. That helps the critique skill verify claims instead of guessing intent, which is especially important for code review.
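A simple way to package that evidence is a short request file the judges can check line by line. The fields and contents below are an invented template, not a format the skill requires:

```shell
# Write a checkable review request; every line is a claim a judge can verify.
req=$(mktemp)
cat > "$req" <<'EOF'
Scope: src/auth.ts, src/session.ts
Expected behavior: sessions expire after 30 minutes of inactivity
Constraints: no new runtime dependencies
Known failure modes: token refresh can race with logout
EOF
cat "$req"
```

Handing the skill a file like this turns vague intent into concrete claims, so the debate phase argues about evidence rather than about what you meant.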
Iterate on the first report
Use the first pass to surface disagreements, then ask for a second critique on the highest-risk findings or the areas with weak evidence. That iterative loop turns the critique skill into a sharper decision aid rather than a one-shot summary.
