reflect
by NeoLabHQ
reflect is a Skill Validation tool for reviewing a prior response or output. It uses complexity triage and verification to catch missed flaws, weak reasoning, and overconfident approval before work ships.
This skill scores 63/100, which means it is worth listing but only as a limited, cautionary install for users who specifically want a self-reflection / quality-gate workflow. The repository shows a substantial, non-placeholder skill with valid frontmatter, a clear purpose, and many workflow/constraint sections, but it lacks supporting files and an install command, so directory users will need to inspect the SKILL.md closely before adopting it.
- Substantial skill body with many headings and workflow/constraint signals, suggesting real operational content rather than a stub.
- Frontmatter is valid and the trigger is explicit: reflect on a prior response/output using an iterative self-refinement framework.
- No placeholder markers or experimental/test-only signals were found, which supports basic trustworthiness.
- No install command or supporting resources/files are provided, which makes adoption less turnkey for directory users.
- The tone is highly opinionated and adversarial, so it may fit quality-gating use cases better than general-purpose reflection.
Overview of reflect skill
What reflect is for
reflect is a Skill Validation tool for second-pass review: it takes a completed or near-complete response and pressure-tests it for missed flaws, weak reasoning, or overconfident approval. It is most useful when you need a fast but skeptical quality gate, not a fresh solution.
Who should install it
Use reflect if you review AI-generated work, ship production-facing answers, or need a disciplined “should this pass?” check. It fits agents that can supply the prior output plus the task context. If you want brainstorming or drafting help, this is the wrong skill.
What makes it different
The skill is built around complexity triage, confidence checks, and verification-minded feedback. That means installing reflect is mainly valuable when you want the model to decide how deeply to inspect, then focus scrutiny where failure is most likely. It is less about style polish and more about catching defects before they spread.
How to Use reflect skill
Install reflect and point it at a prior answer
Install the reflect skill in your agent environment, then invoke it with the target output you want reviewed. The repo’s own install pattern is npx skills add NeoLabHQ/context-engineering-kit --skill reflect. For best results, include the original prompt, the draft response, and any acceptance criteria.
Give it the right input shape
reflect works best when the input names the task, the stakes, and the confidence threshold. A strong prompt looks like: “Reflect on this deployment note for correctness and omitted risks; deep reflect if less than 90% confidence.” A weak prompt is only: “Check this.” The more explicit your pass/fail criteria, the more useful the review.
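The input shape described above can be sketched as a small helper. This is a hypothetical illustration, not part of the skill itself: the build_reflect_prompt function and its field names are assumptions, used only to show how to make the task, stakes, and confidence threshold explicit.

```python
def build_reflect_prompt(task: str, draft: str, stakes: str,
                         confidence_threshold: int = 90) -> str:
    """Assemble an explicit reflect prompt: name the task, the stakes,
    and the confidence level below which deep reflection should trigger."""
    return (
        f"Reflect on the following output for the task: {task}\n"
        f"Stakes: {stakes}\n"
        f"Deep reflect if confidence is below {confidence_threshold}%.\n"
        f"--- DRAFT ---\n{draft}"
    )

prompt = build_reflect_prompt(
    task="deployment note review",
    draft="Rolled out v2 to all regions without canary.",
    stakes="production outage risk",
)
```

The point is the shape, not the wording: every field that a vague prompt like "Check this." omits is spelled out before the draft.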
Read these files first
Start with SKILL.md; it contains the core rules, identity, and triage logic that determine how the skill behaves. If you are adapting the skill in a larger kit, also inspect README.md, AGENTS.md, and any repo-wide policy files so the reflection step matches your actual workflow. In this repository, SKILL.md is the main source of truth.
Use the skill as a review gate
A practical reflect workflow is: draft the response, run reflect, then revise only the parts the review identifies as risky. Do not ask it to re-author everything unless the original output is unusable. The best use of reflect is a narrow one: verify claims, surface missing constraints, and decide whether the draft is safe to ship.
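The gate workflow above can be sketched in Python. Everything here is a hypothetical stand-in for your agent's own calls (review_gate, the reflect and revise callables, and the verdict strings are assumptions, not APIs the skill provides):

```python
from dataclasses import dataclass

@dataclass
class Review:
    verdict: str          # "pass", "revise", or "reject"
    risky_sections: list  # section names the review flagged

def review_gate(draft: dict, reflect: callable, revise: callable,
                max_rounds: int = 2) -> dict:
    """Run reflect as a gate: revise only the flagged sections, then re-check."""
    for _ in range(max_rounds):
        review = reflect(draft)
        if review.verdict == "pass":
            return draft
        if review.verdict == "reject":
            raise ValueError("draft rejected; re-author instead of patching")
        # Revise only what the review flagged, never the whole draft.
        for section in review.risky_sections:
            draft[section] = revise(section, draft[section])
    return draft
```

The design choice matters: the loop patches flagged sections and re-runs the review, rather than regenerating the draft, which preserves the parts that already passed.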
reflect skill FAQ
Is reflect a general writing prompt?
No. reflect is not meant to produce the first draft; it is meant to evaluate one. If you use it as a normal generation prompt, you lose the main advantage of the reflect skill: disciplined scrutiny after the fact.
When is reflect a bad fit?
It is a poor fit when there is no prior answer to assess, when the task is purely creative, or when you need broad ideation rather than rejection-focused review. It is also less useful if you cannot provide enough context to judge correctness or completeness.
Is reflect beginner-friendly?
Yes, if the user can provide a draft and a goal. You do not need to know the whole repository to use it, but you do need to say what "good" means. For beginners, the biggest win is simply making the review criteria explicit before calling reflect.
How does it compare with an ordinary prompt?
An ordinary prompt usually asks for a solution. reflect asks for a critique of that solution under uncertainty, with stronger attention to gaps and false confidence. That makes it better for QA, acceptance checks, and high-stakes outputs than for first-pass generation.
How to Improve reflect skill
Tighten the evidence you give it
The strongest reflect results come from concrete inputs: the original task, the draft answer, and the failure modes you fear most. If the work is technical, include constraints, edge cases, and the target audience. If the work is policy or editorial, include the rules it must satisfy.
Ask for the right depth
Use the skill’s confidence trigger deliberately. If the draft is simple, ask for a quick check; if it is ambiguous or high-risk, request deep reflection and explicit rejection criteria. This keeps reflect from overanalyzing easy cases or underchecking risky ones.
Watch for common failure modes
The usual problems are vague approval, ungrounded criticism, and missing verification against the actual constraints. To improve reflect's output, ask it to quote the specific issue, explain why it matters, and state whether the draft should pass unchanged, be revised, or be rejected.
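One way to enforce that output shape is to require each finding to carry a quote, a rationale, and a verdict. This Finding class is a hypothetical sketch of such a contract, not something the skill defines:

```python
from dataclasses import dataclass

VERDICTS = {"pass", "revise", "reject"}

@dataclass
class Finding:
    quote: str      # the exact text where the issue appears
    rationale: str  # why it matters against the stated constraints
    verdict: str    # "pass", "revise", or "reject"

    def __post_init__(self):
        # Reject vague approval and ungrounded criticism up front.
        if not self.quote.strip():
            raise ValueError("a finding must quote the specific issue")
        if self.verdict not in VERDICTS:
            raise ValueError(f"verdict must be one of {VERDICTS}")
```

A finding that cannot fill all three fields is exactly the vague approval or ungrounded criticism the section warns about.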
Iterate after the first review
Treat the first reflection as a triage pass, not the final verdict. Revise the draft, then rerun reflect on the updated version to confirm the fixes actually closed the gaps. This is where the skill earns its value: fewer false approvals, clearer rework, and a stronger final gate.
