distill-mentor
by ybq22

distill-mentor turns public academic data into a reusable mentor-style skill. It supports browser-first collection, deep paper analysis, bilingual output, and saved artifacts under `~/.claude/mentors/` and `~/.claude/skills/`.
This skill scores 68/100: it is listable for directory users because it describes a real, user-invocable workflow with meaningful outputs, but adopters should expect some operational guesswork and repo inconsistency before installing.
- SKILL.md gives explicit trigger phrases, argument format, allowed tools, and expected outputs under `~/.claude/mentors/` and `~/.claude/skills/`.
- The repo includes substantial workflow documentation beyond a stub, including `QUICKSTART.md`, usage guides, changelog notes, and examples of browser-search and deep-analysis behavior.
- It offers concrete agent leverage over a generic prompt by defining a multi-step mentor distillation process: collect sources, analyze papers/style, score data quality, and generate a conversational mentor skill.
- Install and execution clarity is uneven: structural signals show no install command in `SKILL.md`, while docs reference scripts like `test-puppeteer.js` and `test-comprehensive-search.js` that are not visible in the provided tree.
- Trustworthiness is reduced by internal inconsistencies such as the repo slug `supervisor` vs skill name `distill-mentor`, plus docs claiming production readiness and file paths/scripts that do not fully align with the visible repository layout.
Overview of distill-mentor skill
What distill-mentor does
The distill-mentor skill turns a real academic mentor into a reusable AI persona by collecting public information, analyzing papers and style, and generating a mentor-style skill you can talk to later. It is built for users who want more than a one-off prompt: students comparing advisors, researchers studying a lab’s research taste, and educators creating a shareable digital mentor.
Who should install distill-mentor skill
This distill-mentor skill is best if you need structured mentor synthesis, not just a summary. It fits users who care about research direction, methodology preferences, communication style, and academic philosophy. If you only need a quick bio or a paper list, a normal prompt is faster. If you want an artifact saved to ~/.claude/mentors/ and a generated skill under ~/.claude/skills/, this is a better fit.
What makes it different
The main differentiator is depth. The repository documents a browser-first collection flow, fallback search behavior, bilingual support, and deeper paper analysis in docs/DEEP_ANALYSIS_GUIDE.md. Compared with generic prompting, distill-mentor for Agent Orchestration gives you a defined trigger, expected outputs, and a repeatable workflow for creating mentor-like assistants from public evidence rather than ad hoc imitation.
How to Use distill-mentor skill
distill-mentor install and first run
In Claude Code or a compatible skill runtime, add the repo and invoke the skill directly. A practical starting point is:
```shell
npx skills add ybq22/supervisor/distill-mentor
/distill-mentor "Geoffrey Hinton" --affiliation "University of Toronto"
```

Optional quick mode:

```shell
/distill-mentor "Geoffrey Hinton" --no-browser
```
The documented default is browser search, falling back to DuckDuckGo-style collection if browser search fails. The repo notes Node.js >= 18, and the browser path may pull in Chromium via puppeteer, which matters for environment size and CI-like installs.
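The environment requirement above can be checked before the first run. A minimal preflight sketch, assuming only the documented Node.js >= 18 requirement; the helper itself is not part of the repo:

```shell
# Preflight sketch (hypothetical helper, not shipped with the repo).
# The repo docs note Node.js >= 18; the browser path may also download
# Chromium via puppeteer on first use.
required_major=18
if command -v node >/dev/null 2>&1; then
  # node --version prints e.g. "v20.11.1"; keep the major version only
  major=$(node --version | sed 's/^v//' | cut -d. -f1)
  if [ "$major" -ge "$required_major" ]; then
    status="ok: node v$major meets >= $required_major"
  else
    status="upgrade needed: node v$major < $required_major"
  fi
else
  status="node not found: install Node.js >= $required_major first"
fi
echo "$status"
```

Running a check like this before `npx skills add` avoids a partial install that only fails later, when puppeteer tries to launch a browser.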
Inputs that improve distill-mentor usage
The skill works best when you provide:
- full mentor name
- affiliation when the name is ambiguous
- language context in your first message
- your actual job-to-be-done
A weak prompt: "distill Geoffrey Hinton".

A stronger prompt: "Create a distill-mentor profile for Geoffrey Hinton at the University of Toronto. I care most about his research evolution, supervision style, and how he frames risky ideas for PhD students."
That stronger input improves retrieval disambiguation and tells the analyzers what to emphasize in the generated mentor persona.
Best workflow and files to read first
For a fast adoption decision, read in this order:
- `QUICKSTART.md` for commands, modes, output paths, and quality scoring
- `SKILL.md` for trigger conditions, allowed tools, and runtime behavior
- `docs/DEEP_ANALYSIS_GUIDE.md` for what “deep analysis” actually extracts
- `docs/CHANGELOG.md` to understand the browser-first shift and the `--no-browser` flag
Then inspect `prompts/intake.md`, `prompts/analyzer.md`, `prompts/style-analyzer.md`, `prompts/deep-paper-analyzer.md`, and `prompts/builder.md` if you want to tune outputs rather than just run the default flow.
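Those prompt files can be skimmed in one pass. A quick sketch, assuming the `prompts/` paths listed above exist in your checkout; verify each against the actual tree, since the docs and the visible layout do not fully align:

```shell
# Skim the staged prompt files before tuning (paths taken from the repo docs;
# each file's presence should be verified against the actual tree).
for f in prompts/intake.md prompts/analyzer.md prompts/style-analyzer.md \
         prompts/deep-paper-analyzer.md prompts/builder.md; do
  if [ -f "$f" ]; then
    echo "== $f =="
    head -n 5 "$f"   # the first lines usually reveal the stage's role
  else
    echo "missing: $f"
  fi
done
```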
Practical constraints and output expectations
Expect two tradeoffs. First, quality depends on public footprint: well-known academics with papers, talks, and homepage material produce better results than low-visibility mentors. Second, browser-based collection is slower but richer; `--no-browser` is quicker but less complete. The repo’s own quickstart frames quality as data-dependent, so if a mentor scores low or outputs feel generic, provide affiliation, known papers, or extra source context before judging the skill.
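After a run, the saved artifacts are easy to audit. A sketch using the output directories documented in `SKILL.md`; the filenames inside each directory are per-mentor, so none are assumed here:

```shell
# Inspect saved artifacts after a run. Output directories come from SKILL.md;
# the files inside are per-mentor, so none are assumed by name.
mentors_dir="$HOME/.claude/mentors"
skills_dir="$HOME/.claude/skills"
for dir in "$mentors_dir" "$skills_dir"; do
  if [ -d "$dir" ]; then
    printf 'found %s:\n' "$dir"
    ls "$dir"
  else
    printf 'missing %s (no runs saved yet)\n' "$dir"
  fi
done
```

If both directories are missing after a seemingly successful run, that is a signal the runtime wrote elsewhere or the run failed silently.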
distill-mentor skill FAQ
Is distill-mentor better than a normal prompt?
Usually yes, when you need consistency and saved outputs. A generic prompt can imitate a mentor voice, but distill-mentor usage is stronger for evidence-backed synthesis because it separates intake, source gathering, paper analysis, style analysis, and skill building. That structure reduces guesswork and makes later reuse easier.
When should I not use distill-mentor skill?
Skip it if the target has little public material, if you need guaranteed factual completeness, or if your use case is simple summarization. It is also not the right tool for private institutional records unless you can legally and technically provide those materials through your own workflow.
Is it beginner-friendly?
Reasonably. The command surface is simple, especially from QUICKSTART.md. The main beginner friction is environment setup around browser search and understanding why one mentor produces better results than another. If you want the easiest path, test one famous researcher first, then move to less visible targets.
Does distill-mentor fit wider agent workflows?
Yes. distill-mentor for Agent Orchestration makes sense when one agent gathers evidence, another analyzes style, and another packages the result into a reusable mentor skill. The repo’s prompt files and staged analysis make it easier to split responsibilities than with a monolithic prompt.
How to Improve distill-mentor skill
Give distill-mentor richer disambiguation signals
The single highest-leverage improvement is better input. Add affiliation, field, a known paper, or a lab name when the mentor has a common name. Example: "Distill Fei-Fei Li, Stanford; focus on computer vision leadership, student-facing advice style, and how she connects technical work to broader impact." This reduces wrong-source retrieval and improves the generated mentor’s tone and priorities.
Steer for the output you actually need
Tell the skill what kind of mentor artifact you want:
- advisor-style critique
- research direction guidance
- writing feedback voice
- lab culture and philosophy
- methodology preferences
Without that, outputs can drift toward generic academic biography. The prompt files suggest the system can extract research themes, methodology, presentation style, and public presence, so specify which dimensions matter most for your downstream use.
Handle common failure modes early
Common issues are name ambiguity, thin evidence, overfitting to famous talks, and shallow style imitation from a few papers. If the first result feels broad but not mentor-like, switch from quick mode to default browser mode, add affiliation, and ask for emphasis on recent papers versus legacy reputation. If public web results dominate, anchor the run around paper analysis rather than biography.
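The escalation path above can be written down so a thin first pass has an obvious second step. A sketch that composes both invocations as strings; the flags come from the repo docs, but the exact slash-command syntax in your runtime is an assumption, which is why the commands are built rather than executed:

```shell
# Sketch: escalate from quick mode to full browser mode when output is generic.
# Flags (--no-browser, --affiliation) are documented; exact slash-command
# handling depends on your skill runtime, so the commands are built as strings.
mentor="Geoffrey Hinton"
affiliation="University of Toronto"
quick_cmd="/distill-mentor \"$mentor\" --no-browser"
full_cmd="/distill-mentor \"$mentor\" --affiliation \"$affiliation\""
echo "first pass:  $quick_cmd"
echo "second pass: $full_cmd"
```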
Iterate after the first output
The best distill-mentor workflow is two-pass:
- generate the initial mentor
- refine based on gaps
Useful follow-ups:
- "Rebuild this distill-mentor with more weight on recent publications from 2022 onward"
- "Reduce biography and increase supervision-style cues"
- "Compare methodological preferences across early, mid, and recent papers"
- "List weak evidence areas before regenerating the mentor skill"
This turns the skill from a one-shot generator into a controllable pipeline, which is where it beats ordinary prompting most clearly.
