autoskill
by K-Dense-AI

autoskill analyzes local Screenpipe activity to detect repeated research workflows, match them to existing scientific-agent-skills, and draft new skills or composition recipes. It is built for Skill Authoring and requires a running screenpipe daemon on port 3030; only redacted summaries are sent to the model. Use autoskill when you want evidence-based skill ideas drawn from real usage, not generic brainstorming.
This skill scores 78/100: a solid directory candidate, with a clear trigger, a real workflow, and enough operational detail that users can judge fit before installing. For directory users, it is worth a look if you want an agent to inspect local screen activity via screenpipe and propose new scientific-agent skills or composition recipes based on repeated work patterns.
- Explicit trigger and scope: it should be used when the user wants analysis of recent work and skill proposals based on observed workflows.
- Operationally clear dependency model: it requires a running local screenpipe daemon on port 3030 and says it will refuse to run if unavailable.
- Good agent leverage: it describes local detection with redacted cluster summaries to the LLM, which gives the agent a concrete process instead of a generic prompt.
- Adoption is gated on local infrastructure: users must already run screenpipe and provide one of the supported LLM backends or API keys.
- The repository evidence shows no support files or install command, so setup and usage may still require some manual interpretation despite the detailed SKILL.md.
Overview of autoskill skill
What autoskill does
autoskill analyzes your recent screen activity through Screenpipe, detects repeated research workflows, and turns those patterns into new skills or composition recipes. The autoskill skill is for Skill Authoring, not general note-taking: it is aimed at people who want to discover reusable workflows from their own behavior and capture them as installable skills.
Who it is for
Use autoskill if you already have a local Screenpipe setup and want to understand what you actually do often enough to merit a skill. It is most useful for power users, researchers, and skill maintainers who want evidence-based skill ideas instead of brainstorming from memory.
What makes it different
Unlike a generic prompt, autoskill depends on live local telemetry from screenpipe and refuses to run when that daemon is unavailable. That makes the autoskill install decision easy: if you want workflow mining from real usage, this is a fit; if you want a standalone writing assistant, it is not. Its main value is pattern detection plus skill matching, with only redacted summaries passed to the model.
How to Use autoskill skill
Install and runtime prerequisites
Install autoskill with:
npx skills add K-Dense-AI/claude-scientific-skills --skill autoskill
Before you run autoskill, confirm screenpipe is running locally on port 3030 and that your chosen LLM backend is configured. The skill expects authenticated access to http://localhost:3030 and to one LLM endpoint such as http://localhost:1234/v1, https://api.anthropic.com, or a BYOK Foundry gateway.
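If you want to script that preflight instead of checking by hand, a minimal sketch in TypeScript (Node 18+, for its built-in fetch) could look like this. The /health route is an assumption based on screenpipe's published local API, so verify it against your installed version:

```ts
// preflight.ts — assumed /health route; confirm against your screenpipe build.
const SCREENPIPE = "http://localhost:3030";

async function assertScreenpipeUp(): Promise<void> {
  try {
    const res = await fetch(`${SCREENPIPE}/health`);
    if (!res.ok) throw new Error(`health check returned ${res.status}`);
    console.log("screenpipe daemon reachable on port 3030");
  } catch (err) {
    // Mirror the skill's documented behavior: stop rather than guess.
    console.error("screenpipe not reachable, aborting:", err);
    process.exit(1);
  }
}

assertScreenpipeUp();
```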
Start with the right input
The best prompt for autoskill is specific about the period, workflow, or outcome you want analyzed. Strong input looks like: “Analyze my last 7 days of screen activity and identify repeated research workflows that could become new scientific-agent-skills.” Weak input like “suggest some skills” leaves too much room for shallow matches.
Best workflow for analysis
Begin by reading SKILL.md, then inspect README.md, AGENTS.md, metadata.json, and any rules/, resources/, references/, or scripts/ folders if they exist. In this repository, SKILL.md is the main source of truth, so the practical path is: verify prerequisites, run a short analysis request, then review the proposed skill or composition recipe for fit before you adopt it.
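To make “run a short analysis request” concrete, the sketch below pulls a bounded window of screen-text events from the local daemon. The /search route and its parameters (content_type, start_time, end_time, limit) are assumptions drawn from screenpipe's public API and may differ across versions:

```ts
// fetch-window.ts — assumed /search route and parameter names.
const BASE = "http://localhost:3030";

async function fetchLastDays(days: number): Promise<unknown> {
  const end = new Date();
  const start = new Date(end.getTime() - days * 24 * 60 * 60 * 1000);
  const params = new URLSearchParams({
    content_type: "ocr",             // screen text only, no audio
    start_time: start.toISOString(),
    end_time: end.toISOString(),
    limit: "500",
  });
  const res = await fetch(`${BASE}/search?${params}`);
  if (!res.ok) throw new Error(`search failed with status ${res.status}`);
  return res.json();                 // raw events, clustered locally afterwards
}

fetchLastDays(7).then((events) => console.log(events));
```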
What to provide for better output
Give autoskill the decision context it cannot infer: your target domain, the tools you use, the time window to inspect, and whether you want a new skill or a chain of existing ones. If you only want patterns from a single project, say so explicitly; if you want broader behavior mining, say that too. The more precise your boundaries, the better the skill matching and the less likely you are to get generic recommendations.
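One way to keep those boundaries explicit is to write them down before you prompt. The object below is purely illustrative, a checklist rather than any autoskill schema; all field names are hypothetical:

```ts
// request-context.ts — hypothetical checklist; autoskill takes natural-language prompts.
const analysisRequest = {
  domain: "computational biology literature review",
  tools: ["Zotero", "VS Code", "Chrome"],
  windowDays: 7,
  scope: "single-project" as "single-project" | "all-activity",
  output: "composition-recipe" as "new-skill" | "composition-recipe",
};

console.log(JSON.stringify(analysisRequest, null, 2));
```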
autoskill skill FAQ
Do I need Screenpipe to use autoskill?
Yes. autoskill has no alternate data source and depends on the local screenpipe daemon. If Screenpipe is not reachable, the skill should stop rather than guess.
Is autoskill a good fit for beginners?
It is usable by beginners who can install tools and describe a workflow goal, but it is most valuable when you already know what kind of reusable behavior you want to extract. If you are still learning prompt basics, a plain prompt will be simpler than installing autoskill.
How is this different from a normal prompt?
A normal prompt asks an LLM to invent ideas from text alone. autoskill is a workflow-discovery tool: it inspects real screen activity, clusters repeated actions, and maps them to existing skill patterns before drafting something new.
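As a rough illustration of the clustering step, here is a toy sketch that counts coarse (app, window title) keys and keeps only the frequent ones. The event shape is hypothetical; real screenpipe payloads differ:

```ts
// cluster-sketch.ts — toy frequency clustering over hypothetical events.
interface ActivityEvent {
  app: string;
  windowTitle: string;
  timestamp: string;
}

function clusterRepeats(events: ActivityEvent[], minCount = 5): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    // Coarse key: app plus a lowercased title prefix, so near-duplicates group.
    const key = `${e.app}::${e.windowTitle.slice(0, 40).toLowerCase()}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  // Only patterns seen at least minCount times qualify as workflow candidates.
  return new Map([...counts].filter(([, n]) => n >= minCount));
}
```

In the real skill, only a redacted summary of such clusters (keys and counts, not raw screen text) is what reaches the model.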
When should I not use autoskill?
Do not use autoskill if you want offline-only behavior without Screenpipe, if you are uncomfortable connecting a model to local activity summaries, or if you need a one-shot answer instead of a repeated workflow analysis.
How to Improve autoskill skill
Feed it narrower, measurable goals
The fastest way to improve autoskill results is to constrain the search space. Ask for one category at a time, such as literature review, source triage, citation cleanup, or drafting. Broad requests tend to produce vague patterns that are harder to turn into a useful skill.
Use your first output as a filter
Treat the first autoskill pass as candidate generation, not final truth. Review whether the proposed skill is actually repeated, whether it saves time, and whether it fits your environment. If not, rerun with a tighter time window, a different project, or a stricter definition of “repeated.”
Watch for common failure modes
The main failure mode is overgeneralization: a few unrelated actions get merged into a fake “workflow.” Another is under-specifying the target output, which leads to skill ideas that are hard to install or reuse. When that happens, add examples of what success looks like and what should be excluded.
Improve the prompt, not just the data
For autoskill for Skill Authoring, the most useful follow-up is to tell it how you want the resulting skill packaged: as a standalone skill, a composition recipe, or a skill that chains existing scientific-agent-skills. That simple instruction changes the shape of the output more than asking for “better suggestions” ever will.
