lesson-learned
by softaworks

lesson-learned analyzes Git diffs and recent commits to extract software engineering lessons grounded in real code changes. It loads `se-principles.md` first, maps changes to principles like SRP, DRY, and KISS, and works well for retrospectives, PR learning notes, and code review follow-up.
This skill scores 78/100, which means it is a solid directory listing candidate for users who want code-change reflection grounded in actual git history rather than generic advice. It is easy to trigger, has meaningful repository-backed analysis structure, and provides enough clarity to justify installation, though users should expect some setup and execution details to be inferred from the surrounding toolkit.
- Highly triggerable: the frontmatter and README give explicit trigger phrases and clear use cases tied to reflection on recent code changes.
- Grounded leverage beyond a generic prompt: it requires loading a principles catalog and uses git-history-based scope selection, diff review, and principle mapping from repository references.
- Trustworthy supporting material: dedicated references for software engineering principles and anti-patterns make the analysis more specific, balanced, and reproducible.
- No install or quick-start command is provided in SKILL.md, so adoption depends on users already understanding the host toolkit setup.
- Execution still requires agent judgment for scope inference and selective file reading; the excerpts show defaults and constraints, but not a tightly scripted end-to-end procedure.
Overview of lesson-learned skill
The lesson-learned skill turns recent Git activity into concrete software engineering takeaways. Instead of giving abstract advice, it inspects real diffs, commit history, and changed files, then maps what happened to named principles like SRP, DRY, KISS, YAGNI, and related anti-patterns. This makes lesson-learned most useful for developers who have already changed code and want to answer: what did this work teach us, what trade-off did we make, and what should we repeat or avoid next time?
Who the lesson-learned skill is for
Best fit readers are:
- developers finishing a feature, refactor, bug fix, or cleanup
- reviewers who want a learning-oriented summary after a PR
- team leads building lightweight engineering reflection habits
- agents that need to explain the principle behind recent code changes
If you want design reflection grounded in actual repository history, the lesson-learned skill is stronger than a generic “review this code” prompt.
What job this skill actually does
The core job is not code review in the usual pass/fail sense. lesson-learned looks backward at completed or in-progress work and extracts 1–3 lessons supported by the diff. Good outputs usually include:
- the principle name
- how the change demonstrates it
- why it matters
- a next-step recommendation
That framing makes it especially useful for retrospection, mentoring, and PR learning notes.
What differentiates lesson-learned from a generic prompt
Two things matter most:
- It is Git-history-driven, so it analyzes real changes instead of hypothetical snippets.
- It requires a principle catalog, especially references/se-principles.md, which gives the model a vocabulary for naming patterns consistently.
That combination helps the skill produce lessons that feel earned by the code, not pasted from a software engineering textbook.
When not to choose lesson-learned
Skip lesson-learned if your actual goal is:
- line-by-line bug finding before merge
- security auditing
- style-only lint feedback
- architecture planning without any code changes yet
- reviewing a large codebase with no clear scope
In those cases, a code review, security, or design skill is usually a better first tool.
How to Use lesson-learned skill
lesson-learned install context
The repository does not publish a dedicated install command inside skills/lesson-learned/SKILL.md, so installation depends on how you load skills from softaworks/agent-toolkit. If your environment supports adding a skill from that repository, the common pattern is:
npx skills add softaworks/agent-toolkit --skill lesson-learned
If your agent loads skills directly from the repo, use the skill path:
skills/lesson-learned
Either way, treat SKILL.md as the runtime behavior spec, not just the README.
Read these files before first use
For a fast, low-guesswork start, read files in this order:
- skills/lesson-learned/SKILL.md
- skills/lesson-learned/references/se-principles.md
- skills/lesson-learned/references/anti-patterns.md
- skills/lesson-learned/README.md
The most important adoption detail is easy to miss: the skill explicitly says not to proceed before loading se-principles.md.
What input the lesson-learned skill needs
lesson-learned usage works best when the model can access:
- a repository with Git history
- the current branch name or a named comparison target like main
- a commit range, commit SHA, branch diff, or working tree diff
- enough file context to inspect the most changed files
Without Git context, the output becomes generic very quickly.
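A quick pre-flight check can confirm that context is actually available before invoking the skill. The sketch below builds a throwaway repo purely so the commands have something to run against; in real use you would run only the last three checks inside your own repository (`git branch --show-current` assumes Git 2.22 or newer).

```shell
# Throwaway repo for demonstration only; skip this setup in a real repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# The actual pre-flight checks:
git rev-parse --is-inside-work-tree   # prints "true" when history is available
git branch --show-current             # branch name used for scope selection
git log --oneline -1                  # recent history to anchor the analysis
```

If any of these fail, fix the Git context first rather than prompting around it.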
Choose the right analysis scope first
This skill is only as good as the scope you give it. The repository defines practical defaults:
- feature branch: compare branch work to main
- main branch: analyze the last 5 commits
- specific commit: inspect one SHA
- working changes: inspect unstaged and staged diffs
A good lesson-learned guide starts by forcing that choice early. If the scope is fuzzy, the result usually mixes unrelated lessons.
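The four scope choices map onto ordinary Git views. The sketch below is a minimal demonstration against a throwaway repo; the branch names and commit messages are invented, and in your own repo you would run only the scope command that matches your situation.

```shell
# Synthetic repo so each scope command has something to show.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git checkout -q -b feature
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "feature work"

git log main..HEAD --oneline    # feature branch scope: work not yet on main
git log --oneline -5            # main branch scope: the last 5 commits
git show --stat HEAD            # specific commit scope: one SHA
echo draft > notes.txt && git add notes.txt
git diff --cached --stat        # working-changes scope: staged diff
```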
Useful Git commands for better lesson-learned usage
The skill’s own workflow centers on common Git views such as:
- git log main..HEAD --oneline
- git diff main...HEAD
- git log --oneline -5
- git diff HEAD~5..HEAD
- git show <sha>
- git diff
- git diff --cached
You do not need every command every time. Pick the one that matches the story you want the skill to explain.
Turn a rough request into a strong prompt
Weak prompt:
“Reflect on my recent work.”
Stronger prompt:
“Use lesson-learned on my feature branch versus main. Read references/se-principles.md first. Focus on the 3 files with the largest behavioral changes. Give me 2 lessons grounded in the diff, each with the principle name, code evidence, trade-off, and one thing I should repeat in future PRs.”
Why this works:
- it defines scope
- it names the reference file the skill depends on
- it limits the surface area
- it specifies the output shape
Prompt pattern: lesson-learned for Code Review
lesson-learned for Code Review works best as a reflection layer after normal review, not a replacement for it. A practical prompt is:
“Run lesson-learned on this PR branch against main. Summarize the engineering lesson behind the changes, not just defects. Highlight 1 positive principle demonstrated, 1 anti-pattern risk if relevant, and cite the changed files that support each point.”
This is useful when you want a review comment that teaches, not only blocks.
Suggested output format to ask for
Ask for a compact structure such as:
- Lesson
- Principle
- Evidence from changes
- Why it matters
- Next step
This aligns with the repository’s intent and reduces generic filler.
How to handle large diffs
For large PRs, do not ask the skill to “analyze everything.” Instead:
- identify the most changed files
- cluster changes by theme
- ignore obvious mechanical edits
- ask for 1–3 lessons only
The skill is best at extracting patterns, not exhaustively cataloging every file change.
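One hedged way to identify the most changed files is to rank them by churn before prompting. The pipeline at the end of this sketch is the useful part; the synthetic repo above it exists only so the example runs end to end, and in practice you would point `git diff --numstat` at your real comparison target.

```shell
# Synthetic repo: one big change and one small change on a feature branch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git checkout -q -b feature
printf 'a\nb\nc\nd\n' > big.txt      # four added lines
printf 'x\n' > small.txt             # one added line
git add .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "feature changes"

# Rank changed files by churn (lines added + deleted), keep the top 3.
git diff --numstat main...HEAD \
  | awk '{print $1 + $2, $3}' \
  | sort -rn \
  | head -3 \
  | awk '{print $2}'
```

Feed the resulting file list into the prompt instead of asking the skill to scan the whole diff.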
Common workflow that saves time
A reliable workflow is:
- load se-principles.md
- choose scope
- inspect Git log and diff
- read the most changed files
- optionally load anti-patterns.md
- generate 1–3 lessons with evidence
- refine if the result is too broad or too moralizing
This sequence matters because the principle catalog gives the analysis a stronger vocabulary.
lesson-learned skill FAQ
Is lesson-learned good for beginners?
Yes, if the beginner has real changes to analyze. The skill explains principles through work they just did, which is often easier to absorb than reading theory first. It is less useful for someone with no repo access or no recent diffs.
Is lesson-learned the same as code review?
No. lesson-learned is retrospective and principle-oriented. Code review is usually correctness-, risk-, and maintainability-oriented. There is overlap, but the output goal is different.
Does the lesson-learned skill need Git access?
For strong results, yes. The repository is designed around Git history and diffs. If you only paste a code snippet, the model can still comment on principles, but it is no longer using the skill in its strongest mode.
What makes lesson-learned better than an ordinary prompt?
The advantage is structure: explicit scope selection, required principle references, and a workflow that ties lessons back to concrete code signals. Ordinary prompts often jump straight to generic “best practices.”
Can I use lesson-learned on uncommitted changes?
Yes. The skill supports working changes through git diff and git diff --cached. That is useful before commit when you want to understand the lesson or trade-off in what you are about to ship.
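The difference between the two views is easy to confirm. This is a minimal sketch against a throwaway repo: an edit shows up in `git diff` while unstaged, then moves to `git diff --cached` once staged.

```shell
# Throwaway repo so the demo is self-contained.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git checkout -q -b main
printf 'one\n' > notes.txt
git add notes.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "init"

printf 'two\n' >> notes.txt
git diff --stat            # unstaged edits only
git add notes.txt
git diff --cached --stat   # the same edit, now staged
```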
When is lesson-learned a poor fit?
Avoid it when:
- there are no meaningful recent changes
- the diff is mostly generated or formatting noise
- you need defect triage more than reflection
- the branch combines many unrelated tasks
In those cases, narrow the scope or use another skill first.
How to Improve lesson-learned skill
Give lesson-learned a narrower story
The biggest quality lever is scoping. “My last month of work” is too broad. “This refactor that split API calls from UI rendering” is better. Narrow scope leads to lessons with sharper principles and stronger evidence.
Load the principles reference every time
The repository is unusually explicit here: references/se-principles.md should be loaded before analysis. If you skip it, the model may still produce observations, but it is less likely to label patterns consistently or connect them to recognized principles.
Use anti-patterns for balance, not negativity
references/anti-patterns.md is most helpful when the diff contains risk signals such as scattered edits, over-abstraction, or growing “god” modules. Ask for gentle phrasing so the result stays useful instead of sounding punitive.
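A hedged heuristic, not part of the skill itself: files touched by many recent commits can hint at scattered edits or a growing "god" module worth checking against anti-patterns.md. The repo built below is synthetic; in your own repo, run only the final pipeline.

```shell
# Synthetic repo: one file edited repeatedly, one edited once.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git checkout -q -b main
for i in 1 2 3; do
  echo "$i" >> hot.txt
  git add hot.txt
  git -c user.email=demo@example.com -c user.name=demo commit -q -m "edit $i"
done
echo once > cold.txt
git add cold.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "add cold"

# Count how often each file appears in the last 20 commits.
git log --format= --name-only -20 | grep . | sort | uniq -c | sort -rn | head -5
```

Files at the top of that count are good candidates for a gentle anti-pattern check.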
Ask for evidence tied to changed files
A common failure mode is high-level advice with no proof. Improve lesson-learned output by asking for:
- changed files involved
- what structural change occurred
- what trade-off it implies
- why that maps to a specific principle
Evidence is what separates a real lesson from generic commentary.
Limit the number of lessons
More lessons usually means weaker lessons. Ask for 1–3 takeaways only. That forces prioritization and makes the output more believable and easier to use in PR notes, retros, or coaching.
Tell the skill what kind of lesson you want
You can steer the analysis by adding a lens:
- maintainability lesson
- refactoring lesson
- bug-fix lesson
- design trade-off lesson
- team process lesson tied to code changes
This improves relevance without fighting the skill’s intended workflow.
Correct generic first drafts with a second pass
If the first result is vague, do not rerun from scratch immediately. Instead ask:
- “Tie each lesson to a specific file or diff hunk.”
- “Replace general advice with what this branch actually demonstrates.”
- “Name the principle only if the code evidence clearly supports it.”
- “Drop any lesson that is not visible in the diff.”
This usually upgrades the output faster than broad re-prompting.
Watch for these lesson-learned failure modes
Typical weak outputs include:
- principles named without code evidence
- too many lessons from one diff
- confusing code review defects with learning takeaways
- moralizing language instead of practical trade-offs
- trying to summarize unrelated changes as one lesson
Spotting these early makes iteration straightforward.
Best improvement path for repeated team use
If a team plans to use lesson-learned regularly, standardize a prompt template with:
- scope rule
- comparison target
- maximum number of lessons
- required evidence format
- optional anti-pattern check
That reduces inconsistency and makes the lesson-learned skill much more dependable across PRs and retrospectives.
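One way to standardize is to check a template file into the repo itself. The filename and wording below are invented, not part of the toolkit; each field mirrors the list above.

```shell
# Write a hypothetical team template; adjust fields to your conventions.
set -e
tmp=$(mktemp -d) && cd "$tmp"
cat > lesson-learned-template.md <<'EOF'
Use lesson-learned with:
- Scope: <feature branch | last 5 commits | one SHA | working changes>
- Comparison target: main
- Maximum lessons: 2
- Evidence format: principle name + changed files + trade-off
- Anti-pattern check: optional, phrased gently
EOF
cat lesson-learned-template.md
```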
