optimize
by pbakaus
The optimize skill helps diagnose and improve UI performance across load time, rendering, animations, images, and bundle size. Use it to measure bottlenecks, prioritize fixes, and verify gains for web apps and interactive frontends.
This skill scores 68/100, which means it clears the bar for listing as a useful but somewhat lightweight optimization guide. Directory users get a clear trigger surface and practical performance topics to inspect, but should expect to supply their own tooling, measurements, and project-specific execution details.
- Strong triggerability: the description clearly maps to user intents like slow, laggy, janky, bundle size, and load time.
- Covers a real workflow: it tells agents to measure first, identify bottlenecks, and optimize across images, rendering, animations, and bundle size.
- Includes concrete optimization examples such as responsive images and modern formats, giving more actionable guidance than a generic 'make it faster' prompt.
- Operational clarity is limited by lack of support files, scripts, or repo-specific references, so agents must infer how to measure and apply fixes in the target project.
- The skill appears advisory rather than executable: no install command, quick-start procedure, or validation checklist is provided beyond general 'measure before and after' guidance.
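The responsive-images example mentioned above can be made concrete with a small helper that builds a width-descriptor srcset string. The `?w=` URL scheme is hypothetical, standing in for whatever your image pipeline or CDN actually accepts:

```javascript
// Build a width-descriptor srcset for a hypothetical image service that
// accepts a ?w= query parameter; the URL scheme is illustrative only.
function buildSrcset(baseUrl, widths) {
  return widths.map((w) => `${baseUrl}?w=${w} ${w}w`).join(", ");
}

console.log(buildSrcset("/img/hero.avif", [480, 960, 1920]));
// "/img/hero.avif?w=480 480w, /img/hero.avif?w=960 960w, /img/hero.avif?w=1920 1920w"
```

Paired with a `sizes` attribute, this lets the browser pick the smallest adequate variant instead of always downloading the largest image.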
Overview of optimize skill
What the optimize skill does
The optimize skill is a focused performance optimization guide for UI work. It helps an agent diagnose why an interface feels slow, laggy, janky, or heavy, then propose targeted fixes across loading, rendering, animations, images, and bundle size. If you want focused performance optimization rather than general code review, this skill is a good fit.
Who should install optimize
Install optimize if you build web apps, product UIs, landing pages, dashboards, or interactive frontends and need practical help turning “this feels slow” into measurable fixes. It is most useful for developers, product engineers, and AI-assisted coding workflows where performance problems are visible but root causes are unclear.
Real job-to-be-done
Users typically do not want theory; they want to know:
- what is actually slow
- how to measure it
- what likely causes it
- which fixes matter first
- how to verify improvement after changes
The optimize skill is built around that workflow, not around generic performance tips.
Why this skill is different from a normal prompt
A plain prompt often jumps straight to guesses. optimize is better because it starts with measurement, bottleneck isolation, and prioritization before suggesting fixes. That reduces premature optimization and makes the output more actionable for real projects.
What is and is not included
This skill is narrow and useful: it gives a structured path for frontend performance diagnosis and remediation. It does not ship scripts, benchmarks, or framework-specific automation in this repo snapshot, so expect guidance and decision support rather than turnkey tooling.
How to Use optimize skill
optimize install and invocation
Install the skill with:
npx skills add https://github.com/pbakaus/impeccable --skill optimize
Then invoke it by asking your agent to use optimize with a target, such as a page, flow, component, or app area:
Use optimize on our homepage load performance
Use optimize for checkout jank on mobile
Use optimize on the dashboard bundle size
Best install context before first run
The repo evidence shows only SKILL.md, so your practical setup matters more than repo exploration. Before using optimize, gather:
- the affected URL, route, or component
- device context: desktop, low-end mobile, slow network, specific browser
- symptoms: slow load, input lag, dropped frames, CLS, oversized bundle
- any measurements you already have from Lighthouse, DevTools, RUM, or profiler output
Without this context, the skill can still help, but recommendations will be broader and less reliable.
Read this file first
Start with:
SKILL.md
Because this skill is a single-file guide, there are no supporting rules or resources to study first. That is good for quick adoption, but it also means you should provide more project-specific evidence in your prompt.
What input optimize needs to work well
Strong optimize usage depends on concrete evidence. The best inputs include:
- current metrics: LCP, INP/FID, CLS, FCP, TTI, FPS, memory, CPU
- scope: one page, one interaction, one animation, or one build artifact
- suspected cause, if any
- constraints: no framework migration, no CDN change, no image pipeline change, etc.
- success target: “reduce LCP below 2.5s on mobile” is better than “make it faster”
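The inputs above can be thought of as one context object you assemble before invoking the skill. A minimal sketch, assuming field names of our own invention (the skill defines no formal schema):

```javascript
// Hypothetical context object bundling the evidence optimize works best with.
// Every field name here is illustrative, not part of the skill's API.
const optimizeContext = {
  scope: "product page /shop/widgets",             // one page, interaction, or artifact
  metrics: { lcpMs: 4100, cls: 0.18, inpMs: 350 }, // current lab measurements
  suspectedCause: "2.4MB hero image",
  constraints: ["no framework migration", "no CDN change"],
  successTarget: { metric: "lcpMs", threshold: 2500 }, // "LCP below 2.5s on mobile"
};

// A numeric target like this is checkable, unlike "make it faster".
const meetsTarget =
  optimizeContext.metrics[optimizeContext.successTarget.metric] <=
  optimizeContext.successTarget.threshold;
console.log(meetsTarget); // false: LCP is 4100ms against a 2500ms target
```

Writing the target down as a metric plus threshold also gives you the pass/fail check to rerun after each fix.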
Turn a rough request into a strong optimize prompt
Weak:
Make my app faster
Stronger:
Use optimize on our React product page. Mobile Lighthouse shows LCP 4.1s, CLS 0.18, bundle is 1.2MB JS, hero image is 2.4MB, and filtering interactions feel laggy on low-end Android. Prioritize fixes by impact and implementation cost, explain likely causes, and suggest how to re-measure after each change.
Why this works:
- it defines the target
- it includes measurements
- it narrows the platform
- it asks for prioritization, not a dump of tips
A practical optimize workflow
A good optimize guide usually follows this sequence:
- Measure baseline.
- Separate load issues from runtime issues.
- Identify the largest bottleneck.
- Apply the highest-impact fix first.
- Re-measure.
- Only then move to secondary improvements.
This mirrors the skill’s strongest advice: measure before and after, and do not optimize blindly.
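The measure, fix, re-measure loop above can be sketched as a simple snapshot comparison, assuming you record the same lab metrics before and after each change (metric names and values are illustrative):

```javascript
// Compare a baseline metrics snapshot against a post-fix snapshot.
// For timing metrics, CLS, and payload sizes, lower is better.
function compareRuns(baseline, after) {
  const report = {};
  for (const [metric, before] of Object.entries(baseline)) {
    const now = after[metric];
    report[metric] = {
      before,
      after: now,
      improved: now < before,
      deltaPct: Math.round(((now - before) / before) * 100),
    };
  }
  return report;
}

const baseline = { lcpMs: 4100, cls: 0.18, totalJsKb: 1200 };
const afterImageFix = { lcpMs: 2700, cls: 0.18, totalJsKb: 1200 };

const report = compareRuns(baseline, afterImageFix);
console.log(report.lcpMs.improved);  // true
console.log(report.lcpMs.deltaPct); // -34 (LCP dropped roughly 34%)
```

A report like this also makes it obvious when a fix moved nothing, which is exactly the signal that you fixed the wrong bottleneck.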
What kinds of problems optimize handles best
The skill is especially useful for:
- slow initial page load
- image-heavy pages
- large JavaScript or CSS payloads
- sluggish interactions
- animation stutter
- layout thrashing and reflow-driven jank
- too many network requests
These are the areas most clearly covered in the source material.
What output to ask the skill for
To improve decision quality, ask optimize for a structured response:
- diagnosis
- ranked bottlenecks
- recommended fixes
- expected impact
- tradeoffs
- verification plan
That format is more useful than “list optimization ideas,” especially when you need to decide what to ship first.
Tips that materially improve optimize usage
Ask the skill to distinguish between:
- lab metrics vs real-user symptoms
- desktop vs mobile performance
- initial load vs repeat visits
- network-bound vs CPU-bound problems
- one-time expensive work vs repeated expensive work
These distinctions often change the correct fix. For example, image compression helps network-heavy pages, while layout thrash fixes help runtime smoothness.
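The network-bound versus CPU-bound distinction can be sketched as a rough triage heuristic. The thresholds loosely follow the Core Web Vitals "good" cutoffs (LCP at or under 2.5s, INP at or under 200ms), but the classification logic itself is an illustrative simplification, not part of the skill:

```javascript
// Rough triage: decide which lane to investigate first.
// Thresholds loosely follow Core Web Vitals "good" cutoffs; the
// mapping from metric to cause is a heuristic, not a guarantee.
function triage({ lcpMs, inpMs }) {
  const loadSlow = lcpMs > 2500;   // likely network/payload-bound
  const runtimeSlow = inpMs > 200; // likely CPU/main-thread-bound
  if (loadSlow && runtimeSlow) return "both: fix loading first, then runtime";
  if (loadSlow) return "network-bound: images, payloads, request count";
  if (runtimeSlow) return "CPU-bound: long tasks, layout thrash, animation cost";
  return "within targets: profile before optimizing further";
}

console.log(triage({ lcpMs: 4100, inpMs: 120 }));
// "network-bound: images, payloads, request count"
```

Even this crude split prevents the most common mistake: shipping an image-compression fix for a page whose real problem is main-thread jank.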
Common fit constraints
This skill is guidance-first, not ecosystem-deep. If you need exact framework internals, custom bundler commands, or automated patching, pair optimize with repo-specific context from your own codebase. The skill helps most when it has enough evidence to reason from, but not when you expect it to know your stack by default.
optimize skill FAQ
Is optimize good for beginners?
Yes, if you can provide symptoms and metrics. The skill’s structure is beginner-friendly because it starts with measurement and prioritization. But absolute beginners may still need help collecting DevTools or Lighthouse data before the best recommendations appear.
When should I use optimize instead of a normal coding prompt?
Use optimize when performance is the main job, not a side note. If the task is “fix jank,” “improve load time,” or “reduce bundle size,” the skill is better than a generic prompt because it is built around diagnosis-first performance work.
Does optimize replace profiling tools?
No. The optimize skill complements tools like Lighthouse and browser profilers; it does not replace them. It helps interpret findings, prioritize fixes, and convert raw signals into an action plan.
Is optimize only for web performance?
Based on the source, it is primarily aimed at UI and web-style performance concerns: Core Web Vitals, images, network payloads, rendering, and animations. It is not the right first choice for backend query tuning or infrastructure latency.
When is optimize a poor fit?
Skip optimize if:
- you do not have a specific UI target
- the problem is backend-only
- you want premature “best practices” without measurement
- you need framework-specific implementation details with no project context
In those cases, the output may be too generic to drive confident changes.
Does the repo include extra references or automation?
Not in the current snapshot. The repository evidence shows a single SKILL.md and no support folders. That keeps install simple, but it means your prompt quality and local measurements play a bigger role in results.
How to Improve optimize skill
Give optimize better evidence, not broader goals
The fastest way to improve optimize output is to supply sharper inputs:
- exact page or route
- metric values
- screenshots or copied profiler findings
- affected device/network
- recent regressions or suspected changes
“Homepage is slow” produces weaker advice than “mobile LCP regressed from 2.6s to 4.0s after adding autoplay video and a new analytics tag.”
Ask for prioritization by impact and effort
Performance work becomes noisy fast. Tell optimize to rank recommendations by:
- impact on user experience
- confidence level
- implementation effort
- risk of regression
This helps prevent low-value cleanup from crowding out major wins like oversized images or excessive JavaScript.
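One way to operationalize that ranking is a naive impact-over-effort score. The weights, the 1-to-5 scale, and the example fixes below are all made up for illustration:

```javascript
// Rank candidate fixes by a naive impact-to-effort ratio, boosted by
// confidence and discounted by regression risk. All inputs are on an
// arbitrary 1-5 scale; the formula is illustrative, not prescriptive.
function rankFixes(fixes) {
  return fixes
    .map((f) => ({
      ...f,
      score: (f.impact * f.confidence) / (f.effort * f.risk),
    }))
    .sort((a, b) => b.score - a.score);
}

const ranked = rankFixes([
  { name: "compress 2.4MB hero image", impact: 5, confidence: 5, effort: 1, risk: 1 },
  { name: "code-split vendor bundle",  impact: 4, confidence: 3, effort: 3, risk: 2 },
  { name: "micro-optimize a loop",     impact: 1, confidence: 2, effort: 2, risk: 1 },
]);

console.log(ranked[0].name); // "compress 2.4MB hero image" — big win, low cost
```

Asking optimize to fill in a table like this forces it to commit to relative priorities instead of listing every possible improvement.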
Separate loading fixes from rendering fixes
A common failure mode is mixing all performance advice together. Improve results by asking optimize to work one lane at a time:
- loading: images, payloads, request count, code splitting
- rendering: reflows, paint cost, animation strategy, main-thread work
That separation usually produces clearer next steps.
Provide constraints early
Tell the skill what you cannot change:
- no CDN migration
- no framework rewrite
- no image format change in this sprint
- must preserve animation behavior
- bundle must stay compatible with legacy browser targets
Constraints force more realistic recommendations and reduce wasted output.
Ask optimize to explain why each fix matters
If the first answer feels generic, ask for:
- the bottleneck each fix addresses
- the metric it should improve
- how to validate the gain
- any tradeoffs, like CPU vs bandwidth or smoothness vs fidelity
This makes the advice easier to trust and easier to implement correctly.
Iterate after the first pass
The best optimize guide workflow is iterative:
- get initial diagnosis
- apply one or two top fixes
- collect new measurements
- run optimize again with before/after data
That turns the skill from a one-shot suggestion engine into a practical optimization loop.
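That iterative loop can be sketched as a small driver that applies one fix at a time and keeps the measurement history, so each new optimize run gets before/after evidence. Here `measure` and `applyFix` are placeholders for your real tooling (a Lighthouse run, a profiler trace, a deploy step), and the numbers are fabricated for illustration:

```javascript
// Iterative optimization loop sketch. `measure` and `applyFix` stand in
// for real tooling; the loop just preserves a snapshot after every fix.
function optimizeLoop(measure, applyFix, fixes) {
  const history = [{ label: "baseline", metrics: measure() }];
  for (const fix of fixes) {
    applyFix(fix);
    history.push({ label: fix, metrics: measure() }); // re-measure after each fix
  }
  return history; // feed before/after pairs into the next optimize run
}

// Fake measurement state for illustration only.
let lcpMs = 4100;
const history = optimizeLoop(
  () => ({ lcpMs }),
  (fix) => { if (fix === "compress hero image") lcpMs -= 1400; },
  ["compress hero image"]
);

console.log(history.length);           // 2: baseline plus one post-fix snapshot
console.log(history[1].metrics.lcpMs); // 2700
```

Handing the resulting history back to the skill is what distinguishes a genuine optimization loop from repeated one-shot guessing.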
Common failure modes to avoid
Results are weaker when users:
- ask for “all performance improvements”
- provide no metrics
- mix backend, infra, and frontend issues in one request
- omit device and network context
- ask for fixes before confirming the bottleneck
The skill is strongest when the problem is bounded and evidence-backed.
How to get more implementation-ready output
If you want changes you can act on quickly, ask for:
- a top-3 fix list
- code-level examples for your stack
- a measurement checklist
- a rollback or verification plan
- “what to do first this week” rather than a full audit
That framing improves adoption because it turns advice into a shipping plan.
