Aider

Aider skills and workflows surfaced by the site skill importer.

1 skill
agent-eval

by affaan-m

agent-eval is a skill for benchmarking coding agents head-to-head on reproducible tasks, comparing pass rate, cost, time, and consistency. Use the agent-eval skill to evaluate Claude Code, Aider, Codex, or another agent in your own repo with clearer evidence than ad hoc prompting.
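The comparison agent-eval describes can be sketched as an aggregation over repeated runs. This is a minimal illustration, not agent-eval's actual implementation: the record fields, agent names, and numbers below are hypothetical, and consistency is approximated here as one minus the spread of pass/fail outcomes across repeats.

```python
from statistics import mean, pstdev

# Hypothetical run records: each agent attempts the same task set
# several times. Fields and values are illustrative only.
runs = {
    "aider": [
        {"passed": True, "cost_usd": 0.12, "seconds": 41},
        {"passed": True, "cost_usd": 0.10, "seconds": 38},
        {"passed": False, "cost_usd": 0.15, "seconds": 55},
    ],
    "claude-code": [
        {"passed": True, "cost_usd": 0.20, "seconds": 30},
        {"passed": True, "cost_usd": 0.22, "seconds": 33},
        {"passed": True, "cost_usd": 0.19, "seconds": 29},
    ],
}

def summarize(results):
    """Aggregate one agent's runs into the four headline metrics."""
    outcomes = [1.0 if r["passed"] else 0.0 for r in results]
    return {
        "pass_rate": mean(outcomes),
        "mean_cost_usd": mean(r["cost_usd"] for r in results),
        "mean_seconds": mean(r["seconds"] for r in results),
        # Lower spread in pass/fail across repeats = more consistent.
        "consistency": 1.0 - pstdev(outcomes),
    }

for agent, results in runs.items():
    print(agent, summarize(results))
```

Running all agents against the same task set in the same repo is what makes the numbers comparable; a single ad hoc prompt gives only one sample per agent.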

Model Evaluation
Favorites: 0 · GitHub: 156k