
cross-agent-delegation

by alinaqi

cross-agent-delegation helps Claude Code route work across Kimi and Codex for agent orchestration. It supports automatic tool detection, Codex auto-review after tests, and bounded Kimi delegation based on complexity signals. Use this cross-agent-delegation skill when you want a practical guide to install, activate, and use hidden routing in multi-CLI workflows.

Stars: 607
Favorites: 0
Comments: 0
Added: May 9, 2026
Category: Agent Orchestration
Install Command
npx skills add alinaqi/claude-bootstrap --skill cross-agent-delegation
Curation Score

This skill scores 68/100: worth listing, with clear caveats. Directory users get a real delegation workflow with enough structure to justify installing it, but should expect to spend time understanding the exact routing rules and operational boundaries.

Strengths
  • Concrete delegation logic for Claude, Kimi, and Codex, including when-to-use and tool detection steps.
  • Substantial, non-placeholder content with valid frontmatter, long body text, and multiple headings that outline an actual workflow.
  • Includes automatic Codex review behavior and complexity-based Kimi delegation, which can reduce manual prompting and improve agent leverage.
Cautions
  • No install command or supporting files, so adoption may require manual setup and interpretation of the SKILL.md instructions.
  • The excerpt shows detailed routing logic but limited evidence of broader examples, references, or reusable resources for quick onboarding.
Overview

Overview of cross-agent-delegation skill

The cross-agent-delegation skill helps Claude Code route work across multiple AI CLI tools instead of handling every step itself. It is built for users who want agent orchestration to be automatic: Claude stays the main interface, while Kimi and Codex are used behind the scenes when available and appropriate.

What this skill is for

Use the cross-agent-delegation skill when you want better task routing for mixed workloads: review after tests, bounded sub-tasks, and delegation decisions based on complexity rather than file count. It is especially relevant for teams already using Claude Code plus other tools and wanting a consistent orchestration layer.

What matters most

The key value is not “more agents” but better decision-making: detect available tools, score the task, delegate when the task fits, and keep review signals flowing back into Claude. That makes cross-agent-delegation useful for agent orchestration when you care about reliability, not just speed.

Best-fit users

This skill is a good fit if you manage repo changes, want automated post-test review, or frequently wonder whether a task should be pushed to Kimi or checked by Codex. It is less useful if you only use one model, or if your workflow cannot tolerate hidden routing logic.

How to Use cross-agent-delegation skill

Install and activation context

To install cross-agent-delegation, add the skill to your Claude Bootstrap setup and keep it available in sessions where Claude Code can see multiple CLI tools. The skill is designed to load whenever Claude, Kimi, and Codex may all be present, so installing it only helps if those executables are actually on PATH.
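Because activation depends on what is visible on PATH, a quick sanity check before relying on the automation is worthwhile. A minimal sketch; the executable names `claude`, `kimi`, and `codex` are assumptions, so match them to whatever binaries your environment actually installs:

```python
import shutil

# Executable names are assumptions; adjust to your actual installs.
TOOLS = ("claude", "kimi", "codex")

def detect_tools(tools=TOOLS):
    """Map each CLI name to whether it is currently on PATH."""
    return {name: shutil.which(name) is not None for name in tools}

if __name__ == "__main__":
    for name, present in detect_tools().items():
        print(f"{name}: {'found' if present else 'missing'}")
```

If any tool reports missing, the skill can only route to the ones that remain, so fix PATH before blaming the routing logic.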

Read first, then trust the routing

Start with SKILL.md, then inspect the main routing rules and any supporting repo docs if they appear in the branch you are using. In this repository, the most important information is concentrated in one file, so the fastest adoption path is understanding the tool-detection logic, the Codex stop-hook behavior, and the Kimi delegation criteria before you rely on the automation.
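The Codex stop-hook behavior amounts to a simple gate: review runs only after the test signal is green. A sketch of that control flow, assuming callables that stand in for the skill's actual hook wiring (which the excerpt does not show):

```python
def maybe_review(run_tests, trigger_review):
    """Gate a second-pass review on a passing test run.

    run_tests: callable returning True when the suite passes.
    trigger_review: callable that kicks off the external review step.
    """
    if not run_tests():
        return "tests-failed: fix before review"
    trigger_review()
    return "review-triggered"
```

The point of the gate is that review feedback is only meaningful on a diff that already passes its own tests; wiring review before the test run wastes a pass on known-broken code.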

How to prompt it well

A strong prompt for cross-agent-delegation gives Claude a task with clear scope, an expected output, and a risk level. Good inputs look like: “review this auth change for edge cases and summarize any blocking issues,” or “implement the smallest safe fix for this bug, then verify whether Codex review should run.” Weak inputs like “improve the codebase” leave delegation decisions too vague.

Workflow that improves results

Use the skill when the task can be separated into: detect tools, judge complexity, delegate bounded work, then review results. This workflow is strongest when you already know whether the task is testable, whether the change is localized, and whether a second-pass review would materially reduce risk. If your task is exploratory or highly ambiguous, direct orchestration may be safer than delegation.
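The detect → judge → delegate → review loop reduces to a single routing decision. The signal names, weights, and threshold below are illustrative assumptions, not the skill's actual criteria, which live in SKILL.md:

```python
# Illustrative weights; the skill's real complexity criteria are defined in SKILL.md.
WEIGHTS = {
    "touches_auth": 3,
    "cross_cutting": 2,
    "test_gaps": 2,
    "localized": -2,
}

def complexity_score(signals):
    """Sum the weights of whichever risk signals are present."""
    return sum(WEIGHTS[k] for k, v in signals.items() if v and k in WEIGHTS)

def route(signals, kimi_available, threshold=3):
    """Delegate bounded work only when Kimi is available and the task is complex enough."""
    if kimi_available and complexity_score(signals) >= threshold:
        return "delegate-to-kimi"
    return "handle-in-claude"
```

Note how availability is a hard precondition: a high-complexity task with no Kimi on PATH still stays in Claude, which matches the skill's "delegate when available and appropriate" framing.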

cross-agent-delegation skill FAQ

Is cross-agent-delegation beginner-friendly?

Yes, if you are already using Claude Code and want the skill to decide when to involve Kimi or Codex. It is less beginner-friendly if you expect a visible step-by-step manual process, because this skill is designed to run quietly in the background.

When should I not use it?

Do not rely on cross-agent-delegation if you only have one model available, if your environment blocks CLI access, or if you need deterministic single-agent behavior for compliance reasons. It also adds little value when the task is tiny and does not benefit from delegation or automated review.

How is it different from a normal prompt?

A normal prompt asks an assistant to do the work. This cross-agent-delegation skill adds routing behavior: it checks what tools are installed, decides whether a sub-task should be delegated, and can trigger Codex review automatically after tests pass.

Does it replace human judgment?

No. The skill improves orchestration, but you still need to provide the right task boundary and verify the output. The delegation logic is only as good as the task framing and the repo signals it can interpret.

How to Improve cross-agent-delegation skill

Give cleaner task boundaries

The biggest improvement comes from defining whether the task is review, implementation, or investigation. For cross-agent-delegation, tell Claude what “done” looks like, which files or behaviors matter, and whether the work should be optimized for safety, speed, or minimal change.

Provide better complexity signals

This skill scores complexity instead of counting files, so your prompt should surface risk factors that matter: auth paths, test gaps, cross-cutting behavior, or unclear side effects. A better request is “this touches authz checks and needs regression risk assessed” rather than “this touches three files.”
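To see why signal-based scoring beats file counting, compare a one-file authz change against a three-file rename under a toy scorer (the weights are assumptions for illustration only):

```python
# Toy scorer: risk signals matter, file count does not.
def risk_score(touches_auth=False, test_gaps=False, cross_cutting=False, files_changed=0):
    # files_changed is deliberately ignored; booleans coerce to 0/1.
    return 3 * touches_auth + 2 * test_gaps + 2 * cross_cutting

one_file_authz = risk_score(touches_auth=True, test_gaps=True, files_changed=1)
three_file_rename = risk_score(files_changed=3)
```

The single-file authz change outranks the wider rename, which is exactly the ordering a file-count heuristic would get backwards.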

Watch for common failure modes

The main failure mode is over-delegating a task that is actually broad, under-specified, or dependent on repository context. Another issue is assuming Codex auto-review can replace local test quality; it only adds value if the test signal is meaningful and the diff is already in a reviewable state.

Iterate after the first pass

If the first output is too shallow, refine the prompt with the missing constraint: expected test coverage, acceptable tradeoffs, or the exact review question. For high-quality results with cross-agent-delegation, treat the first pass as routing plus triage, then ask for a narrower second pass where the delegation target and success criteria are explicit.

Ratings & Reviews

No ratings yet