running-claude-code-via-litellm-copilot
by xixu-me
running-claude-code-via-litellm-copilot shows how to route Claude Code through a local LiteLLM proxy to GitHub Copilot, align ANTHROPIC_BASE_URL and model names, verify localhost traffic, and troubleshoot 401/403, model-not-found, and proxy compatibility issues.
This skill scores 78/100, which makes it a solid directory listing candidate for users who specifically want to route Claude Code through a local LiteLLM proxy to GitHub Copilot. The repository gives strong trigger cues, practical setup/troubleshooting intent, and explicit caveats that this is an advanced workaround rather than an official workflow, though install-time execution still requires some manual interpretation because there are no bundled scripts or install command.
- Very clear trigger conditions in frontmatter and "When To Use," including setup and troubleshooting cases like model-not-found, missing localhost traffic, and GitHub 401/403 errors.
- Operational guidance is substantial: the skill explains key compatibility rules such as using ANTHROPIC_BASE_URL, exact ANTHROPIC_MODEL matching, non-empty local auth token behavior, and drop_params: true.
- Trust signals are better than average for a guidance-only skill because it includes a separate doc-verified notes file that distinguishes article-based guidance from LiteLLM-doc-tightened updates.
- Adoption is less turnkey than it could be: SKILL.md has no install command and the repository includes no scripts, rules, or helper assets to reduce setup guesswork.
- The workflow is explicitly described as an advanced workaround with no promise of official GitHub support or long-term compatibility.
Overview of running-claude-code-via-litellm-copilot skill
The running-claude-code-via-litellm-copilot skill helps you set up a specific proxy workflow: keep Claude Code speaking its usual Anthropic-style API, but route the actual requests through a local LiteLLM server that forwards them to GitHub Copilot. This is mainly for people trying to reduce direct Anthropic API usage, test a cheaper setup, or troubleshoot why Claude Code is not reaching the intended backend.
Who this skill is best for
This running-claude-code-via-litellm-copilot skill is best for:
- developers already using Claude Code
- users comfortable editing environment variables and local config files
- people comparing direct Anthropic access vs a local LiteLLM proxy
- anyone debugging 401/403, model-not-found, or "Claude Code is not hitting localhost" issues
It is not a beginner-first introduction to Claude Code, LiteLLM, or GitHub Copilot.
The real job-to-be-done
Most users do not just want "a summary of the repo." They want a working path to:
- run Claude Code through LiteLLM,
- point LiteLLM at GitHub Copilot,
- make the model names line up exactly,
- verify traffic is actually flowing through the proxy,
- fix the common auth and compatibility failures fast.
That is where this skill is useful.
What makes this skill different
The useful differentiator is that it is guidance for a fragile integration, not a generic prompting layer. It emphasizes practical constraints that usually block adoption:
- `ANTHROPIC_BASE_URL` must point Claude Code at LiteLLM
- Claude Code still expects a non-empty Anthropic token locally
- LiteLLM should use the `github_copilot/<model>` provider pattern
- Claude Code's `ANTHROPIC_MODEL` must match the LiteLLM `model_name` exactly
- `drop_params: true` matters for compatibility
- first-run GitHub device authorization may appear only after the first real request
- you should confirm success by watching LiteLLM logs, not by assuming the config is correct
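To make the provider-pattern and `drop_params` rules concrete, here is a minimal sketch of a LiteLLM proxy config that follows them. The model names are placeholders, not values from the repository; adapt both to your setup, keeping in mind that `model_name` is the string Claude Code must send.

```yaml
model_list:
  - model_name: claude-sonnet-copilot      # placeholder; Claude Code's ANTHROPIC_MODEL must equal this exactly
    litellm_params:
      model: github_copilot/gpt-4o         # github_copilot/<model> provider pattern; backend model is a placeholder

litellm_settings:
  drop_params: true                        # drop request parameters the backend does not support
```

This is a sketch of the shape described above, not a verbatim config from the repository.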
Read this before deciding to install
Use running-claude-code-via-litellm-copilot if your main question is, "How do I make this proxy arrangement actually work on my machine?" Skip it if you only need ordinary Claude Code usage, direct Anthropic setup, or general Copilot documentation.
How to Use running-claude-code-via-litellm-copilot skill
Install the running-claude-code-via-litellm-copilot skill
Install from the skills repository:
npx skills add https://github.com/xixu-me/skills --skill running-claude-code-via-litellm-copilot
If your environment uses a different skill loader, add the skill from:
https://github.com/xixu-me/skills/tree/main/skills/running-claude-code-via-litellm-copilot
Read these files first
For this running-claude-code-via-litellm-copilot install, start with:
- `skills/running-claude-code-via-litellm-copilot/SKILL.md`
- `skills/running-claude-code-via-litellm-copilot/references/doc-verified-notes.md`
Why this order matters:
- `SKILL.md` gives the operating workflow and decision rules.
- `references/doc-verified-notes.md` explains what is anchored to the article and what was tightened against LiteLLM docs, which is important because this setup is compatibility-sensitive.
Know the minimum setup pieces
A successful setup usually needs four things aligned:
- Claude Code pointed at LiteLLM via `ANTHROPIC_BASE_URL`
- a non-empty local `ANTHROPIC_API_KEY` (or equivalent token value) so Claude Code will run
- LiteLLM configured to use `github_copilot/<model>`
- exact model-name matching between Claude Code and LiteLLM
If any one of these is off, the workflow often fails in a confusing way.
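The first two pieces can be set from the shell before anything persistent is edited. The values below are placeholders, assuming LiteLLM's common default port of 4000; substitute your own port and your actual LiteLLM `model_name`.

```shell
# Placeholder values -- adjust the port and model name to your own LiteLLM setup.
export ANTHROPIC_BASE_URL="http://localhost:4000"   # point Claude Code at the local LiteLLM proxy
export ANTHROPIC_API_KEY="sk-local-placeholder"     # non-empty so Claude Code runs; real auth happens in LiteLLM
export ANTHROPIC_MODEL="claude-sonnet-copilot"      # must equal your LiteLLM model_name exactly
```

Setting these in the current shell first makes the test reversible: close the terminal and nothing persists.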
What inputs the skill needs from you
To use the running-claude-code-via-litellm-copilot usage guidance well, provide:
- your OS and shell
- whether Claude Code is already installed and working
- whether LiteLLM is already installed and how you start it
- your current `ANTHROPIC_BASE_URL`
- your intended Copilot-backed model name
- the exact error text if the setup is failing
- whether you are willing to edit `~/.claude/settings.json` or shell profile files
Those details let the skill adapt commands instead of guessing.
Turn a rough goal into a strong prompt
Weak prompt:
Help me use Claude Code with LiteLLM and Copilot.
Stronger prompt:
I want Claude Code to send requests to a local LiteLLM proxy on macOS zsh, then forward to GitHub Copilot. Show the minimum config, the environment variables I need, how to set ANTHROPIC_BASE_URL, how to choose the exact ANTHROPIC_MODEL value so it matches LiteLLM model_name, and how to verify traffic in LiteLLM logs before editing persistent files.
Why this is better:
- names the OS and shell
- asks for the exact configuration chain
- calls out the model-matching issue up front
- requests safe verification before persistent edits
Suggested workflow for first-time setup
Use this order instead of editing everything at once:
- inspect current Claude Code and LiteLLM setup
- choose one target model
- configure LiteLLM with `github_copilot/<model>`
- set `drop_params: true` if needed for Anthropic-shaped request compatibility
- point Claude Code at LiteLLM using `ANTHROPIC_BASE_URL`
- set `ANTHROPIC_MODEL` to exactly match the LiteLLM `model_name`
- run one small request
- watch LiteLLM logs
- complete GitHub device authorization if prompted
- only then make persistent config changes
This reduces the chance of hiding the real failure behind multiple simultaneous edits.
The most important compatibility rule
In practice, the highest-value rule in the repository is this: Claude Code ANTHROPIC_MODEL must match LiteLLM model_name exactly.
Do not treat model naming as approximate. A close-looking mismatch is enough to break routing and produce misleading errors.
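A trivial guard makes this rule mechanical instead of visual. The two values below are hypothetical; paste in the real `model_name` from your LiteLLM config and the value you intend to export as `ANTHROPIC_MODEL`, and let the shell do the comparison:

```shell
# Hypothetical values -- replace with the exact strings from your own setup.
LITELLM_MODEL_NAME="claude-sonnet-copilot"   # model_name from the LiteLLM config
CLAUDE_CODE_MODEL="claude-sonnet-copilot"    # value you will set as ANTHROPIC_MODEL

if [ "$CLAUDE_CODE_MODEL" = "$LITELLM_MODEL_NAME" ]; then
  echo "match"
else
  echo "MISMATCH: expect model-not-found or misrouted requests" >&2
fi
```

A byte-for-byte comparison catches the near-miss names (trailing spaces, dashes vs dots, casing) that the eye skips over.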
How to verify the proxy is really working
Do not stop at "the command ran." Verify all of the following:
- Claude Code is targeting your local `ANTHROPIC_BASE_URL`
- LiteLLM receives the request in its logs
- the request is forwarded through the GitHub Copilot provider path
- the response returns through LiteLLM rather than direct Anthropic access
If there is no localhost traffic, the issue is usually earlier than Copilot auth.
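One way to check the log-based criteria above is to capture LiteLLM's output to a file and search it after a single test request. The file path and the exact log text are assumptions here, not guaranteed LiteLLM output; adjust the pattern to whatever your LiteLLM version actually prints when it routes to the Copilot provider.

```shell
# Assumption: LiteLLM was started with output captured to a file, e.g.
#   litellm --config config.yaml > litellm.log 2>&1
# After one small Claude Code request, look for evidence of provider routing:
if grep -qi "github_copilot" litellm.log 2>/dev/null; then
  echo "request reached the Copilot provider path"
else
  echo "no Copilot traffic logged yet -- check ANTHROPIC_BASE_URL first" >&2
fi
```

An empty log after a request means Claude Code never reached localhost, which is exactly the "issue is earlier than Copilot auth" case.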
Common failure patterns this skill helps with
This running-claude-code-via-litellm-copilot guide is especially useful for:
- `model not found` due to mismatched model names
- `401` or `403` during GitHub Copilot auth
- no traffic reaching LiteLLM
- Claude Code expecting an Anthropic token even though LiteLLM is the real backend
- compatibility issues caused by unsupported request parameters
These are exactly the kinds of problems where a generic prompt usually wastes time.
Use explanation mode vs execution mode
The upstream skill is explicit about two modes:
- explanation mode: give the smallest correct set of commands, files, and checks
- execution mode: inspect the active machine, adapt to shell and OS, and pause before persistent edits
That distinction matters. If you want hands-on setup help, say so clearly. If you only want a plan, ask for a non-destructive walkthrough first.
A practical prompt you can reuse
Use a prompt like this when invoking the skill:
Use the running-claude-code-via-litellm-copilot skill. I want a non-destructive setup plan for routing Claude Code through a local LiteLLM proxy to GitHub Copilot on Ubuntu bash. Please inspect the likely config points, show the exact variables and file paths to check, explain the github_copilot/<model> naming rule, call out where ANTHROPIC_MODEL must match LiteLLM model_name exactly, and give a verification checklist using LiteLLM logs before any persistent edits.
running-claude-code-via-litellm-copilot skill FAQ
Is running-claude-code-via-litellm-copilot suitable for beginners?
Usually only if you are comfortable with local proxies, env vars, and config debugging. The skill is well-targeted, but the workflow itself is still advanced and can fail for several small reasons.
What does this skill do better than a normal prompt?
A normal prompt may explain the idea. The running-claude-code-via-litellm-copilot skill is stronger when you need the exact routing assumptions, first-line troubleshooting rules, and setup order that prevent dead ends.
Does this skill guarantee GitHub Copilot support?
No. The source material frames this as a workaround, not an officially guaranteed GitHub workflow. Use it as practical implementation guidance, not as a promise of long-term compatibility.
When should I not use running-claude-code-via-litellm-copilot?
Do not use it if:
- you are fine with direct Anthropic setup
- you do not want a local proxy in the loop
- you need an officially supported enterprise integration path
- you are looking for general Claude Code onboarding rather than this specific routing pattern
Is this mainly about saving money?
Cost reduction is one motivation, but not the only one. Many users need it for routing control, backend substitution, or debugging why Claude Code is hitting the wrong endpoint.
What is the most likely setup blocker?
The top blocker is exact model-name mismatch between Claude Code and LiteLLM. After that, auth issues and missing localhost traffic are the next likely causes.
Does the skill include extra scripts or automation?
No major helper scripts are surfaced in the repository snapshot. This is a guidance-heavy skill, so expect to apply the instructions manually to your own machine and config.
How to Improve running-claude-code-via-litellm-copilot skill
Start with your current state, not the target state
To get better results from running-claude-code-via-litellm-copilot, tell the agent what already exists:
- installed tools
- current config files
- current env vars
- exact command you ran
- exact error output
This prevents the assistant from giving a clean-room setup when you actually need troubleshooting.
Ask for one-model setup first
Do not begin with multiple models or a broad "make everything work" request. Ask for one model, one endpoint, and one validation step. That narrows failures and makes logs interpretable.
Include the exact model strings
When asking for help, paste both:
- the LiteLLM `model_name`
- the Claude Code `ANTHROPIC_MODEL`
This is the fastest way to catch the most common breakage.
Request a verification-first plan
A strong request is:
Before suggesting persistent edits, give me a temporary test setup and a checklist to confirm Claude Code is reaching LiteLLM and LiteLLM is forwarding to GitHub Copilot.
That improves safety and reduces unnecessary config churn.
Share logs, not just symptoms
Bad:
It does not work.
Better:
Claude Code returns model not found. LiteLLM logs show no localhost request after I set ANTHROPIC_BASE_URL to ...
Best:
Claude Code returns model not found. My ANTHROPIC_MODEL is X, LiteLLM model_name is Y, and LiteLLM logs show the request arriving but failing after provider routing.
The skill performs better when you supply evidence from the failing layer.
Ask the agent to separate root cause from fix
This setup often produces stacked errors. Request output in this format:
- likely root cause
- exact file or variable to inspect
- minimal fix
- verification step
That structure makes the advice easier to execute and audit.
Use the reference notes when behavior seems outdated
If the guidance seems to conflict with what you are seeing, point the agent back to:
references/doc-verified-notes.md
That file is where the repository clarifies article-based guidance vs currently verified LiteLLM behavior, including the github_copilot/<model> naming rule.
Improve after the first working request
Once the first request succeeds, then iterate on:
- persistent config placement
- shell profile cleanup
- safer defaults
- model switching
- clearer local documentation for your team
Do not optimize before you have confirmed end-to-end traffic.
Watch for these failure modes during iteration
The biggest repeat mistakes are:
- changing several config files at once
- assuming approximate model names are fine
- forgetting that Claude Code still expects a non-empty Anthropic token locally
- not checking LiteLLM logs
- making persistent edits before a temporary test succeeds
Best way to get higher-quality output from this skill
The best prompt pattern for installing and using running-claude-code-via-litellm-copilot is:
Use the running-claude-code-via-litellm-copilot skill to troubleshoot my current setup. I am on [OS/shell]. Claude Code is configured with [values]. LiteLLM is started with [method]. My intended provider route is github_copilot/[model]. My ANTHROPIC_MODEL is [value]. Here are the logs and the exact error. Give me the smallest fix first, then a verification step, and pause before suggesting persistent edits.
That gives the skill the context it needs to produce install-quality, machine-relevant guidance instead of generic setup prose.
