GitHub - 1rgs/claude-code-proxy: Run Claude Code on OpenAI models
Key Points
- This project provides a proxy server that enables Anthropic API clients, such as Claude Code, to interact with OpenAI, Google Gemini, or direct Anthropic models.
- It works by translating Anthropic API requests into a LiteLLM-compatible format for routing to the configured backend, then converting responses back to the Anthropic format for the client.
- Users can customize model mappings for "haiku" and "sonnet" through environment variables, choosing a preferred provider (OpenAI, Google, or Anthropic) and specific substitute models.
This document describes claude-code-proxy, a proxy server designed to enable Anthropic API clients, such as Claude Code, to work with OpenAI, Google Gemini, or direct Anthropic backends. The proxy receives requests formatted for the Anthropic API, translates them into the format expected by the chosen backend model using the LiteLLM library, forwards the translated request, and converts the response back into Anthropic API format before returning it to the client. Both streaming and non-streaming responses are supported, maintaining compatibility with various Claude clients.
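The request translation can be sketched as follows. This is a simplified, hypothetical illustration of the Anthropic-to-OpenAI message conversion, not the project's actual code; the real proxy also handles content blocks, tool use, and streaming.

```python
def anthropic_to_openai(body: dict) -> dict:
    """Convert a simplified Anthropic /v1/messages request body into
    OpenAI-style chat-completion kwargs (illustrative sketch only)."""
    messages = []
    # Anthropic carries the system prompt as a top-level field;
    # OpenAI-style APIs expect it as the first message.
    if body.get("system"):
        messages.append({"role": "system", "content": body["system"]})
    # user/assistant roles are compatible between the two formats.
    messages.extend(body["messages"])
    return {
        "model": body["model"],
        "messages": messages,
        "max_tokens": body.get("max_tokens"),
    }
```

The reverse direction wraps the backend's reply back into an Anthropic-shaped response before returning it to the client.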
The proxy's operation is configurable through environment variables. Key configurations include OPENAI_API_KEY, GEMINI_API_KEY, and ANTHROPIC_API_KEY for authenticating against the respective services. For Google Gemini, it supports authentication via a static GEMINI_API_KEY or through Application Default Credentials (ADC) by setting VERTEX_PROJECT and VERTEX_LOCATION.
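The credential selection described above might be implemented along these lines. This is a hypothetical sketch assuming the environment variable names from the text; the function name and precedence order are illustrative.

```python
import os

def gemini_auth_mode() -> str:
    """Pick a Gemini auth mode from the environment (illustrative sketch):
    a static API key takes precedence; otherwise fall back to ADC via
    Vertex AI when a project and location are configured."""
    if os.environ.get("GEMINI_API_KEY"):
        return "api_key"
    if os.environ.get("VERTEX_PROJECT") and os.environ.get("VERTEX_LOCATION"):
        return "adc"  # Application Default Credentials
    raise RuntimeError("No Gemini credentials configured")
```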
A central feature is its model mapping logic, primarily for Anthropic's "haiku" and "sonnet" models. The PREFERRED_PROVIDER variable dictates the primary backend:
- If PREFERRED_PROVIDER=openai (the default), haiku and sonnet requests are mapped to the user-defined SMALL_MODEL and BIG_MODEL respectively, with an automatic openai/ prefix. The default OpenAI models are gpt-4.1-mini for SMALL_MODEL and gpt-4.1 for BIG_MODEL.
- If PREFERRED_PROVIDER=google, haiku and sonnet map to SMALL_MODEL and BIG_MODEL with a gemini/ prefix, provided these models appear in the server's known Gemini model list; otherwise the proxy falls back to the OpenAI mapping. The default Google models are gemini-2.0-flash for SMALL_MODEL and gemini-2.5-pro-preview-03-25 for BIG_MODEL.
- If PREFERRED_PROVIDER=anthropic, the proxy acts as a transparent passthrough, sending haiku and sonnet requests directly to Anthropic models with an anthropic/ prefix and ignoring the BIG_MODEL and SMALL_MODEL settings. This mode allows leveraging the proxy's infrastructure (e.g., logging) while using native Anthropic models.
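As an example, a .env routing both mapped models to Google might look like the following. The variable names come from the text above; the key values are placeholders, and the exact layout of the project's .env.example may differ.

```env
# Placeholder keys; OPENAI_API_KEY remains useful for the fallback mapping
OPENAI_API_KEY=sk-your-openai-key
GEMINI_API_KEY=your-gemini-key

PREFERRED_PROVIDER=google
BIG_MODEL=gemini-2.5-pro-preview-03-25
SMALL_MODEL=gemini-2.0-flash
```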
The proxy handles automatic prefixing for supported OpenAI and Gemini models, such as transforming gpt-4o to openai/gpt-4o and gemini-2.5-pro-preview-03-25 to gemini/gemini-2.5-pro-preview-03-25. The BIG_MODEL and SMALL_MODEL variables also receive appropriate prefixes based on their classification as OpenAI or Gemini models.
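The prefixing behavior can be sketched as below. This is a hypothetical illustration, not the project's actual code; the model lists here are abbreviated examples.

```python
# Illustrative (abbreviated) lists of models each provider recognizes.
OPENAI_MODELS = {"gpt-4o", "gpt-4.1", "gpt-4.1-mini"}
GEMINI_MODELS = {"gemini-2.0-flash", "gemini-2.5-pro-preview-03-25"}

def add_provider_prefix(model: str) -> str:
    """Prefix a bare model name so LiteLLM routes it to the right backend."""
    if "/" in model:
        return model  # already prefixed, e.g. "anthropic/claude-3-haiku"
    if model in OPENAI_MODELS:
        return f"openai/{model}"
    if model in GEMINI_MODELS:
        return f"gemini/{model}"
    return model  # unknown models pass through unchanged
```

For instance, add_provider_prefix("gpt-4o") yields "openai/gpt-4o", matching the transformation described above.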
Setup involves cloning the repository, installing dependencies with uv, configuring environment variables in a .env file copied from .env.example, and running the server via uv run uvicorn server:app or deploying with Docker using docker compose or docker run. Once running, Anthropic clients connect by setting the ANTHROPIC_BASE_URL environment variable to the proxy's address (e.g., http://localhost:8082).
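The setup steps above, as shell commands (the repository URL is inferred from the project name, and the port comes from the example address; flags may differ from the project's actual instructions):

```shell
git clone https://github.com/1rgs/claude-code-proxy.git
cd claude-code-proxy
cp .env.example .env   # then fill in API keys and model settings
uv run uvicorn server:app --host 0.0.0.0 --port 8082

# In another terminal, point an Anthropic client at the proxy:
export ANTHROPIC_BASE_URL=http://localhost:8082
```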