GitHub - 1rgs/claude-code-proxy: Run Claude Code on OpenAI models

1rgs
2025.06.08
· GitHub · by Anonymous
#LLM #Proxy #OpenAI #Gemini #Anthropic

Key Points

  • This project introduces a proxy server that enables Anthropic API clients, such as Claude Code, to interact with OpenAI, Google Gemini, or direct Anthropic models.
  • It functions by translating Anthropic API requests into a LiteLLM-compatible format for routing to the configured backend, then converting responses back to the Anthropic format for the client.
  • Users can customize model mappings for "haiku" and "sonnet" through environment variables, setting a preferred provider (OpenAI, Google, or Anthropic) and specific substitute models.

This project, claude-code-proxy, is a proxy server designed to enable Anthropic API clients, such as Claude Code, to interact with OpenAI, Google Gemini, or even direct Anthropic backends. The core functionality of the proxy is to receive requests formatted for the Anthropic API, translate them into the appropriate format for the chosen backend model using the LiteLLM library, forward the translated request, receive the response, translate it back into Anthropic API format, and finally return it to the client. This process supports both streaming and non-streaming responses, maintaining compatibility with various Claude clients.
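The round-trip translation described above can be sketched roughly as follows. This is a simplified illustration, not the project's actual code: the function names, and the exact payload fields handled, are assumptions.

```python
# Illustrative sketch of the proxy's translation flow (the real project
# uses FastAPI and LiteLLM; its internals may differ).

def anthropic_to_litellm(request: dict, target_model: str) -> dict:
    """Translate an Anthropic Messages API payload into a LiteLLM-style
    chat-completion payload aimed at the configured backend model."""
    messages = []
    # Anthropic carries the system prompt as a top-level field; chat-completion
    # style APIs expect it as a leading message instead.
    if "system" in request:
        messages.append({"role": "system", "content": request["system"]})
    messages.extend(request.get("messages", []))
    return {
        "model": target_model,
        "messages": messages,
        "max_tokens": request.get("max_tokens", 1024),
        "stream": request.get("stream", False),
    }

def litellm_to_anthropic(response: dict, requested_model: str) -> dict:
    """Translate a chat-completion response back into the shape of an
    Anthropic Messages API response."""
    choice = response["choices"][0]
    finish = choice.get("finish_reason")
    return {
        "type": "message",
        "role": "assistant",
        "model": requested_model,
        "content": [{"type": "text", "text": choice["message"]["content"]}],
        "stop_reason": "end_turn" if finish == "stop" else finish,
    }
```

The key point is that the client never sees the backend's native format: it sends and receives Anthropic-shaped payloads throughout.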

The proxy's operation is configurable through environment variables. Key configurations include OPENAI_API_KEY, GEMINI_API_KEY, and ANTHROPIC_API_KEY for respective service authentication. For Google Gemini, it supports authentication via a static GEMINI_API_KEY or through Application Default Credentials (ADC) by setting USE_VERTEX_AUTH=true and specifying VERTEX_PROJECT and VERTEX_LOCATION.
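Put together, a minimal .env might look like the following (key values are placeholders; only the variables named above are shown):

```shell
# API keys for the backends you plan to route to
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
ANTHROPIC_API_KEY=sk-ant-...

# Alternatively, authenticate to Gemini via Vertex AI with
# Application Default Credentials instead of a static key
USE_VERTEX_AUTH=true
VERTEX_PROJECT=my-gcp-project   # placeholder project ID
VERTEX_LOCATION=us-central1     # placeholder region
```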

A central feature is its model mapping logic, primarily for Anthropic's "haiku" and "sonnet" models. The PREFERRED_PROVIDER variable dictates the primary backend:

  • If PREFERRED_PROVIDER=openai (default), haiku and sonnet requests are mapped to user-defined SMALL_MODEL and BIG_MODEL respectively, with an automatic openai/ prefix. Default OpenAI models are gpt-4.1-mini for SMALL_MODEL and gpt-4.1 for BIG_MODEL.
  • If PREFERRED_PROVIDER=google, haiku and sonnet map to SMALL_MODEL and BIG_MODEL with a gemini/ prefix, provided these models are in the server's known Gemini model list; otherwise, it falls back to OpenAI mapping. Default Google models are gemini-2.0-flash for SMALL_MODEL and gemini-2.5-pro-preview-03-25 for BIG_MODEL.
  • If PREFERRED_PROVIDER=anthropic, the proxy acts as a transparent passthrough, sending haiku and sonnet requests directly to Anthropic models with an anthropic/ prefix, ignoring BIG_MODEL and SMALL_MODEL settings. This mode allows for leveraging the proxy's infrastructure (e.g., logging) while using native Anthropic models.
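The three mapping rules above can be condensed into a short sketch. This is illustrative only; the function name and the known-Gemini-model set are assumptions, not the project's actual code.

```python
# Illustrative sketch of the haiku/sonnet mapping rules (not the project's code).

# Assumed stand-in for the server's known Gemini model list.
KNOWN_GEMINI_MODELS = {"gemini-2.0-flash", "gemini-2.5-pro-preview-03-25"}

def map_model(requested: str, preferred_provider: str,
              big_model: str, small_model: str) -> str:
    """Map an Anthropic 'haiku' or 'sonnet' request onto the configured backend."""
    if "haiku" not in requested and "sonnet" not in requested:
        return requested  # other models pass through unchanged
    if preferred_provider == "anthropic":
        # Transparent passthrough: keep the native Anthropic model.
        return f"anthropic/{requested}"
    substitute = small_model if "haiku" in requested else big_model
    if preferred_provider == "google" and substitute in KNOWN_GEMINI_MODELS:
        return f"gemini/{substitute}"
    # Default (openai), including the fallback for unknown Gemini models.
    return f"openai/{substitute}"
```

Note the fallback behavior: with PREFERRED_PROVIDER=google and a BIG_MODEL or SMALL_MODEL outside the known Gemini list, the request silently routes through OpenAI instead.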

The proxy handles automatic prefixing for supported OpenAI and Gemini models, such as transforming gpt-4o to openai/gpt-4o and gemini-2.5-pro-preview-03-25 to gemini/gemini-2.5-pro-preview-03-25. The BIG_MODEL and SMALL_MODEL variables also receive appropriate prefixes based on their classification as OpenAI or Gemini models.
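The prefixing step might look like the following sketch, assuming the server keeps lists of the OpenAI and Gemini models it recognizes (the function name and model sets here are assumptions):

```python
# Illustrative sketch of automatic provider prefixing (not the project's code).

KNOWN_OPENAI_MODELS = {"gpt-4o", "gpt-4.1", "gpt-4.1-mini"}
KNOWN_GEMINI_MODELS = {"gemini-2.0-flash", "gemini-2.5-pro-preview-03-25"}

def add_provider_prefix(model: str) -> str:
    """Prepend a provider prefix so LiteLLM routes to the right backend."""
    if "/" in model:
        return model  # already prefixed (e.g. openai/gpt-4o)
    if model in KNOWN_OPENAI_MODELS:
        return f"openai/{model}"
    if model in KNOWN_GEMINI_MODELS:
        return f"gemini/{model}"
    return model  # unknown models pass through unmodified
```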

Setup involves cloning the repository, installing dependencies with uv, configuring environment variables in a .env file copied from .env.example, and running the server via uv run uvicorn server:app or deploying with Docker using docker compose or docker run. Once running, Anthropic clients connect by setting the ANTHROPIC_BASE_URL environment variable to the proxy's address (e.g., http://localhost:8082).
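The setup steps translate to roughly the following commands. Treat these as a sketch: the uv invocation, port flag, and client command mirror the description above but may differ from the repository's README.

```shell
git clone https://github.com/1rgs/claude-code-proxy.git
cd claude-code-proxy

# Install dependencies (assumption: a uv-managed project environment)
uv sync

# Configure backends and model mappings
cp .env.example .env   # then edit .env with your keys and mappings

# Run the proxy (alternatively: docker compose up)
uv run uvicorn server:app --port 8082

# Point an Anthropic client at the proxy via its base-URL variable
ANTHROPIC_BASE_URL=http://localhost:8082 claude
```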