
[OLD PROXY 👉 NEW proxy here] Local LiteLLM Proxy Server

A fast and lightweight OpenAI-compatible server to call 100+ LLM APIs.

info

Docs outdated. New docs 👉 here

Usage

pip install 'litellm[proxy]'
$ litellm --model ollama/codellama 

#INFO: Ollama running on http://0.0.0.0:8000

Test

In a new shell, run:

$ litellm --test
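
If you'd rather test by hand, here is a minimal sketch using the requests library (an assumption: the proxy exposes the OpenAI-style /chat/completions route on port 8000, as shown in the startup log above):

import requests

# send a single chat completion request to the local proxy
response = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={
        "model": "ollama/codellama",
        "messages": [{"role": "user", "content": "Hey, how's it going?"}],
    },
)
print(response.json())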

Replace openai base

import openai 

openai.api_base = "http://0.0.0.0:8000"

print(openai.ChatCompletion.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
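
If you are on openai>=1.0, where openai.api_base and openai.ChatCompletion no longer exist, a rough equivalent looks like this (a sketch, not part of the original snippet):

from openai import OpenAI

# point the v1+ SDK at the local proxy instead of api.openai.com
client = OpenAI(api_key="any-string-here", base_url="http://0.0.0.0:8000")

response = client.chat.completions.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response)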

Other supported models:

Assuming you're running vllm locally

$ litellm --model vllm/facebook/opt-125m

Tutorial: Use with Multiple LLMs + LibreChat/Chatbot-UI/AutoGen/ChatDev/Langroid, etc.

Replace openai base:

import openai 

openai.api_key = "any-string-here"
openai.api_base = "http://0.0.0.0:8080" # your proxy url

# call openai
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hey"}])

print(response)

# call cohere
response = openai.ChatCompletion.create(model="command-nightly", messages=[{"role": "user", "content": "Hey"}])

print(response)

Local Proxy

Here's how to use the local proxy to test codellama/mistral/etc. models for different GitHub repos.

pip install litellm
$ ollama pull codellama # OUR Local CodeLlama  

$ litellm --model ollama/codellama --temperature 0.3 --max_tokens 2048
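
Once the proxy is up, any OpenAI-compatible client can talk to the local CodeLlama. A minimal sketch with the pre-1.0 openai SDK, assuming the proxy is listening on port 8000 as in the examples above:

import openai

openai.api_key = "any-string-here"
openai.api_base = "http://0.0.0.0:8000"  # your local codellama proxy

# ask the local CodeLlama model for a code suggestion
response = openai.ChatCompletion.create(
    model="ollama/codellama",
    messages=[{"role": "user", "content": "Write a python function that reverses a string."}],
)
print(response)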

Tutorial: Use with Multiple LLMs + Aider/AutoGen/Langroid/etc.

$ litellm

#INFO: litellm proxy running on http://0.0.0.0:8000

Send a request to your proxy

import openai 

openai.api_key = "any-string-here"
openai.api_base = "http://0.0.0.0:8080" # your proxy url

# call gpt-3.5-turbo
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hey"}])

print(response)

# call ollama/llama2
response = openai.ChatCompletion.create(model="ollama/llama2", messages=[{"role": "user", "content": "Hey"}])

print(response)
note

Contribute: Using this server with a project? Contribute your tutorial here!

Advanced

Logs

$ litellm --logs

This will return the most recent log (the call that went to the LLM API + the received response).

All logs are saved to a file called api_logs.json in the current directory.
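
If you want to inspect the logs programmatically, here is a minimal sketch that loads api_logs.json; the exact schema of each entry isn't documented here, so it only prints what it finds:

import json

# read the proxy's log file from the current directory
with open("api_logs.json") as f:
    logs = json.load(f)

# the entry schema may vary by litellm version, so just dump the latest record
if isinstance(logs, list) and logs:
    print(f"{len(logs)} logged calls; most recent entry:")
    print(json.dumps(logs[-1], indent=2))
else:
    print(json.dumps(logs, indent=2))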

Configure Proxy

If you need to:

  • save API keys
  • set litellm params (e.g. drop unmapped params, set fallback models, etc.)
  • set model-specific params (max tokens, temperature, api base, prompt template)

You can set these just for that session (via the CLI), or persist them across restarts (via a config file).

Save API Keys

$ litellm --api_key OPENAI_API_KEY=sk-...

LiteLLM will save this to a locally stored config file and persist it across sessions.

The LiteLLM proxy supports all litellm-supported API keys. To add a key for a specific provider, check this list:

$ litellm --add_key HUGGINGFACE_API_KEY=my-api-key #[OPTIONAL]
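
Once a provider key is saved, requests through the proxy can target that provider's models by name, with the key applied server-side. A sketch with the pre-1.0 SDK; the Hugging Face model below is only an illustrative placeholder:

import openai

openai.api_key = "any-string-here"
openai.api_base = "http://0.0.0.0:8000"  # your proxy url

# the proxy injects the saved HUGGINGFACE_API_KEY on the server side
response = openai.ChatCompletion.create(
    model="huggingface/bigcode/starcoder",  # placeholder model name
    messages=[{"role": "user", "content": "Hey"}],
)
print(response)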

E.g.: Set API base, max tokens, and temperature.

For that session:

litellm --model ollama/llama2 \
--api_base http://localhost:11434 \
--max_tokens 250 \
--temperature 0.5

# OpenAI-compatible server running on http://0.0.0.0:8000
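
With the proxy configured this way, max_tokens and temperature are handled server-side, so the client call can stay minimal. A sketch with the pre-1.0 SDK and streaming enabled (assuming this proxy version streams responses):

import openai

openai.api_key = "any-string-here"
openai.api_base = "http://0.0.0.0:8000"  # proxy started with the flags above

# max_tokens and temperature come from the proxy's session defaults
response = openai.ChatCompletion.create(
    model="ollama/llama2",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,  # iterate over chunks as they arrive
)
for chunk in response:
    print(chunk)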

Performance

We load-tested 500,000 HTTP connections on the FastAPI server for 1 minute, using wrk.

Here are our results:

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   156.38ms   25.52ms  361.91ms   84.73%
    Req/Sec     13.61      5.13     40.00     57.50%
  383625 requests in 1.00m, 391.10MB read
  Socket errors: connect 0, read 1632, write 1, timeout 0

Support / talk with founders