Installation¶
liter-llm has prebuilt packages for every supported language. Pick your stack, run one command, and start calling models.
Every package includes prebuilt binaries for Linux (x86_64 / aarch64), macOS (Apple Silicon), and Windows. No Rust toolchain needed unless you're building from source.
CLI / Docker¶
The CLI runs the proxy server and MCP tool server. You don't need it if you're only using a language binding.
Start the proxy:
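A minimal invocation might look like this (the `proxy` subcommand name is an assumption; run `liter-llm --help` to confirm the exact command):

```shell
# Assumed subcommand; verify with `liter-llm --help`
liter-llm proxy
```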
Or the MCP server:
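As with the proxy, the subcommand name here is an assumption; check `liter-llm --help` for the actual spelling:

```shell
# Assumed subcommand; verify with `liter-llm --help`
liter-llm mcp
```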
See the Proxy Server docs and the MCP Server docs for details.
Choose your language¶
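For example, if you are using the Rust crate directly, installation is a single `cargo add` (this assumes the crate is published on crates.io under the name `liter-llm`):

```shell
cargo add liter-llm
```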
API Key Setup¶
Set the environment variable for the provider you're calling:
```shell
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
export GROQ_API_KEY="gsk_..."
export MISTRAL_API_KEY="..."
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
```
You only need one key
If you only call OpenAI models, only OPENAI_API_KEY is needed. liter-llm resolves the provider from the model prefix (e.g. openai/gpt-4o) and picks the matching key automatically.
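The prefix-based resolution can be sketched as follows. This is an illustration of the idea, not liter-llm's actual code; the function name and provider table are assumptions:

```rust
// Illustrative sketch: map a model id like "openai/gpt-4o" to the
// environment variable holding that provider's API key.
// liter-llm's real resolution logic may differ.
fn api_key_env_var(model: &str) -> Option<&'static str> {
    match model.split('/').next()? {
        "openai" => Some("OPENAI_API_KEY"),
        "anthropic" => Some("ANTHROPIC_API_KEY"),
        "google" => Some("GOOGLE_API_KEY"),
        "groq" => Some("GROQ_API_KEY"),
        "mistral" => Some("MISTRAL_API_KEY"),
        _ => None, // unknown or missing prefix
    }
}

fn main() {
    assert_eq!(api_key_env_var("openai/gpt-4o"), Some("OPENAI_API_KEY"));
    assert_eq!(api_key_env_var("anthropic/claude-3-5-sonnet"), Some("ANTHROPIC_API_KEY"));
    // No provider prefix: nothing to resolve.
    assert_eq!(api_key_env_var("gpt-4o"), None);
}
```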
You can also pass the key at client construction:
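A sketch of what this looks like in the Rust binding. The `builder` and `api_key` method names are assumptions; check the API docs for your language:

```rust
use liter_llm::LlmClient;

// Hypothetical builder API; exact method names may differ.
let client = LlmClient::builder()
    .api_key("sk-...") // wrapped in secrecy::SecretString internally
    .build()?;
```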
Don't hard-code keys in source files
Use environment variables or a secret manager. Keys passed to LlmClient are wrapped in secrecy::SecretString and never logged.
Verify it works¶
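A quick smoke test is to check that the binary is on your `PATH` (this assumes the CLI follows the common `--version` convention):

```shell
liter-llm --version
```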
Building from source¶
If prebuilt binaries aren't available for your platform, build from source. You'll need the Rust toolchain (stable 1.75+):
```shell
# Install the Rust toolchain (skip if rustc >= 1.75 is already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Clone and build
git clone https://github.com/kreuzberg-dev/liter-llm.git
cd liter-llm
task build
```
Next steps¶
- Chat & Streaming -- Make your first API call
- MCP & IDE Integration -- Integrate with VS Code, GitHub Copilot, Claude, Cursor
- Provider Registry -- Browse all 142+ supported providers
- Configuration -- Timeouts, retries, base URL overrides