# Supported Providers

Liter-LLM supports 142 providers out of the box. Route requests to any provider using the `provider/model` prefix convention: for example, `openai/gpt-4o` routes to OpenAI and `anthropic/claude-3-opus` routes to Anthropic. No extra configuration is needed beyond setting the provider's API key.
| Provider | Prefix |
|---|---|
| A2A | `a2a/` |
| Abliteration | `abliteration/` |
| AI/ML API | `aiml/` |
| AI21 | `ai21/` |
| AI21 Chat | `ai21_chat/` |
| Amazon Nova | `amazon_nova/` |
| Anthropic | `anthropic/` |
| Anthropic Text | `anthropic_text/` |
| Apertis | `apertis/` |
| AssemblyAI | `assemblyai/` |
| Auto Router | `auto_router/` |
| AWS - Bedrock | `bedrock/` |
| AWS - Polly | `aws_polly/` |
| AWS - Sagemaker | `sagemaker/` |
| AWS S3 Vectors | `s3_vectors/` |
| Azure | `azure/` |
| Azure AI | `azure_ai/` |
| Azure AI Document Intelligence | `azure_ai/doc-intelligence/` |
| Azure AI Foundry Agents | `azure_ai/agents/` |
| Azure Text | `azure_text/` |
| Baseten | `baseten/` |
| Brave Search | `brave/` |
| Bytez | `bytez/` |
| Cerebras | `cerebras/` |
| ChatGPT Subscription | `chatgpt/` |
| Chutes | `chutes/` |
| Clarifai | `clarifai/` |
| Cloudflare AI Workers | `cloudflare/` |
| Codestral | `codestral/` |
| Cohere | `cohere/` |
| Cohere Chat | `cohere_chat/` |
| CometAPI | `cometapi/` |
| CompactifAI | `compactifai/` |
| Cursor BYOK | `cursor/` |
| Custom | `custom/` |
| Custom OpenAI | `custom_openai/` |
| Dashscope | `dashscope/` |
| Databricks | `databricks/` |
| DataForSEO | `dataforseo/` |
| DataRobot | `datarobot/` |
| Deepgram | `deepgram/` |
| DeepInfra | `deepinfra/` |
| Deepseek | `deepseek/` |
| Docker Model Runner | `docker_model_runner/` |
| DuckDuckGo | `duckduckgo/` |
| ElevenLabs | `elevenlabs/` |
| Empower | `empower/` |
| Exa AI | `exa_ai/` |
| Fal AI | `fal_ai/` |
| Featherless AI | `featherless_ai/` |
| Firecrawl | `firecrawl/` |
| Fireworks AI | `fireworks_ai/` |
| FriendliAI | `friendliai/` |
| Galadriel | `galadriel/` |
| GigaChat | `gigachat/` |
| GitHub Copilot | `github_copilot/` |
| GitHub Models | `github/` |
| GMI Cloud | `gmi/` |
| Google - Vertex AI | `vertex_ai/` |
| Google AI Studio - Gemini | `gemini/` |
| Google PSE | `google_pse/` |
| GradientAI | `gradient_ai/` |
| Groq AI | `groq/` |
| Helicone | `helicone/` |
| Heroku | `heroku/` |
| Hosted VLLM | `hosted_vllm/` |
| Huggingface | `huggingface/` |
| Hyperbolic | `hyperbolic/` |
| IBM - Watsonx.ai | `watsonx/` |
| Infinity | `infinity/` |
| Jina AI | `jina_ai/` |
| Lambda AI | `lambda_ai/` |
| LangGraph | `langgraph/` |
| Lemonade | `lemonade/` |
| Linkup | `linkup/` |
| LiteLLM Proxy | `litellm_proxy/` |
| Llamafile | `llamafile/` |
| LlamaGate | `llamagate/` |
| LM Studio | `lm_studio/` |
| Manus | `manus/` |
| Maritalk | `maritalk/` |
| Meta - Llama API | `meta_llama/` |
| Milvus | `milvus/` |
| Minimax | `minimax/` |
| Mistral AI API | `mistral/` |
| Moonshot | `moonshot/` |
| Morph | `morph/` |
| NanoGPT | `nanogpt/` |
| Nebius AI Studio | `nebius/` |
| NLP Cloud | `nlp_cloud/` |
| Novita AI | `novita/` |
| Nscale | `nscale/` |
| Nvidia NIM | `nvidia_nim/` |
| OCI | `oci/` |
| Ollama | `ollama/` |
| Ollama Chat | `ollama_chat/` |
| Oobabooga | `oobabooga/` |
| OpenAI | `openai/` |
| OpenAI-like | `openai_like/` |
| OpenRouter | `openrouter/` |
| OVHCloud AI Endpoints | `ovhcloud/` |
| Parallel AI | `parallel_ai/` |
| Perplexity AI | `perplexity/` |
| Petals | `petals/` |
| PG Vector | `pg_vector/` |
| Poe | `poe/` |
| Predibase | `predibase/` |
| PublicAI | `publicai/` |
| Pydantic AI Agents | `pydantic_ai_agents/` |
| RAGFlow | `ragflow/` |
| Recraft | `recraft/` |
| Replicate | `replicate/` |
| RunwayML | `runwayml/` |
| Sagemaker Chat | `sagemaker_chat/` |
| Sambanova | `sambanova/` |
| SAP Generative AI Hub | `sap/` |
| Sarvam | `sarvam/` |
| Scaleway | `scaleway/` |
| SearXNG | `searxng/` |
| Serper | `serper/` |
| Snowflake | `snowflake/` |
| Stability AI | `stability/` |
| Synthetic | `synthetic/` |
| Tavily | `tavily/` |
| Text Completion Codestral | `text-completion-codestral/` |
| Text Completion OpenAI | `text-completion-openai/` |
| Together AI | `together_ai/` |
| Topaz | `topaz/` |
| Triton | `triton/` |
| V0 | `v0/` |
| Venice.ai | `venice/` |
| Vercel AI Gateway | `vercel_ai_gateway/` |
| Vertex AI Agent Engine | `vertex_ai/agent_engine/` |
| VLLM | `vllm/` |
| Volcengine | `volcengine/` |
| Voyage AI | `voyage/` |
| WandB Inference | `wandb/` |
| Watsonx Text | `watsonx_text/` |
| xAI | `xai/` |
| Xiaomi Mimo | `xiaomi_mimo/` |
| Xinference | `xinference/` |
| Z.AI | `zai/` |
142 providers total.
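The prefix convention amounts to a longest-prefix match against the routing prefixes in the table, since some prefixes (such as `vertex_ai/agent_engine/`) span two path segments. The helper below is a hypothetical sketch of that lookup, not Liter-LLM's actual router, and its prefix set is truncated for brevity:

```python
# Hypothetical sketch of provider/model prefix routing. The prefixes come from
# the table above; the matching logic is an illustration only.
KNOWN_PREFIXES = {
    "openai/",
    "anthropic/",
    "azure_ai/",
    "azure_ai/doc-intelligence/",  # some prefixes span two path segments
    "vertex_ai/",
    "vertex_ai/agent_engine/",
}


def split_model(model: str) -> tuple[str, str]:
    """Return (routing prefix, bare model name), preferring the longest match."""
    for prefix in sorted(KNOWN_PREFIXES, key=len, reverse=True):
        if model.startswith(prefix):
            return prefix, model[len(prefix):]
    raise ValueError(f"no known provider prefix in {model!r}")


print(split_model("openai/gpt-4o"))                    # ('openai/', 'gpt-4o')
print(split_model("vertex_ai/agent_engine/my-agent"))  # ('vertex_ai/agent_engine/', 'my-agent')
```

Sorting candidates by descending length ensures the two-segment prefixes win over their one-segment parents.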
## Usage
Use any provider by prefixing the model name with the provider's routing prefix:
```python
import asyncio
import os

from liter_llm import LlmClient


async def main() -> None:
    # OpenAI
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    response = await client.chat(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

    # Anthropic
    client = LlmClient(api_key=os.environ["ANTHROPIC_API_KEY"])
    response = await client.chat(
        model="anthropic/claude-3-opus",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

    # Groq
    client = LlmClient(api_key=os.environ["GROQ_API_KEY"])
    response = await client.chat(
        model="groq/llama3-70b",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```
## Custom Providers
Any OpenAI-compatible API can be used as a custom provider by setting the base URL at client construction:
```python
import asyncio

from liter_llm import LlmClient


async def main() -> None:
    client = LlmClient(
        api_key="my-key",
        base_url="https://my-api.example.com/v1",
    )
    response = await client.chat(
        model="custom/my-model",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```
## Provider Registry

The full provider registry, including base URLs, auth configuration, and model mappings, is available at `schemas/providers.json`.
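A tooling script can load the registry and enumerate prefixes and base URLs. The JSON layout below (`prefix`, `base_url`, `auth_env` per provider) is an assumed illustration only; consult `schemas/providers.json` for the actual schema:

```python
import json

# Assumed registry layout for illustration; the real schema lives in
# schemas/providers.json and may differ.
registry_json = """
{
  "openai": {"prefix": "openai/", "base_url": "https://api.openai.com/v1", "auth_env": "OPENAI_API_KEY"},
  "groq":   {"prefix": "groq/",   "base_url": "https://api.groq.com/openai/v1", "auth_env": "GROQ_API_KEY"}
}
"""

registry = json.loads(registry_json)
for name, entry in registry.items():
    # Print each routing prefix alongside the endpoint it resolves to.
    print(f"{entry['prefix']:<10} -> {entry['base_url']} (key: ${entry['auth_env']})")
```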