Installation¶
LLMRouter can be installed from PyPI, or from source if you want the example configs and data in the repository.
Requirements¶
- Python 3.10+
- Optional: CUDA-capable GPU for faster training
Note
The documentation site is built from the website branch, but the runnable code and example assets live on main.
Create a conda environment (recommended)¶
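A minimal sketch of the environment setup. The environment name `llmrouter` is an illustrative choice, not a project requirement; Python 3.10 matches the documented minimum.

```shell
# Create and activate an isolated environment (Python 3.10+ is required)
conda create -n llmrouter python=3.10 -y
conda activate llmrouter
```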
Option A: Install from PyPI¶
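Assuming the package is published on PyPI under the name `llmrouter` (the distribution name is not confirmed in this page), the install would look like:

```shell
pip install llmrouter
```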
Option B: Install from source (recommended for examples)¶
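A sketch of cloning the repository to get the example configs and data. The URL is a placeholder; substitute the actual GitHub organization for `<org>`.

```shell
# Clone the repository (replace <org> with the real organization)
git clone https://github.com/<org>/LLMRouter.git
cd LLMRouter
```

Remember that the runnable code and example assets live on the main branch, as noted above.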
Install (editable)¶
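From the repository root, a standard editable install (a sketch; extras, if any, are not documented here):

```shell
# -e installs in editable (development) mode, so local changes take effect immediately
pip install -e .
```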
Verify¶
```shell
llmrouter --version
llmrouter list-routers
python -c "import llmrouter; print(llmrouter.__version__)"
```
API keys (only for real API calls)¶
`llmrouter infer` (without `--route-only`) calls an OpenAI-compatible endpoint via LiteLLM and requires the `API_KEYS` environment variable.
It accepts either a single key or a JSON list of keys, which are used round-robin.
Single key
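For a single key, set `API_KEYS` to the key itself (the value below is a placeholder):

```shell
# Replace with your real key; do not commit this to git
export API_KEYS="sk-your-api-key"
```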
Multiple keys (JSON list)
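For multiple keys, set `API_KEYS` to a JSON array (placeholder values shown); the keys are rotated round-robin across requests:

```shell
# A JSON list of keys; quote the whole value so the brackets survive the shell
export API_KEYS='["sk-key-one", "sk-key-two"]'
```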
Warning
Do not commit API keys to git. Use environment variables or a secret manager.
Where does api_endpoint come from?¶
For inference, the API base URL is resolved in this order:
1. `llm_data[model_name].api_endpoint` in your `llm_data` JSON
2. `api_endpoint` in your router YAML config
If neither is set, inference fails with an "API endpoint not found" error.
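As an illustration of the second lookup location, a router YAML config could set a fallback endpoint like this (the field name comes from the resolution order above; the surrounding structure is an assumption):

```yaml
# Fallback API base URL, used when llm_data does not specify one for the model
api_endpoint: https://api.openai.com/v1
```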
Next¶
- Continue with Quickstart.