## Quickstart: Interactive Setup
The fastest way to get started is with the interactive setup wizard, which walks you through:

- API key entry (with validation)
- Model selection (with recommendations)
- Connection testing (to verify everything works)
- Provider-specific options (like context window for Ollama)
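The wizard is started from the CLI (the same command used later for custom providers):

```shell
# Launches the interactive setup wizard
beeai model setup
```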
## Supported LLM Providers
BeeAI supports a wide range of language model providers:

### Cloud Providers
- Anthropic Claude
- Cerebras - has a free tier
- Chutes - has a free tier
- Cohere - has a free tier
- DeepSeek
- Google Gemini - has a free tier
- GitHub Models - has a free tier
- Groq - has a free tier
- IBM watsonx
- Mistral - has a free tier
- Moonshot AI
- NVIDIA NIM
- OpenAI
- OpenRouter - has some free models
- Perplexity
- together.ai - has a free tier
### Local Providers
- Ollama
- Jan
## Custom Providers via LLM Gateway
If you have a custom OpenAI-compatible API endpoint, you can configure it during the interactive setup via `beeai model setup` by selecting "Other (RITS, vLLM, …)" and providing your API URL.
BeeAI includes a built-in LLM gateway that provides a unified OpenAI-compatible API endpoint. This is useful when you want to:
- Point existing agents to BeeAI instead of directly to LLM providers
- Centrally manage API keys and provider configurations
- Switch providers without reconfiguring individual agents
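For example, an agent built on an OpenAI-compatible SDK can often be repointed at the gateway with environment variables alone. The URL below is an assumption; the actual host, port, and path depend on your BeeAI deployment:

```shell
# Hypothetical gateway address -- adjust to your BeeAI deployment
export OPENAI_BASE_URL="http://localhost:8333/api/v1/openai"
# The gateway manages the real provider keys, so the agent-side key is a placeholder
export OPENAI_API_KEY="not-used-directly"
```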
This is a POST-only API endpoint for programmatic use; interact with it via curl or any OpenAI-compatible client. The gateway handles:
- Authentication with your configured provider
- Provider-specific request/response formatting
- Both streaming and non-streaming responses
- Request validation and error responses
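As a sketch, here is what a non-streaming chat-completion request to the gateway could look like. The endpoint URL and model name are assumptions, and the request is built but not sent:

```python
import json
import urllib.request

# Assumed gateway endpoint -- the actual host, port, and path depend on your deployment.
GATEWAY_URL = "http://localhost:8333/api/v1/openai/chat/completions"

# A standard OpenAI-style chat-completion body; the gateway reformats it
# for whichever provider you configured.
payload = {
    "model": "gpt-4o-mini",  # assumed model name; use one your provider serves
    "messages": [{"role": "user", "content": "Hello through the gateway!"}],
    "stream": False,  # set True for streaming responses
}

# The endpoint is POST-only, so the request must carry a JSON body.
request = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(request) would send it; omitted here so the
# sketch runs without a live gateway.
print(json.dumps(payload, indent=2))
```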