Supported Models

Boxxy connects to LLM providers via rig, an open-source Rust framework for building LLM-powered applications. Any provider supported by rig can be added to Boxxy.
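
To give a sense of what that integration looks like, here is a minimal sketch of a rig-based completion call, adapted from rig's basic usage pattern. It assumes the rig-core and tokio crates, an OPENAI_API_KEY environment variable, and the illustrative model name gpt-4o; exact method names can vary between rig versions, and this is not Boxxy's actual code.

```rust
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a client from the OPENAI_API_KEY environment variable;
    // in Boxxy the key you enter under Preferences → APIs plays this role.
    let client = openai::Client::from_env();

    // Bind an agent to one model, analogous to assigning a model to a
    // function under Preferences → Model Selection.
    let agent = client.agent("gpt-4o").build();

    let reply = agent.prompt("Say hello in one sentence.").await?;
    println!("{reply}");
    Ok(())
}
```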

Configure your API keys in =Preferences= → =APIs=, then assign models to each function in =Preferences= → =Model Selection=.


Google Gemini

| Model                 | Thinking                      |
|-----------------------|-------------------------------|
| Gemini 3.1 Pro        | Low · Medium · High           |
| Gemini 3.1 Flash Lite | Minimal · Low · Medium · High |

Anthropic Claude

| Model             | Thinking            |
|-------------------|---------------------|
| Claude Opus 4.6   | Extended            |
| Claude Sonnet 4.6 | Extended · Adaptive |

OpenAI

| Model        | Reasoning effort                   |
|--------------|------------------------------------|
| GPT-5.4      | None · Low · Medium · High · XHigh |
| GPT-5.4 Mini | None · Low · Medium · High · XHigh |
| GPT-5.4 Nano | None · Low · Medium · High · XHigh |
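
For background on what these levels correspond to at the API layer, OpenAI's chat completions API accepts a reasoning_effort field on reasoning-capable models. The sketch below only illustrates that request shape; the model slug is a placeholder, and how Boxxy or rig actually forwards the setting is an assumption, not documented behaviour.

```rust
use serde_json::json;

fn main() {
    // Illustrative request body only: "gpt-5.4-mini" is a placeholder slug
    // for the model listed above, and the accepted reasoning_effort values
    // depend on the model and API version.
    let body = json!({
        "model": "gpt-5.4-mini",
        "reasoning_effort": "high",
        "messages": [
            { "role": "user", "content": "Summarise the release notes." }
        ]
    });

    println!("{}", serde_json::to_string_pretty(&body).unwrap());
}
```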

DeepSeek

| Model             | Notes                        |
|-------------------|------------------------------|
| DeepSeek-V4-Pro   | 1M context · 384K max output |
| DeepSeek-V4-Flash | 1M context · 384K max output |

Ollama

Runs local models through Ollama. Enter any model name you have already pulled (e.g. llama3, mistral, qwen2.5-coder). No thinking controls are exposed, since capabilities vary per model.


OpenRouter

Supports any model available on OpenRouter. Enter the full model string (e.g. mistralai/mistral-7b-instruct). No thinking controls are exposed.


Missing a provider or model?

New providers and models are added regularly. If something you need is missing, open an issue and we'll look into adding it.