OpenCode supports 75+ LLM providers in the built-in directory, but the directory doesn’t cover everything. Custom providers let you plug in anything that speaks OpenAI-compatible JSON: haimaker, Ollama, LM Studio, OpenRouter, or your own internal gateway.
This guide walks through the full setup, with the errors you’ll hit and how to fix them.
The two files you need to edit
OpenCode splits config and credentials into two files:
- Config: `~/.config/opencode/opencode.jsonc` defines providers, models, and behavior
- Credentials: `~/.local/share/opencode/auth.json` stores API keys
Both need to be updated to add a new provider. Editing one without the other is the most common setup mistake.
Step 1: Configure the provider
Open (or create) ~/.config/opencode/opencode.jsonc. Add a provider block with your custom provider. Here’s the pattern using haimaker.ai as an example:
```jsonc
{
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.haimaker.ai/v1"
      },
      "models": {
        "z-ai/glm-4.6": {},
        "minimax/minimax-m2.5": {},
        "qwen/qwen3-coder": {}
      }
    }
  }
}
```
What each field does:
- `npm`: the SDK adapter. For any OpenAI-compatible API, use `@ai-sdk/openai-compatible`. OpenCode loads the adapter on demand.
- `options.baseURL`: the base URL for the provider’s API. It should end at `/v1` or whatever version prefix the provider uses.
- `models`: the models you want available in OpenCode. The keys must match exactly what the provider’s API accepts in the `model` field of a completion request.
You can add as many custom providers as you want, each as a separate entry under `provider`.
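Because `opencode.jsonc` allows comments, plain `json.loads` will reject it, and a stray comma can silently break the whole config. One way to sanity-check the file before restarting OpenCode is to strip line comments and parse. This is a rough sketch, not a full JSONC parser: it ignores `/* */` block comments and would also strip a `//` that appears inside a string.

```python
import json
import re

def load_jsonc(text: str) -> dict:
    """Parse JSONC by stripping full-line // comments (rough approximation)."""
    stripped = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    return json.loads(stripped)

# In practice, read the real file:
#   text = Path("~/.config/opencode/opencode.jsonc").expanduser().read_text()
sample = """
{
  // custom providers live under "provider"
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {"baseURL": "https://api.haimaker.ai/v1"},
      "models": {"qwen/qwen3-coder": {}}
    }
  }
}
"""

config = load_jsonc(sample)
print(sorted(config["provider"]))  # ['haimaker']
```

If `load_jsonc` raises a `json.JSONDecodeError`, you have the syntax error that would otherwise fail silently at startup.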
Step 2: Add the API key
OpenCode stores credentials in a separate file so config can be checked into version control without leaking secrets. Add your key to ~/.local/share/opencode/auth.json:
```json
{
  "haimaker": {
    "type": "api",
    "key": "your-haimaker-api-key"
  }
}
```
The top-level key (`haimaker` in the example) must match the provider name from `opencode.jsonc`. If they don’t match, OpenCode will show the provider in `/models` but fail with an auth error on the first request.
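One way to catch a mismatch before it bites is a quick cross-check of the key sets in the two files. The helper below is hypothetical, not part of OpenCode; it just compares the provider names from each file after you’ve loaded them:

```python
def missing_credentials(config: dict, auth: dict) -> set:
    """Provider names declared in opencode.jsonc but absent from auth.json."""
    return set(config.get("provider", {})) - set(auth)

# Inline data for illustration; in practice, json.load() each file
# (after stripping comments from the .jsonc one).
config = {"provider": {"haimaker": {}, "ollama": {}}}
auth = {"haimaker": {"type": "api", "key": "your-haimaker-api-key"}}

print(missing_credentials(config, auth))  # {'ollama'}
```

A non-empty result is exactly the "shows up in /models, fails on first request" situation described above.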
Step 3: Restart and verify
OpenCode doesn’t hot-reload provider configs. Quit OpenCode completely and restart it. Then run:
/models
You should see your provider listed with the models you added. Switch to one with /model <provider>/<model-id>. For example: /model haimaker/minimax/minimax-m2.5.
Common errors and fixes
“Provider not found” when switching models
The provider entry isn’t being read. Three things to check:
- Is `opencode.jsonc` valid JSON with comments? A stray comma or unclosed bracket will silently break the whole config.
- Did you fully restart OpenCode after editing?
- Is the file at the exact path `~/.config/opencode/opencode.jsonc`? Not `.json`, not `~/.opencode/`.
“Authentication failed” on the first request
The `auth.json` entry doesn’t line up with the config. Check:
- The top-level key in `auth.json` exactly matches the provider name in `opencode.jsonc`.
- The `type` field is `"api"` (not `"oauth"` or something else).
- The `key` is the raw API key string, not wrapped in `Bearer ` or anything else.
Test your key with curl to rule out the key itself being wrong:
```shell
curl https://api.haimaker.ai/v1/models \
  -H "Authorization: Bearer your-haimaker-api-key"
```
If that returns a model list, the key is fine and the issue is in auth.json.
Model shows up but requests fail with “model not found”
The model ID in your config doesn’t match what the provider’s API expects. Custom providers pass the model ID through unchanged, so `qwen/qwen3-coder` in your config must match the exact string the API wants. Check the provider’s model list endpoint or docs.
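Since the failure mode is a config string drifting from the provider’s catalog, you can diff your configured IDs against the provider’s `GET /v1/models` response (the standard OpenAI-style list format). A sketch with a stubbed response:

```python
def unknown_models(configured, models_response):
    """Configured model IDs that the provider's /v1/models endpoint doesn't serve."""
    served = {m["id"] for m in models_response.get("data", [])}
    return [model_id for model_id in configured if model_id not in served]

# Stub of a /v1/models response in the OpenAI list format;
# in practice, fetch it from {baseURL}/models with your API key.
response = {"data": [{"id": "qwen/qwen3-coder"}, {"id": "minimax/minimax-m2.5"}]}

print(unknown_models(["qwen/qwen3-coder", "qwen3-coder:latest"], response))
# ['qwen3-coder:latest']
```

Anything the function returns will fail with “model not found” until you fix the ID in `opencode.jsonc`.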
Custom provider works but built-in providers stopped working
You probably overwrote the default config block. OpenCode merges your `opencode.jsonc` on top of its defaults, so you should only add new entries under `provider`, not replace the whole section.
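To see why adding entries is safe, here is a simplified sketch of deep-merge semantics. This is an assumption about how layered config generally behaves, not OpenCode’s actual implementation:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively layer override on top of base without mutating either."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"provider": {"anthropic": {"npm": "@ai-sdk/anthropic"}}}
user = {"provider": {"haimaker": {"npm": "@ai-sdk/openai-compatible"}}}

# Both providers survive because the "provider" dicts merge key-by-key.
print(sorted(deep_merge(defaults, user)["provider"]))  # ['anthropic', 'haimaker']
```

A new key under `provider` merges in alongside the defaults; replacing the value of `provider` wholesale would shadow every built-in entry.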
Why route through a gateway like haimaker
If you’re planning to add more than one custom provider, consider routing through a single gateway instead. haimaker.ai gives you access to MiniMax, Qwen, Claude, GPT, Gemini, Grok, and dozens of open-source models through one API key, one baseURL, and one provider entry. You save yourself the config sprawl of maintaining five different provider blocks and five different API keys.
The tradeoff is a small routing overhead (usually a few milliseconds) in exchange for a much simpler config. For most OpenCode users juggling multiple model providers, it’s worth it.
Full example: haimaker + Ollama
Here’s what a real setup looks like with a gateway and a local model provider together:
```jsonc
{
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.haimaker.ai/v1"
      },
      "models": {
        "anthropic/claude-sonnet-4-6": {},
        "openai/gpt-5.4-mini": {},
        "minimax/minimax-m2.5": {},
        "qwen/qwen3-coder": {}
      }
    },
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3-coder:30b": {},
        "gemma4:27b": {}
      }
    }
  }
}
```
And the matching `auth.json`:
```json
{
  "haimaker": {
    "type": "api",
    "key": "your-haimaker-api-key"
  },
  "ollama": {
    "type": "api",
    "key": "ollama"
  }
}
```
Ollama doesn’t actually check API keys, but OpenCode still expects an entry in `auth.json`. Use any non-empty string.
That’s it
Once the provider is registered and the key is set, OpenCode treats your custom models exactly like the built-in ones. Tool calling, streaming, context windows: all work the same. The configuration pattern is the same for any OpenAI-compatible API, so once you’ve done this once, adding the next provider takes about two minutes.