Current as of April 2026. OpenAI remains the standard for OpenCode users because of their predictable tool-calling behavior and the sheer range of context options. Whether you are running a quick local refactor or ingesting a massive monorepo, their API offers a tier that fits the budget without breaking the CLI’s logic flow.
The quick answer
| Model | Input / Output | Context | Best For |
|---|---|---|---|
| gpt-oss-20b | $0.03 / $0.11 | 131K | The Absolute Floor for Tool-Calling |
| gpt-oss-120b | $0.04 / $0.19 | 131K | The Cost-Efficient Logic Tier |
| GPT 5 Nano | $0.05 / $0.40 | 400K | The High-Volume Refactor King |
| gpt-oss-safeguard-20b | $0.08 / $0.30 | 131K | The Compliance-First Alternative |
| GPT 4.1 Nano | $0.10 / $0.40 | 1.0M | The Full-Repo Ingestion Tool |
| GPT 4o Mini | $0.15 / $0.60 | 128K | The Reliable Legacy Baseline |
| GPT-5.4 Nano | $0.20 / $1.25 | 400K | The Documentation Research Specialist |
| GPT 5 Mini | $0.25 / $2.00 | 400K | The Premium Logic Standard |
Start with GPT 5 Nano unless you have a specific reason to pick another. It offers the best utility-to-price ratio for developers. At $0.05 per million input tokens and a massive 128K output cap, it handles large-scale file rewrites that would truncate on cheaper models, while the 400K context window is enough for most project-wide searches.
gpt-oss-20b — The Absolute Floor for Tool-Calling
This is the cheapest entry point at $0.03/M input. It is strictly for simple scripts and basic CLI commands where you do not need deep architectural reasoning. Use it if you are optimizing for cost over correctness in trivial automation tasks.
gpt-oss-120b — The Cost-Efficient Logic Tier
For an extra $0.01/M on input compared to the 20b, this model provides significantly more stable reasoning for complex boolean logic. It is the better choice for local development loops where you need the model to understand branching logic without paying the GPT 5 premium.
GPT 5 Nano — The High-Volume Refactor King
The 128K max output is the standout feature here. While other models in this price range choke on 200-line file rewrites, GPT 5 Nano sustains long generations reliably. For anything involving vision or large-context work, it effectively renders gpt-oss-120b obsolete.
gpt-oss-safeguard-20b — The Compliance-First Alternative
This is nearly identical to the standard gpt-oss-20b but with higher latency and stricter output filters. Unless your enterprise environment mandates specific safety tuning that blocks the standard 20b, skip this model to avoid the 2.6x price hike on input tokens.
GPT 4.1 Nano — The Full-Repo Ingestion Tool
Pick this model specifically for its 1.0M context window. It is the only option in the family that can ingest an entire medium-sized codebase in a single prompt. The 33K output cap is small, so use it for analysis and search rather than generating massive new features.
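To get a feel for whether a codebase actually fits in a 1.0M-token window, a rough back-of-the-envelope check is enough. The sketch below uses the common ~4-characters-per-token heuristic (an assumption; real tokenizer ratios vary by language and code style):

```python
import os

CHARS_PER_TOKEN = 4          # rough heuristic; real tokenizers vary
CONTEXT_LIMIT = 1_000_000    # GPT 4.1 Nano's 1.0M-token window

def estimate_repo_tokens(root: str, exts=(".py", ".ts", ".go", ".md")) -> int:
    """Walk a source tree and estimate its total token count from file sizes."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                try:
                    total_chars += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

# Under this heuristic, a ~3 MB codebase is roughly 750K tokens,
# comfortably inside the 1.0M window.
print(3_000_000 // CHARS_PER_TOKEN <= CONTEXT_LIMIT)  # → True
```

By this estimate, most small-to-medium repos clear the bar with room to spare, which is exactly the niche this model fills.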
GPT 4o Mini — The Reliable Legacy Baseline
GPT 4o Mini is the safe bet for tool-calling reliability if you find the newer GPT 5 series is being too creative with function arguments. It has a smaller 128K context window and a lower 16K output cap, making it less versatile than the GPT 5 Nano for the same general price bracket.
GPT-5.4 Nano — The Documentation Research Specialist
The inclusion of web_search makes this the only model in the lineup capable of looking up recent API changes or library documentation not present in the training data. The $1.25/M output cost is steep, so reserve it for debugging issues with bleeding-edge dependencies.
GPT 5 Mini — The Premium Logic Standard
When you cannot afford a logic error in a production migration, use GPT 5 Mini. It shares the 400K context and 128K output of the Nano version but demonstrates higher consistency in complex reasoning tasks. It is five times more expensive on input than the Nano, so use it sparingly.
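To put the 5x input-price gap in concrete terms, here is a quick cost sketch. The prices come from the comparison table above; the monthly token volumes are made-up figures purely for illustration:

```python
# Per-million-token prices, taken from the comparison table above.
PRICES = {
    "gpt-5-nano": {"input": 0.05, "output": 0.40},
    "gpt-5-mini": {"input": 0.25, "output": 2.00},
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated spend in dollars for a given monthly token volume."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical heavy month: 200M tokens in, 20M out.
print(round(monthly_cost("gpt-5-nano", 200_000_000, 20_000_000), 2))  # → 18.0
print(round(monthly_cost("gpt-5-mini", 200_000_000, 20_000_000), 2))  # → 90.0
```

A $72/month spread at that volume is why the Mini makes sense as a targeted escalation, not a default.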
Setup in OpenCode
To configure OpenAI in OpenCode, edit `~/.config/opencode/opencode.jsonc` and add your chosen model under the `provider` key. Your API key goes in `~/.local/share/opencode/auth.json`. OpenCode handles the connection via `@ai-sdk/openai-compatible`, so leave the base URL at the standard OpenAI endpoint unless you are routing through a proxy.
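A minimal `opencode.jsonc` along those lines might look like this. The model id `gpt-5-nano` is illustrative; check `/models` for the exact ids your install exposes:

```jsonc
{
  // Default model for new sessions, in "provider/model-id" form.
  // "gpt-5-nano" is an assumed id -- confirm via /models.
  "model": "openai/gpt-5-nano",
  "provider": {
    "openai": {
      "options": {
        // Only needed when routing through a proxy; this is the default.
        "baseURL": "https://api.openai.com/v1"
      }
    }
  }
}
```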
Running through haimaker.ai
All OpenAI models are also available through haimaker.ai. Wire haimaker as a single OpenAI-compatible provider and you get OpenAI alongside every other frontier model:
```jsonc
{
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.haimaker.ai/v1"
      }
    }
  }
}
```
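With the provider registered, you can point your default model at haimaker-routed ids. A sketch, assuming haimaker mirrors OpenAI's model naming and that these ids are available on your account:

```jsonc
{
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "https://api.haimaker.ai/v1" },
      "models": {
        // Model ids are assumptions -- list whichever haimaker exposes to you.
        "gpt-5-nano": {},
        "gpt-5-mini": {}
      }
    }
  },
  // Make a haimaker-routed model the default for new sessions.
  "model": "haimaker/gpt-5-nano"
}
```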
Direct provider setup
OpenCode ships with a built-in preset for OpenAI. You do not need to configure a custom provider — just drop your API key into ~/.local/share/opencode/auth.json:
```json
{
  "openai": {
    "type": "api",
    "key": "your-openai-api-key"
  }
}
```
Restart OpenCode and OpenAI models appear under /models. For providers not in the built-in directory (or to hit them through a gateway like haimaker), see the custom provider guide.
Bottom line
For daily coding via the OpenCode CLI, GPT 5 Nano is the most logical choice, providing massive context and output limits at a price point that allows for constant use.
Use OpenAI in OpenCode with Haimaker
See our OpenCode custom provider guide. See our Haimaker + OpenCode setup.