Current as of April 2026. xAI’s Grok family is built for developers who need massive context windows and aggressive pricing. For OpenCode users, these models offer a high-performance alternative to the standard OpenAI or Anthropic stacks, particularly when dealing with large codebase indexing and high-volume tool calls.
The quick answer
| Model | Input / Output (per 1M tokens) | Context | Best For |
|---|---|---|---|
| Grok 4.1 Fast | $0.20 / $0.50 | 2M | The 2M Context Standard |
| Grok 4 Fast | $0.20 / $0.50 | 2M | Redundant Legacy Alternative |
| Grok Code Fast | $0.20 / $1.50 | 256K | High-Volume Generation |
| Grok 3 Mini | $0.30 / $0.50 | 131K | Budget Logic Specialist |
| Grok 3 Mini Fast | $0.60 / $4.00 | 131K | Budget Logic Specialist |
| Grok 2 | $2.00 / $10.00 | 131K | Legacy Stability |
| Grok 2 Vision | $2.00 / $10.00 | 33K | Legacy Stability |
| Grok 4.20 | $2.00 / $6.00 | 2M | Premium Architectural Reasoning |
Start with Grok 4.1 Fast unless you have a specific reason to pick another. It provides the best price-to-performance ratio: $0.20/M input with a 2M token context window, so even a prompt that fills the entire window costs roughly 2 × $0.20 = $0.40 in input tokens, making it the most cost-effective way to ingest an entire repository into OpenCode.
Grok 4.1 Fast — The 2M Context Standard
This is the current sweet spot for OpenCode users. You get a 2M token window for $0.20/M input, which is essential for mapping large projects. Tool calling is snappy, and the reasoning capabilities handle most complex refactors without the premium price of the 4.20 tier.
Grok 4 Fast — Redundant Legacy Alternative
Grok 4 Fast is nearly identical to Grok 4.1 Fast in pricing and context specs. Use this only if 4.1 Fast is hitting rate limits or if you need a fallback version for a specific tool-calling behavior.
Grok Code Fast — High-Volume Generation
While it lacks the 2M context of the 4-series, it offers a massive 256K output limit compared to the 30K cap on Grok 4.1 Fast. Pick this for generating huge boilerplate files or performing massive file-wide migrations where the output length is the primary constraint.
Grok 3 Mini — Budget Logic Specialist
At $0.30/M input and 131K context, this is a solid choice for quick unit tests or documentation lookups. It handles simple reasoning tasks with lower latency than the 4-series models while remaining cheaper than Grok 2.
Grok 3 Mini Fast — Budget Logic Specialist
At $0.60/M input and $4.00/M output, this is the pricier, lower-latency variant of Grok 3 Mini. It shares the 131K context window and the same reasoning profile, so pick it over the base Mini only when response speed matters more than cost.
Grok 2 — Legacy Stability
Grok 2 is the older, more expensive predecessor ($2/M input). It should only be used if you find the newer 4-series models are hallucinating tool schemas or failing on specific OpenCode CLI commands that require a more established model version.
Grok 2 Vision — Legacy Stability
Grok 2 Vision is the image-capable variant of Grok 2 at the same $2/M input price, but with a much smaller 33K context window. Reach for it only when a legacy workflow needs to pass screenshots or diagrams to the model; for text-only work, the newer tiers are both cheaper and larger.
Grok 4.20 — Premium Architectural Reasoning
This is the high-tier option at $2/M input and $6/M output. It maintains the 2M context window but provides more robust reasoning than the ‘Fast’ variants, making it the preferred choice for complex architectural changes and deep logic bugs.
Setup in OpenCode
To use Grok with OpenCode, add your xAI API key to ~/.local/share/opencode/auth.json. Configure the provider in ~/.config/opencode/opencode.jsonc using the @ai-sdk/openai-compatible adapter with the base URL set to https://api.x.ai/v1. Ensure your model IDs match the xai/ prefix.
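The steps above map to a config like the following. A minimal sketch of ~/.config/opencode/opencode.jsonc; the model ID `grok-4.1-fast` is illustrative, so verify the exact IDs against xAI's current model list:

```json
{
  "provider": {
    "xai": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.x.ai/v1"
      },
      "models": {
        "grok-4.1-fast": {}
      }
    }
  }
}
```

Because the provider key is `xai`, entries under `models` are addressed as `xai/grok-4.1-fast` inside OpenCode, matching the `xai/` prefix mentioned above.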
Running through haimaker.ai
All Grok models are also available through haimaker.ai. Wire haimaker as a single OpenAI-compatible provider and you get Grok alongside every other frontier model:
```json
{
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.haimaker.ai/v1"
      }
    }
  }
}
```
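To make specific Grok models show up in OpenCode's model picker, you can enumerate them under the provider. A sketch under the assumption that haimaker forwards xAI model IDs unchanged (confirm the exact IDs against haimaker's model catalog):

```json
{
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.haimaker.ai/v1"
      },
      "models": {
        "xai/grok-4.1-fast": {},
        "xai/grok-code-fast": {}
      }
    }
  }
}
```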
Direct provider setup
OpenCode ships with a built-in preset for xAI. You do not need to configure a custom provider — just drop your API key into ~/.local/share/opencode/auth.json:
```json
{
  "xai": {
    "type": "api",
    "key": "your-xai-api-key"
  }
}
```
Restart OpenCode and xAI models appear under /models. For providers not in the built-in directory (or to hit them through a gateway like haimaker), see the custom provider guide.
Bottom line
Grok is the best choice for OpenCode users who need to process millions of tokens of source code on a tight budget, provided they stick to the 4.1 Fast model.
See our OpenCode custom provider guide. See our Haimaker + OpenCode setup.