Current as of April 2026. DeepSeek has become the primary alternative for developers who need high-performance coding assistance without the aggressive pricing of US-based labs. For OpenCode users, these models offer some of the most reliable tool-calling performance in the open-weights ecosystem, making them ideal for CLI-driven refactoring and automated boilerplate generation.

The quick answer

| Model | Input / Output (per M tokens) | Context | Best For |
| --- | --- | --- | --- |
| DeepSeek V3.1 | $0.15 / $0.75 | 33K | The Budget Refactorer |
| DeepSeek V3.2 | $0.26 / $0.38 | 164K | The Long-Context Workhorse |
| DeepSeek V3 | $0.32 / $0.89 | 164K | The Budget Refactorer |
| DeepSeek R1 | $0.70 / $2.50 | 64K | The Logic Specialist |

Start with DeepSeek V3.2 unless you have a specific reason to pick another. It is the most balanced model in the lineup, offering a massive 164K context window and an identical 164K output cap for just $0.26/M input. This makes it the only viable choice for repo-wide analysis where V3.1’s 33K limit or V3’s 8K output cap would fail.

DeepSeek V3.1 — The Budget Refactorer

At $0.15/M input, this is the cheapest way to run OpenCode operations. It is best suited to small scripts or single-file edits, because a full project tree or several large classes will quickly overflow the 33K context window and get truncated.

DeepSeek V3.2 — The Long-Context Workhorse

This is the definitive choice for OpenCode. The 164K context and output caps allow for massive code generation tasks that would otherwise be cut off. It is significantly more capable than V3.1 for complex architectural tasks while remaining cheaper than the R1 reasoning model.

DeepSeek V3 — The Budget Refactorer

At $0.32/M input, V3 shares V3.2's 164K context window but is held back by an 8K output cap. It handles repo-wide reading, review, and analysis well, since those tasks keep responses short, but large generation jobs will be cut off mid-output; reach for V3.2 when you need long edits written back out.

DeepSeek R1 — The Logic Specialist

Use R1 when you are stuck on a complex algorithmic bug or a gnarly regex that V3.2 cannot solve. It is the most expensive model in the lineup at $0.70/M input and $2.50/M output, and the 8K output cap limits its use for large-scale refactoring, but its reasoning chain makes it the most reliable for difficult tool-calling sequences.

Setup in OpenCode

To integrate DeepSeek, add a new entry under the provider object in ~/.config/opencode/opencode.jsonc using the @ai-sdk/openai-compatible package, set the base URL to https://api.deepseek.com, and store your API key in ~/.local/share/opencode/auth.json under the matching provider name. Because OpenCode routes such providers through @ai-sdk/openai-compatible, function calling works out of the box for file operations.
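The entry described above might look like the following. This is a sketch, not a definitive config: the exact schema can vary by OpenCode version, and the model IDs deepseek-chat and deepseek-reasoner are the names DeepSeek's own API publishes, so verify them against current docs.

```jsonc
// ~/.config/opencode/opencode.jsonc
{
  "provider": {
    "deepseek": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        // DeepSeek's OpenAI-compatible endpoint
        "baseURL": "https://api.deepseek.com"
      },
      "models": {
        // DeepSeek exposes its V3-family chat model as "deepseek-chat"
        // and R1 as "deepseek-reasoner" — confirm against DeepSeek's docs
        "deepseek-chat": {},
        "deepseek-reasoner": {}
      }
    }
  }
}
```

The matching key in ~/.local/share/opencode/auth.json must use the same provider name ("deepseek") for OpenCode to pair the credential with this entry.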

Running through haimaker.ai

All DeepSeek models are also available through haimaker.ai. Wire haimaker as a single OpenAI-compatible provider and you get DeepSeek alongside every other frontier model:

{
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.haimaker.ai/v1"
      }
    }
  }
}
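With the gateway wired up, individual DeepSeek models can be listed under the same provider entry so they show up in /models. A sketch, assuming OpenCode's per-provider models map; the deepseek/... IDs shown are hypothetical gateway names, so check haimaker's model catalog for the real ones:

```jsonc
{
  "provider": {
    "haimaker": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.haimaker.ai/v1"
      },
      "models": {
        // Hypothetical gateway model IDs — replace with the IDs
        // haimaker actually lists for DeepSeek
        "deepseek/deepseek-v3.2": {},
        "deepseek/deepseek-r1": {}
      }
    }
  }
}
```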

Direct provider setup

OpenCode ships with a built-in preset for DeepSeek. You do not need to configure a custom provider — just drop your API key into ~/.local/share/opencode/auth.json:

{
  "deepseek": {
    "type": "api",
    "key": "your-deepseek-api-key"
  }
}

Restart OpenCode and DeepSeek models appear under /models. For providers not in the built-in directory (or to hit them through a gateway like haimaker), see the custom provider guide.

Bottom line

For daily CLI coding, DeepSeek V3.2 provides the best context-to-price ratio available. Keep R1 in your config as a specialized tool for solving difficult logic puzzles that standard chat models fail to grasp.

See our OpenCode custom provider guide. See our Haimaker + OpenCode setup.