Deepseek V3.2 is about 2.7× cheaper on input tokens than GPT 5.4 Mini ($0.28 vs $0.75 per 1M), while GPT 5.4 Mini offers the longer context window (272K vs 164K tokens). Both models are accessible via the haimaker.ai OpenAI-compatible API at https://api.haimaker.ai/v1.
| Spec | GPT 5.4 Mini | Deepseek V3.2 |
|---|---|---|
| Provider | OpenAI | DeepSeek |
| Full ID | openai/gpt-5.4-mini | deepseek/deepseek-v3.2 |
| Mode | chat | chat |
| Parameters | N/A | N/A |
| Context window | 272K | 164K |
| Max output | 128K | 164K |
| Input price (per 1M) | $0.75 | $0.28 |
| Output price (per 1M) | $4.50 | $0.40 |
| License | N/A | N/A |
| Architecture | N/A | N/A |
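The price gap in the table above is easy to quantify for a concrete workload. This is a minimal sketch, with the per-1M-token prices copied from the table; the token counts are made-up example values, not benchmarks:

```python
# Per-1M-token prices (USD) from the spec table above.
PRICES = {
    "openai/gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "deepseek/deepseek-v3.2": {"input": 0.28, "output": 0.40},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request: tokens × price per token."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example workload: 100K input tokens, 10K output tokens.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 100_000, 10_000):.4f}")
```

For this example workload the estimates come out to $0.12 for GPT 5.4 Mini versus $0.032 for Deepseek V3.2; because the output-price gap ($4.50 vs $0.40) is much larger than the input-price gap, output-heavy workloads favor Deepseek V3.2 even more strongly.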

| Feature | GPT 5.4 Mini | Deepseek V3.2 |
|---|---|---|
| Function Calling | ✓ Supported | ✓ Supported |
| Reasoning | ✓ Supported | ✓ Supported |
| Vision | ✓ Supported | — |
| Web Search | ✓ Supported | — |
Deepseek V3.2 is cheaper at $0.28 per 1M input tokens vs $0.75 for GPT 5.4 Mini; the gap is even wider on output tokens ($0.40 vs $4.50 per 1M).
GPT 5.4 Mini accepts up to 272K input tokens vs 164K for Deepseek V3.2.
Yes. haimaker.ai exposes both openai/gpt-5.4-mini and deepseek/deepseek-v3.2 on the same OpenAI-compatible endpoint at https://api.haimaker.ai/v1, so you can switch between them by changing the model parameter in your request.
One OpenAI-compatible endpoint; switch models by changing the `model` parameter in the request.
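Switching models by changing only the `model` parameter can be sketched as follows. This builds the standard OpenAI-compatible chat-completion payload with Python's stdlib; sending it would be a POST to https://api.haimaker.ai/v1/chat/completions with a standard `Authorization: Bearer <key>` header (the auth scheme is assumed here, not confirmed by this page):

```python
import json

API_URL = "https://api.haimaker.ai/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion request body.

    The body shape is identical for both models; only "model" differs.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same prompt, two models: route to whichever fits the task.
cheap = build_request("deepseek/deepseek-v3.2", "Summarize this document.")
long_ctx = build_request("openai/gpt-5.4-mini", "Summarize this document.")

print(json.dumps(cheap, indent=2))
```

A common pattern is to default to the cheaper model and fall back to the larger-context one only when the prompt exceeds 164K tokens.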