Current as of March 2026. GPT-5.4 Pro is the most expensive model in this guide by a significant margin: $30/M input, $180/M output. That’s not a typo. The 1.1M context and 128K output are the same as base 5.4, but the reasoning capability is substantially stronger. This is a model for problems where being wrong is expensive.
Specs
| Spec | Value |
| --- | --- |
| Provider | OpenAI |
| Input cost | $30 / M tokens |
| Output cost | $180 / M tokens |
| Context window | 1.1M tokens |
| Max output | 128K tokens |
| Parameters | N/A |
| Features | function_calling, vision, reasoning, web_search |
What it’s good at
Reasoning Quality
The ceiling on multi-step logic is genuinely higher here than on base 5.4. For architectural decisions, complex dependency analysis, or finding bugs in subtle code, that matters.
1.1M Context with Web Search
Combine deep context with live web access: the natural fit is research agents that need to synthesize large internal documents alongside current external information.
Instruction Following at Long Context
It stays on task further into a long prompt than most models. Useful when you have elaborate system prompts and dense context.
Where it falls short
The Price
$180/M output is not for casual use. A single large generation task can cost dollars. Run the math before you build anything at scale on this.
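To make "run the math" concrete, here is a back-of-envelope cost estimator using the rates from the specs table. The token counts in the example are illustrative, not a benchmark:

```python
# Back-of-envelope cost estimate at GPT-5.4 Pro's listed rates.
INPUT_PER_M = 30.0    # $ per 1M input tokens
OUTPUT_PER_M = 180.0  # $ per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# One large job: 200K tokens of context in, 30K tokens of analysis out.
print(f"${cost_usd(200_000, 30_000):.2f}")  # $6.00 in + $5.40 out = $11.40
```

A single deep-analysis call on a big codebase lands in the double-digit-dollar range, so multiply by your expected request volume before committing.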
Latency
Heavy reasoning means slow time-to-first-token (TTFT). Not suitable for interactive applications.
Best use cases with OpenClaw
- High-Stakes Refactoring — When a wrong cross-file dependency change could break production and you need the model to catch it.
- Complex Data Synthesis — Contradictions in large datasets, ambiguous requirements, multi-source research — this is where the reasoning quality earns its price.
Not ideal for
- Anything High-Volume — $180/M output at any meaningful scale is prohibitive. Do the math.
- Real-time UI — Reasoning overhead makes it too slow for interactive use cases.
Run it through Haimaker
Skip juggling API keys. One Haimaker key gives you access to every model on the platform. Tell OpenClaw:
Add Haimaker as a custom provider to my OpenClaw config. Use these details:
- Provider name: haimaker
- Base URL: https://api.haimaker.ai/v1
- API key: [PASTE YOUR HAIMAKER API KEY HERE]
- API type: openai-completions
Add the auto-router model:
- haimaker/auto (reasoning: false, context: 128000, max tokens: 32000)
Create an alias "auto" for easy switching. Apply the config when done.
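If you prefer to edit the config by hand instead of prompting OpenClaw, the resulting provider entry might look roughly like this. The exact schema and field names depend on your OpenClaw version, so treat this as an illustrative sketch of the details above, not a canonical config:

```json
{
  "providers": {
    "haimaker": {
      "baseUrl": "https://api.haimaker.ai/v1",
      "apiKey": "[PASTE YOUR HAIMAKER API KEY HERE]",
      "apiType": "openai-completions",
      "models": [
        {
          "id": "haimaker/auto",
          "reasoning": false,
          "contextWindow": 128000,
          "maxTokens": 32000
        }
      ]
    }
  },
  "aliases": { "auto": "haimaker/auto" }
}
```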
Or skip model selection entirely — Haimaker’s auto-router picks the best model for each task so you don’t have to.
OpenClaw setup
OpenClaw includes native support for this model. Export your OPENAI_API_KEY to your environment and the framework handles the rest automatically.
export OPENAI_API_KEY="your-key-here"
That’s it. OpenClaw picks up OpenAI models automatically.
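If you want to sanity-check your key outside OpenClaw, you can build a chat-completions request by hand. A stdlib-only sketch, assuming the model id is `gpt-5.4-pro` (confirm the exact string against OpenAI's published model list):

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"
MODEL = "gpt-5.4-pro"  # assumed id -- verify against OpenAI's model list

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble a chat-completions request; does not send anything."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )
```

Sending it is `urllib.request.urlopen(build_request("..."))`; in practice you would let OpenClaw (or the official SDK) handle retries and streaming for you.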
How it compares
- vs GPT-5.4 — Base 5.4 costs $15/M output. Use Pro only when you’ve identified that reasoning quality, not context size, is the bottleneck.
- vs Claude 3.5 Sonnet — Sonnet costs $3/$15 and writes excellent code. For pure reasoning depth on ambiguous problems, 5.4 Pro wins.
Bottom line
Reserve this for problems where accuracy is worth more than cost. For everything else, base 5.4 or 5.2 is more sensible.
For setup instructions, see our API key guide. For all available models, see the complete models guide.