Current as of March 2026. O1 is OpenAI’s reasoning model: before it writes a word of output, it works through the problem internally. That hidden reasoning phase is why it’s slower and more expensive than chat models — and also why it handles problems that make GPT-4o stumble. At $15/$60 per million tokens, it’s not for casual use.
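To make those rates concrete, here is a quick cost sketch. The token counts are illustrative, not benchmarks:

```python
# Per-call cost at O1's published rates: $15/M input, $60/M output.
INPUT_RATE = 15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 60 / 1_000_000  # dollars per output token

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hefty request: 150K tokens of context in, 20K tokens of analysis out.
print(round(call_cost(150_000, 20_000), 2))  # → 3.45
```

One large call costs more than many teams spend on a whole day of GPT-4o-mini traffic, which is the point of the "not for casual use" warning.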
Specs
| Spec | Value |
| --- | --- |
| Provider | OpenAI |
| Input cost | $15 / M tokens |
| Output cost | $60 / M tokens |
| Context window | 200K tokens |
| Max output | 100K tokens |
| Parameters | Not disclosed |
| Features | function_calling, vision, reasoning |
What it’s good at
Hard Reasoning Problems
Multi-step proofs, symbolic math, complex logic chains — this is what it’s built for. If GPT-4o keeps getting the answer wrong, O1 often gets it right.
Long Output
100K max output tokens. You can generate substantial code or documentation in a single pass.
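A sketch of what a long-output request body can look like. OpenAI's reasoning models take `max_completion_tokens` rather than the older `max_tokens` field; the payload is shown as a plain dict so nothing is sent over the wire:

```python
# Request body for a long single-pass generation with O1.
# Hidden reasoning tokens are budgeted out of this same limit,
# so leave headroom above the visible output you expect.
payload = {
    "model": "o1",
    "messages": [
        {"role": "user", "content": "Generate the full module with docstrings."}
    ],
    "max_completion_tokens": 100_000,  # O1's documented output ceiling
}
```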
Where it falls short
Latency
The internal reasoning phase can take seconds to minutes before the first output token. Users watching a blank screen will assume something broke.
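Because the first token can be minutes away, surface progress to the user instead of blocking silently. A minimal timing sketch, with `slow_model_call` as a stand-in for the real API call:

```python
import time

def timed_call(fn, *args, notify_after=5.0):
    """Run fn and return (result, elapsed_seconds).
    This only measures after the fact; a real UI would fire a
    'still thinking...' notice on a timer once notify_after passes."""
    start = time.monotonic()
    result = fn(*args)
    elapsed = time.monotonic() - start
    if elapsed > notify_after:
        print(f"call took {elapsed:.1f}s, consider a progress indicator")
    return result, elapsed

def slow_model_call(prompt):
    time.sleep(0.1)  # stand-in for O1's hidden reasoning phase
    return f"answer to: {prompt}"

result, elapsed = timed_call(slow_model_call, "prove it", notify_after=0.05)
```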
Cost
$60 per million output tokens. High-volume agent loops get expensive fast. Be selective about what you send here.
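Back-of-envelope arithmetic for an agent loop, with illustrative per-iteration token counts:

```python
OUTPUT_RATE = 60 / 1_000_000  # dollars per output token

def loop_output_cost(iterations: int, output_tokens_per_step: int) -> float:
    """Output-side cost of an agent loop; ignores input tokens,
    which also compound as context accumulates across steps."""
    return iterations * output_tokens_per_step * OUTPUT_RATE

# 200 agent steps averaging 2K output tokens each:
print(round(loop_output_cost(200, 2_000), 2))  # → 24.0
```

And that is output only: each step also re-sends the growing conversation as input, so real loop costs run higher.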
Best use cases with OpenClaw
- Architectural Refactoring — Feed a 200K context of codebase and let it reason through dependency changes. The reasoning quality justifies the cost on hard problems.
- Scientific and Mathematical Analysis — Dense formulas, logical inconsistencies in research papers, proofs. This is where O1 earns its price.
Not ideal for
- Basic Summarization — Wasteful. GPT-4o-mini handles this for pennies.
- User-facing Chat — The latency alone will kill the experience.
Run it through Haimaker
Skip juggling API keys. One Haimaker key gives you access to every model on the platform. Tell OpenClaw:
Add Haimaker as a custom provider to my OpenClaw config. Use these details:
- Provider name: haimaker
- Base URL: https://api.haimaker.ai/v1
- API key: [PASTE YOUR HAIMAKER API KEY HERE]
- API type: openai-completions
Add the auto-router model:
- haimaker/auto (reasoning: false, context: 128000, max tokens: 32000)
Create an alias "auto" for easy switching. Apply the config when done.
Or skip model selection entirely — Haimaker’s auto-router picks the best model for each task so you don’t have to.
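If you would rather edit the config by hand, the resulting provider entry might look roughly like this. The field names here are a guess based on the details above; OpenClaw's actual schema may differ:

```json
{
  "providers": {
    "haimaker": {
      "baseUrl": "https://api.haimaker.ai/v1",
      "apiKey": "YOUR_HAIMAKER_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "haimaker/auto",
          "reasoning": false,
          "contextWindow": 128000,
          "maxTokens": 32000
        }
      ]
    }
  },
  "aliases": { "auto": "haimaker/auto" }
}
```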
OpenClaw setup
OpenClaw handles O1 natively through the OpenAI provider. Export your OPENAI_API_KEY and set the model ID to openai/o1 in your agent settings.
```shell
export OPENAI_API_KEY="your-key-here"
```
That’s it. OpenClaw picks up OpenAI models automatically.
How it compares
- vs Claude 3.5 Sonnet — Sonnet is much faster and better for everyday coding. O1 wins when the problem requires genuine reasoning depth, not just competent code generation.
- vs DeepSeek-R1 — R1 comes close on reasoning benchmarks at a fraction of the price. If you can tolerate less mature API infrastructure, it's worth testing first.
Bottom line
O1 is a specialized reasoning engine. Use it for the hard problems — debugging subtle logic, architectural decisions, math. Route everything else to cheaper models.
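That routing rule can be sketched as a trivial dispatcher. The task labels and the cheap fallback model are illustrative:

```python
# Route only genuinely hard tasks to O1; everything else goes cheap.
HARD_TASKS = {"proof", "architecture", "debug-logic", "symbolic-math"}

def pick_model(task_type: str) -> str:
    return "openai/o1" if task_type in HARD_TASKS else "openai/gpt-4o-mini"

print(pick_model("proof"))          # → openai/o1
print(pick_model("summarization"))  # → openai/gpt-4o-mini
```

In practice the classification step itself can run on the cheap model, so the expensive path is only taken when a cheap check says the task warrants it.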
For setup instructions, see our API key guide. For all available models, see the complete models guide.