Current as of March 2026. GPT-5 is OpenAI’s top general-purpose model right now — native reasoning, 272K context, and a 128K output ceiling that’s genuinely useful for code generation. The $10/M output cost is the thing most teams choke on.

Specs

Provider: OpenAI
Input cost: $1.25 / M tokens
Output cost: $10 / M tokens
Context window: 272K tokens
Max output: 128K tokens
Parameters: N/A
Features: function_calling, vision, reasoning

What it’s good at

Reasoning Depth

The built-in reasoning cuts down on hallucinations for complex multi-step workflows. Tasks that cause GPT-4o to hallucinate steps — things like long dependency chains or cross-file refactors — tend to hold up better here.

Large Output Window

128K max output is rare. You can generate entire project structures, long specs, or comprehensive test suites in one call instead of chunking and reassembling.

Where it falls short

Latency

Reasoning adds real wait time before the first token appears. Fine for batch workflows, painful for anything interactive.

Output Pricing

The 8:1 output-to-input price ratio ($10 vs $1.25) bites hard if your agents generate verbose responses. Profile your actual output token usage before committing.
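To see how the 8:1 ratio plays out, here is a minimal cost sketch using the prices from the spec table above; the token counts are hypothetical examples, not measurements:

```python
# Rough per-call cost model for GPT-5 at the listed prices.
INPUT_PRICE_PER_M = 1.25    # USD per million input tokens
OUTPUT_PRICE_PER_M = 10.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single call."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A verbose agent turn: 20K tokens in, 8K tokens out.
# The 8K output tokens cost $0.08 — more than 3x the $0.025 input cost.
print(f"${request_cost(20_000, 8_000):.4f}")  # prints "$0.1050"
```

Run this against your real traffic logs: if output tokens are a large fraction of input tokens, output spend will dominate the bill.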

Best use cases with OpenClaw

  • Agentic Coding — A 272K context window is large enough to feed in substantial portions of a real codebase for refactoring or review.
  • Complex Logic Tasks — Deep planning, constraint satisfaction, multi-step derivations. This is where the reasoning pays for itself.
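Before feeding a codebase into the 272K window, it helps to estimate whether it fits. A quick sketch using the common ~4-characters-per-token heuristic (an approximation, not the real tokenizer — use a proper tokenizer like tiktoken for exact counts):

```python
# Feasibility check: will these file contents fit in GPT-5's context window?
CONTEXT_WINDOW = 272_000
CHARS_PER_TOKEN = 4  # rough heuristic for English text and code

def estimated_tokens(texts: list[str]) -> int:
    """Approximate token count from total character length."""
    return sum(len(t) for t in texts) // CHARS_PER_TOKEN

def fits_in_context(texts: list[str], reserve: int = 8_000) -> bool:
    """Leave headroom for the prompt scaffold and the model's reply."""
    return estimated_tokens(texts) + reserve <= CONTEXT_WINDOW

files = ["x = 1\n" * 1000]  # stand-in for real file contents
print(fits_in_context(files))  # True: ~1.5K tokens plus headroom fits easily
```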

Not ideal for

  • Simple Tasks — Running basic classification or summarization through GPT-5 is burning money. Use GPT-4o-mini.
  • Real-time Interfaces — The reasoning delay makes it a bad fit for anything where users expect fast responses.

Run it through Haimaker

Skip juggling API keys. One Haimaker key gives you access to every model on the platform. Tell OpenClaw:

Add Haimaker as a custom provider to my OpenClaw config. Use these details:

- Provider name: haimaker
- Base URL: https://api.haimaker.ai/v1
- API key: [PASTE YOUR HAIMAKER API KEY HERE]
- API type: openai-completions

Add the auto-router model:
- haimaker/auto (reasoning: false, context: 128000, max tokens: 32000)

Create an alias "auto" for easy switching. Apply the config when done.
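Applied, a prompt like the one above should produce a provider entry roughly along these lines. This is an illustrative sketch only — the actual OpenClaw config schema and field names may differ:

```json
{
  "providers": {
    "haimaker": {
      "baseUrl": "https://api.haimaker.ai/v1",
      "apiKey": "YOUR_HAIMAKER_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "haimaker/auto",
          "reasoning": false,
          "contextWindow": 128000,
          "maxTokens": 32000
        }
      ]
    }
  },
  "aliases": { "auto": "haimaker/auto" }
}
```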

Or skip model selection entirely — Haimaker’s auto-router picks the best model for each task so you don’t have to.

OpenClaw setup

Set OPENAI_API_KEY and OpenClaw handles the rest.

export OPENAI_API_KEY="your-key-here"

That’s it. OpenClaw picks up OpenAI models automatically.

How it compares

  • vs Claude 3.5 Sonnet — Claude is faster and better for creative tasks; GPT-5 has a larger context window and stronger reasoning for engineering problems.
  • vs GPT-4o — GPT-4o is faster and cheaper for high-throughput work. Use GPT-5 when the task actually needs to think.

Bottom line

GPT-5 earns its price tag on genuinely hard reasoning tasks. For everything else, the math doesn’t work out.

Try GPT-5 on Haimaker


For setup instructions, see our API key guide. For all available models, see the complete models guide.