Current as of March 2026. GPT-5 Nano is OpenAI’s cheapest option right now — $0.05/M input with a 272K context window and 128K output. The specs look almost too good for the price, and in some ways they are, but for specific tasks it genuinely earns its place.

Specs

Provider:       OpenAI
Input cost:     $0.05 / M tokens
Output cost:    $0.40 / M tokens
Context window: 272K tokens
Max output:     128K tokens
Parameters:     N/A
Features:       function_calling, vision, reasoning

What it’s good at

Price

$0.05/M input is cheap enough to throw at problems you’d normally skip due to cost. High-volume batch jobs, log scanning, bulk tagging — the economics work out at this tier.
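To make the economics concrete, here is a quick back-of-the-envelope cost sketch using the prices from the spec table. The batch-job numbers (50,000 log entries, token counts per entry) are hypothetical, chosen only to illustrate the math.

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_per_m: float = 0.05, output_per_m: float = 0.40) -> float:
    """Estimate a GPT-5 Nano job's cost from the per-million-token prices above."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# Hypothetical batch job: tag 50,000 log entries,
# averaging ~400 input and ~20 output tokens each.
total = estimate_cost_usd(input_tokens=50_000 * 400, output_tokens=50_000 * 20)
print(f"${total:.2f}")  # → $1.40 for the whole batch
```

At $1.40 for 50,000 classifications, per-item cost is effectively a rounding error, which is the whole argument for this tier.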

Output Buffer

128K max output at this price is legitimately unusual. Long-form generation tasks that require expensive models elsewhere can sometimes run here instead.

Basic Function Calling

Handles structured tool calls well enough for straightforward API integrations without the overhead of heavier models.
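For reference, a "straightforward API integration" here means tool definitions like the sketch below. The schema shape follows OpenAI's Chat Completions `tools` format; the function name and fields (`get_order_status`, `order_id`) are hypothetical examples, not part of any real API.

```python
# A minimal OpenAI-style tool definition. The tool itself is hypothetical;
# the surrounding structure matches the Chat Completions "tools" format.
order_status_tool = {
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "Internal order identifier.",
                },
            },
            "required": ["order_id"],
        },
    },
}

# Passed as `tools=[order_status_tool]` in a chat.completions.create(...) call.
```

Flat schemas with a handful of required string fields are exactly where small models stay reliable; deeply nested or conditional schemas are where they start to wobble.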

Where it falls short

Reasoning Quality

The reasoning feature is scaled down significantly. Multi-step logical deduction — the kind the o-series handles well — falls apart here. Don’t expect GPT-5-level thinking.

Rate Limiting

OpenAI often throttles the Nano tier more aggressively than larger models. During heavy agent bursts, HTTP 429 errors are a real operational concern worth planning for.
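The standard mitigation is exponential backoff with jitter. A minimal sketch, using a stand-in exception class rather than any particular SDK's error type:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the provider SDK's HTTP 429 error type."""

def call_with_backoff(request_fn, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `request_fn` on rate-limit errors with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the 429 to the caller
            # 1s, 2s, 4s, ... plus jitter so parallel agents don't retry in lockstep
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

Wrap your actual API call in a zero-argument callable (e.g. a `lambda`) and pass it as `request_fn`; the jitter matters most when many agents hit the limit at once.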

Best use cases with OpenClaw

  • Document Routing — Scan large amounts of text cheaply to decide which specialized agent handles a task. The 272K window means you can fit substantial inputs.
  • Simple Function Calling — Works reliably for basic API integrations where the schema is predictable and the logic isn’t complex.
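The document-routing pattern above can be sketched as a one-word classification call followed by a dispatch table. Everything here is illustrative, the labels, the agent names, and the prompt wording; the model call itself is left out so the shape of the pattern is clear.

```python
# Hypothetical routing step: ask the cheap model for a one-word label,
# then hand the task to a specialized agent. Names are illustrative only.
AGENTS = {"billing": "billing-agent", "bug": "code-agent", "other": "triage-agent"}

def routing_prompt(document: str) -> list:
    """Build the messages for a one-word classification call."""
    return [
        {"role": "system",
         "content": "Classify the document as exactly one of: billing, bug, other."},
        {"role": "user", "content": document},
    ]

def route(model_reply: str) -> str:
    """Map the model's label to an agent, falling back to triage on anything odd."""
    label = model_reply.strip().lower()
    return AGENTS.get(label, AGENTS["other"])
```

The fallback on unrecognized labels is the important part: small models occasionally return a sentence instead of a label, and a router that crashes on that is worse than one that over-triages.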

Not ideal for

  • Cross-File Code Analysis — It loses the thread on complex inter-dependency tracking across large codebases.
  • Mathematical Reasoning — The model hallucinates logic steps on anything beyond straightforward arithmetic. Don’t trust it with proofs.

Run it through Haimaker

Skip juggling API keys. One Haimaker key gives you access to every model on the platform. Tell OpenClaw:

Add Haimaker as a custom provider to my OpenClaw config. Use these details:

- Provider name: haimaker
- Base URL: https://api.haimaker.ai/v1
- API key: [PASTE YOUR HAIMAKER API KEY HERE]
- API type: openai-completions

Add the auto-router model:
- haimaker/auto (reasoning: false, context: 128000, max tokens: 32000)

Create an alias "auto" for easy switching. Apply the config when done.

Or skip model selection entirely — Haimaker’s auto-router picks the best model for each task so you don’t have to.

OpenClaw setup

Export OPENAI_API_KEY and pass openai/gpt-5-nano as the model ID.

export OPENAI_API_KEY="your-key-here"

That’s it. OpenClaw picks up OpenAI models automatically.
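If you want to hit the model directly rather than through OpenClaw, a minimal sketch with the official `openai` Python package looks like this. Note the model ID is `gpt-5-nano` against the OpenAI API itself (the `openai/` prefix is OpenClaw's provider namespacing); the prompt content is just an example.

```python
# Minimal request sketch using the official `openai` Python package.
# The client reads OPENAI_API_KEY from the environment, matching the export above.
request = {
    "model": "gpt-5-nano",
    "messages": [
        {"role": "user", "content": "Tag this log line: connection reset by peer"},
    ],
}

# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```

The call is left commented out since it needs a live key; the dict shows everything the request actually requires at this tier.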

How it compares

  • vs Claude 3 Haiku — Haiku is faster for short prompts; Nano wins on context depth and input cost.
  • vs Gemini 1.5 Flash — Flash has a 1M context window, far larger than Nano’s 272K, but Nano integrates more cleanly with OpenClaw’s tool-calling logic.

Bottom line

GPT-5 Nano is the right call for high-volume, low-complexity tasks where you’d otherwise overpay. Just stay realistic about what the reasoning feature can actually do.

TRY GPT 5 NANO ON HAIMAKER


For setup instructions, see our API key guide. For all available models, see the complete models guide.