Current as of March 2026. The original GPT-4. At $30/$60 per million tokens with an 8K context window, there’s almost no scenario where you should choose this over GPT-4o in 2026. I’m including it because it still shows up in legacy configs and people sometimes ask about it.

Specs

Provider: OpenAI
Input cost: $30 / M tokens
Output cost: $60 / M tokens
Context window: 8K tokens
Max output: 4K tokens
Parameters: N/A
Features: function_calling

What it’s good at

Logic Stability

It’s methodical and literal. For tasks where you need a model to follow instructions exactly without creative interpretation, GPT-4 does this reliably.

Function Calling

The tool use implementation is rock solid. It predates some of the quirks introduced in newer model versions.
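For reference, a function-calling request to GPT-4 uses the standard Chat Completions `tools` format. A minimal sketch of the payload shape (the `get_weather` tool here is a made-up example, not part of any real API):

```python
# Sketch of a function-calling request payload in the Chat Completions format.
# "get_weather" is a hypothetical example tool used purely for illustration.
def build_tool_request(user_message: str) -> dict:
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }
```

The model responds with a `tool_calls` entry naming the function and its JSON arguments; your code runs the function and feeds the result back as a `tool` message.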

Where it falls short

The Price

$30/M input when GPT-4o is $2.50/M for a better model. There’s no defending this for new projects.

8K Context

You’ll hit this limit before you finish pasting in a medium-sized codebase. For anything agentic, this is a genuine blocker.
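The arithmetic is unforgiving. A rough budget, assuming illustrative numbers for prompt overhead and chunk size:

```python
# Rough token budget for an 8K-context model (overhead and chunk sizes
# below are illustrative assumptions, not measured values).
CONTEXT_WINDOW = 8_000
MAX_OUTPUT = 4_000          # reserved so the model has room to reply
SYSTEM_AND_PROMPT = 1_000   # assumed overhead for instructions + question
CHUNK_TOKENS = 1_000        # a typical retrieved-chunk size (assumption)

available = CONTEXT_WINDOW - MAX_OUTPUT - SYSTEM_AND_PROMPT
chunks_that_fit = available // CHUNK_TOKENS
print(chunks_that_fit)  # 3
```

Three chunks of retrieved context, total, before you run out of room.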

Best use cases with OpenClaw

  • Prompt Debugging — Running a failing prompt against GPT-4 to isolate whether the problem is your prompt or a weaker model.
  • Legacy Workflow Continuity — If you built something on GPT-4 and migrating isn’t worth the effort right now.

Not ideal for

  • RAG Pipelines — 8K context means you can fit maybe two or three retrieved chunks. That’s not enough.
  • Agentic Loops — At $60/M output, a long code-generation task gets expensive fast.
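To put a number on the agentic-loop problem, here is a back-of-the-envelope cost estimate at GPT-4's listed rates. The session shape (50 iterations, ~6K tokens of re-sent context and ~1K tokens of output each) is an assumption for illustration:

```python
# Back-of-the-envelope cost for an agentic session at GPT-4's listed rates.
INPUT_PER_M = 30.00   # $ per million input tokens (from the specs above)
OUTPUT_PER_M = 60.00  # $ per million output tokens

# Illustrative session: 50 loop iterations, each re-sending ~6K tokens of
# accumulated context and generating ~1K tokens of output (assumed numbers).
input_tokens = 50 * 6_000
output_tokens = 50 * 1_000

cost = input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M
print(f"${cost:.2f}")  # $12.00
```

Twelve dollars for one modest session, and that's before the 8K window forces you to truncate context mid-run.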

Run it through Haimaker

Skip juggling API keys. One Haimaker key gives you access to every model on the platform. Tell OpenClaw:

Add Haimaker as a custom provider to my OpenClaw config. Use these details:

- Provider name: haimaker
- Base URL: https://api.haimaker.ai/v1
- API key: [PASTE YOUR HAIMAKER API KEY HERE]
- API type: openai-completions

Add the auto-router model:
- haimaker/auto (reasoning: false, context: 128000, max tokens: 32000)

Create an alias "auto" for easy switching. Apply the config when done.
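Because the provider is declared as `openai-completions`, any OpenAI-style client can hit the endpoint directly. A sketch of the raw request such a client would construct, using the base URL and model name from the config above (the payload shape is the standard Chat Completions format; the key is a placeholder):

```python
# Sketch of a raw request to Haimaker's OpenAI-compatible endpoint.
BASE_URL = "https://api.haimaker.ai/v1"

def build_chat_request(api_key: str, prompt: str) -> tuple[str, dict, dict]:
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "haimaker/auto",  # the auto-router model from the config above
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32_000,      # matches the configured max tokens
    }
    return url, headers, payload
```

POST the payload to the URL with those headers and you get a standard Chat Completions response back.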

Or skip model selection entirely — Haimaker’s auto-router picks the best model for each task so you don’t have to.

OpenClaw setup

OpenClaw has native support for OpenAI. Just set the OPENAI_API_KEY environment variable and the framework handles the connection automatically with no extra config.

export OPENAI_API_KEY="your-key-here"

That’s it. OpenClaw picks up OpenAI models automatically.
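Under the hood, "picks up automatically" just means the credential is read from the process environment, the way OpenAI-style SDKs do. A sketch of the equivalent lookup:

```python
import os

def resolve_api_key(env=None) -> str:
    """Mimics how OpenAI-style SDKs read credentials from the environment."""
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; run the export above first")
    return key
```

If the variable is missing, you get an explicit error instead of a silent auth failure downstream.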

How it compares

  • vs Claude 3.5 Sonnet — Sonnet is faster, has a 200K context window, and costs $3/$15.
  • vs GPT-4o — GPT-4o is the direct upgrade: 128K context, much lower latency, roughly 1/12th the input cost.

Bottom line

Don’t start new projects on this. If you’re already on it, migrate to GPT-4o. The performance is better and the cost is dramatically lower.



For setup instructions, see our API key guide. For all available models, see the complete models guide.