openai/gpt-realtime-mini

OpenAI's compact real-time model for low-latency streaming conversations.
| Feature | Value |
| --- | --- |
| Mode | chat |
| Context Window | 128K tokens |
| Max Output | 4K tokens |
| Function Calling | Supported |
| Vision | Not supported |
| Reasoning | Not supported |
| Web Search | Not supported |
| URL Context | Not supported |
OpenAI-compatible endpoint. Start building in minutes.

```python
from openai import OpenAI

# Point the OpenAI SDK at the OpenAI-compatible base URL
client = OpenAI(
    base_url="https://api.haimaker.ai/v1",
    api_key="YOUR_API_KEY",
)

# Send a simple chat completion request to the model
response = client.chat.completions.create(
    model="openai/gpt-realtime-mini",
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ],
)

print(response.choices[0].message.content)
```
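Because the model is positioned for low-latency streaming conversations, you will usually want the response delivered incrementally rather than as a single blob. The sketch below assumes the endpoint passes through the standard Chat Completions streaming interface (`stream=True`); the prompt text is illustrative.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.haimaker.ai/v1",
    api_key="YOUR_API_KEY",
)

# stream=True asks the server to send the reply as incremental chunks
stream = client.chat.completions.create(
    model="openai/gpt-realtime-mini",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a delta; content can be None on role/stop chunks
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```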
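The spec table lists function calling as supported. A minimal sketch follows, assuming the endpoint forwards the standard Chat Completions `tools` parameter; the `get_weather` tool and the example question are hypothetical, purely for illustration.

```python
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.haimaker.ai/v1",
    api_key="YOUR_API_KEY",
)

# Hypothetical tool definition in the standard Chat Completions "tools" format
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="openai/gpt-realtime-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, its arguments arrive as a JSON string
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    call = tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(response.choices[0].message.content)
```

Execute the tool yourself, append its result as a `tool` role message, and call the endpoint again to let the model compose the final answer.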