# moonshotai/kimi-k2.5

Kimi K2.5 is a chat model by Moonshot AI. It supports a 262K-token context window, function calling, and vision.
> [!Note]
> `<|media_start|>` is incorrect; it has been replaced with `<|media_begin|>` in the chat template.

Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, supporting both Instant and Thinking modes as well as conversational and agentic paradigms.
| Item | Value |
|:---:|:---:|
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 1T |
| Activated Parameters | 32B |
| Number of Layers (Dense layer included) | 61 |
| Number of Dense Layers | 1 |
| Attention Hidden Dimension | 7168 |
| MoE Hidden Dimension (per Expert) | 2048 |
| Number of Attention Heads | 64 |
| Number of Experts | 384 |
| Selected Experts per Token | 8 |
| Number of Shared Experts | 1 |
| Vocabulary Size | 160K |
| Context Length | 256K |
| Attention Mechanism | MLA |
| Activation Function | SwiGLU |
| Vision Encoder | MoonViT |
| Parameters of Vision Encoder | 400M |
| Benchmark | Kimi K2.5 (Thinking) | GPT-5.2 (xhigh) | Claude 4.5 Opus (Extended Thinking) | Gemini 3 Pro (High Thinking Level) | DeepSeek V3.2 (Thinking) | Qwen3-VL-235B-A22B-Thinking |
|---|---|---|---|---|---|---|
| Reasoning & Knowledge | | | | | | |
| HLE-Full | 30.1 | 34.5 | 30.8 | 37.5 | 25.1† | - |
| HLE-Full (w/ tools) | 50.2 | 45.5 | 43.2 | 45.8 | 40.8† | - |
| AIME 2025 | 96.1 | 100 | 92.8 | 95.0 | 93.1 | - |
| HMMT 2025 (Feb) | 95.4 | 99.4 | 92.9* | 97.3* | 92.5 | - |
| IMO-AnswerBench | 81.8 | 86.3 | 78.5* | 83.1* | 78.3 | - |
| GPQA-Diamond | 87.6 | 92.4 | 87.0 | 91.9 | 82.4 | - |
| MMLU-Pro | 87.1 | 86.7* | 89.3* | 90.1 | 85.0 | - |
| Image & Video | | | | | | |
| MMMU-Pro | 78.5 | 79.5* | 74.0 | 81.0 | - | 69.3 |
| CharXiv (RQ) | 77.5 | 82.1 | 67.2* | 81.4 | - | 66.1 |
| MathVision | 84.2 | 83.0 | 77.1* | 86.1* | - | 74.6 |
| MathVista (mini) | 90.1 | 82.8* | 80.2* | 89.8* | - | 85.8 |
| ZeroBench | 9 | 9* | 3* | 8* | - | 4* |
| ZeroBench (w/ tools) | 11 | 7* | 9* | 12* | - | 3* |
| OCRBench | 92.3 | 80.7* | 86.5* | 90.3* | - | 87.5 |
| OmniDocBench 1.5 | 88.8 | 85.7 | 87.7* | 88.5 | - | 82.0* |
| InfoVQA (val) | 92.6 | 84* | 76.9* | 57.2* | - | 89.5 |
| SimpleVQA | 71.2 | 55.8* | 69.7* | 69.7* | - | 56.8* |
| WorldVQA | 46.3 | 28.0 | 36.8 | 47.4 | - | 23.5 |
| VideoMMMU | 86.6 | 85.9 | 84.4* | 87.6 | - | 80.0 |
| MMVU | 80.4 | 80.8* | 77.3 | 77.5 | - | 71.1 |
| MotionBench | 70.4 | 64.8 | 60.3 | 70.3 | - | - |
| VideoMME | 87.4 | 86.0* | - | 88.4* | - | 79.0 |
| LongVideoBench | 79.8 | 76.5* | 67.2* | 77.7* | - | 65.6* |
| LVBench | 75.9 | - | - | 73.5* | - | 63.6 |
| Coding | | | | | | |
| SWE-Bench Verified | 76.8 | 80.0 | 80.9 | 76.2 | 73.1 | - |
| SWE-Bench Pro | 50.7 | 55.6 | 55.4* | - | - | - |
| SWE-Bench Multilingual | 73.0 | 72.0 | 77.5 | 65.0 | 70.2 | - |
| Terminal Bench 2.0 | 50.8 | 54.0 | 59.3 | 54.2 | 46.4 | - |
| PaperBench | 63.5 | 63.7* | 72.9* | - | 47.1 | - |
| CyberGym | 41.3 | - | 50.6 | 39.9* | 17.3* | - |
| SciCode | 48.7 | 52.1 | 49.5 | 56.1 | 38.9 | - |
| OJBench (cpp) | 57.4 | - | 54.6* | 68.5* | 54.7* | - |
| LiveCodeBench (v6) | 85.0 | - | 82.2* | 87.4* | 83.3 | - |
| Long Context | | | | | | |
| LongBench v2 | 61.0 | 54.5* | 64.4* | 68.2* | 59.8* | - |
| AA-LCR | 70.0 | 72.3* | 71.3* | 65.3* | 64.3* | - |
| Agentic Search | | | | | | |
| BrowseComp | 60.6 | 65.8 | 37.0 | 37.8 | 51.4 | - |
| BrowseComp (w/ ctx manage) | 74.9 | 57.8 | 59.2 | 67.6 | - | - |
| BrowseComp (Agent Swarm) | 78.4 | - | - | - | - | - |
| WideSearch (item-f1) | 72.7 | - | 76.2* | 57.0 | 32.5* | - |
| WideSearch (item-f1, Agent Swarm) | 79.0 | - | - | - | - | - |
| DeepSearchQA | 77.1 | 71.3* | 76.1* | 63.2* | 60.9* | - |
| FinSearchComp (T2&T3) | 67.8 | - | 66.2* | 49.9 | 59.1* | - |
| Seal-0 | 57.4 | 45.0 | 47.7* | 45.5* | 49.5* | - |
Currently, Kimi-K2.5 is recommended to run on inference engines such as vLLM and SGLang.

> [!Note]
> You can access Kimi-K2.5's API at https://platform.moonshot.ai, where we provide an OpenAI/Anthropic-compatible API. To verify that a deployment is correct, we also provide the Kimi Vendor Verifier.
The recommended version of transformers is 4.57.1.
Deployment examples can be found in the Model Deployment Guide.
The demos below show how to call our official API. For third-party APIs deployed with vLLM or SGLang, please note the following:

> [!Note]
> - Chat with video content is an experimental feature and is currently supported only in our official API.
> - The recommended `temperature` is `1.0` for Thinking mode and `0.6` for Instant mode.
> - The recommended `top_p` is `0.95`.
> - To use Instant mode, pass `{'chat_template_kwargs': {"thinking": False}}` in `extra_body`.
The following simple chat completion script shows how to call the K2.5 API in both Thinking and Instant modes.
import openai
def simple_chat(client: openai.OpenAI, model_name: str):
messages = [
{'role': 'system', 'content': 'You are Kimi, an AI assistant created by Moonshot AI.'},
{
'role': 'user',
'content': [
{'type': 'text', 'text': 'which one is bigger, 9.11 or 9.9? think carefully.'}
],
},
]
response = client.chat.completions.create(
model=model_name, messages=messages, stream=False, max_tokens=4096
)
print('====== Below is reasoning_content in Thinking Mode ======')
print(f'reasoning content: {response.choices[0].message.reasoning_content}')
print('====== Below is response in Thinking Mode ======')
print(f'response: {response.choices[0].message.content}')
    # To use Instant mode, pass {"thinking": {"type": "disabled"}}
response = client.chat.completions.create(
model=model_name,
messages=messages,
stream=False,
max_tokens=4096,
extra_body={'thinking': {'type': 'disabled'}}, # this is for official API
# extra_body= {'chat_template_kwargs': {"thinking": False}} # this is for vLLM/SGLang
)
print('====== Below is response in Instant Mode ======')
print(f'response: {response.choices[0].message.content}')
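The demo functions in this section accept an already-constructed `client`. Below is a minimal wiring sketch for the official endpoint; the base URL path, the `MOONSHOT_API_KEY` environment-variable name, and the `kimi-k2.5` model identifier are assumptions, so check https://platform.moonshot.ai for the values that apply to your account.

```python
import os
import openai

# Minimal wiring sketch. The base_url, env-var name, and model id below
# are assumptions; consult https://platform.moonshot.ai for actual values.
client = openai.OpenAI(
    base_url='https://api.moonshot.ai/v1',   # assumed endpoint path
    api_key=os.environ['MOONSHOT_API_KEY'],  # hypothetical env-var name
)

simple_chat(client, 'kimi-k2.5')  # assumed model identifier
```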
K2.5 supports image and video input.
The following example demonstrates how to call K2.5 API with image input:
import openai
import base64
import requests
def chat_with_image(client: openai.OpenAI, model_name: str):
url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/kimi-logo.png'
image_base64 = base64.b64encode(requests.get(url).content).decode()
messages = [
{
'role': 'user',
'content': [
{'type': 'text', 'text': 'Describe this image in detail.'},
{
'type': 'image_url',
                    'image_url': {'url': f'data:image/png;base64,{image_base64}'},
},
],
}
]
response = client.chat.completions.create(
model=model_name, messages=messages, stream=False, max_tokens=8192
)
print('====== Below is reasoning_content in Thinking Mode ======')
print(f'reasoning content: {response.choices[0].message.reasoning_content}')
print('====== Below is response in Thinking Mode ======')
print(f'response: {response.choices[0].message.content}')
    # Instant mode is also supported if you pass {"thinking": {"type": "disabled"}}
response = client.chat.completions.create(
model=model_name,
messages=messages,
stream=False,
max_tokens=4096,
extra_body={'thinking': {'type': 'disabled'}}, # this is for official API
# extra_body= {'chat_template_kwargs': {"thinking": False}} # this is for vLLM/SGLang
)
print('====== Below is response in Instant Mode ======')
print(f'response: {response.choices[0].message.content}')
return response.choices[0].message.content
The following example demonstrates how to call K2.5 API with video input:
import openai
import base64
import requests
def chat_with_video(client: openai.OpenAI, model_name: str):
url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/demo_video.mp4'
video_base64 = base64.b64encode(requests.get(url).content).decode()
messages = [
{
"role": "user",
"content": [
{"type": "text","text": "Describe the video in detail."},
{
"type": "video_url",
"video_url": {"url": f"data:video/mp4;base64,{video_base64}"},
},
],
}
]
response = client.chat.completions.create(model=model_name, messages=messages)
print('====== Below is reasoning_content in Thinking Mode ======')
print(f'reasoning content: {response.choices[0].message.reasoning_content}')
print('====== Below is response in Thinking Mode ======')
print(f'response: {response.choices[0].message.content}')
    # Instant mode is also supported if you pass {"thinking": {"type": "disabled"}}
response = client.chat.completions.create(
model=model_name,
messages=messages,
stream=False,
max_tokens=4096,
extra_body={'thinking': {'type': 'disabled'}}, # this is for official API
# extra_body= {'chat_template_kwargs': {"thinking": False}} # this is for vLLM/SGLang
)
print('====== Below is response in Instant Mode ======')
print(f'response: {response.choices[0].message.content}')
return response.choices[0].message.content
K2.5 shares the same Interleaved Thinking and Multi-Step Tool Call design as K2 Thinking; for full usage examples, please refer to the K2 Thinking documentation. A rough sketch of the tool-call loop is shown below.
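The sketch uses the standard OpenAI-compatible `tools` parameter: the model may request the tool across multiple steps, and each result is fed back until it produces a final answer. The `get_weather` tool name, its schema, and the stubbed result are invented for illustration; treat this as an assumption-laden outline rather than the official K2 Thinking recipe.

```python
import json
import openai

# Hypothetical tool schema, invented for illustration.
tools = [{
    'type': 'function',
    'function': {
        'name': 'get_weather',
        'description': 'Get the current weather for a city.',
        'parameters': {
            'type': 'object',
            'properties': {'city': {'type': 'string'}},
            'required': ['city'],
        },
    },
}]

def get_weather(city: str) -> str:
    # Stub implementation; a real tool would call a weather service.
    return json.dumps({'city': city, 'condition': 'sunny', 'temp_c': 21})

def tool_call_loop(client: openai.OpenAI, model_name: str) -> str:
    messages = [{'role': 'user', 'content': "What's the weather in Beijing?"}]
    while True:
        response = client.chat.completions.create(
            model=model_name, messages=messages, tools=tools
        )
        msg = response.choices[0].message
        messages.append(msg)  # keep the assistant turn in the history
        if not msg.tool_calls:
            return msg.content  # no further tool requests: final answer
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({
                'role': 'tool',
                'tool_call_id': call.id,
                'content': get_weather(**args),
            })
```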
Kimi K2.5 works best with Kimi Code CLI as its agent framework — give it a try at https://www.kimi.com/code.
Both the code repository and the model weights are released under the Modified MIT License.
If you have any questions, please reach out at [email protected].
| Property | Value |
|---|---|
| Mode | chat |
| Context Window | 262K tokens |
| Max Output | 262K tokens |
| Function Calling | Supported |
| Vision | Supported |
| Reasoning | - |
| Web Search | - |
| URL Context | - |
| Architecture | KimiK25ForConditionalGeneration |
| Model Type | kimi_k25 |
| Library | transformers |
from openai import OpenAI
client = OpenAI(
base_url="https://api.haimaker.ai/v1",
api_key="YOUR_API_KEY",
)
response = client.chat.completions.create(
model="moonshotai/kimi-k2.5",
messages=[
{"role": "user", "content": "Hello, how are you?"}
],
)
print(response.choices[0].message.content)

OpenAI-compatible endpoint. Start building in minutes.
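Since the endpoint is OpenAI-compatible, token-by-token streaming should also work with the SDK's standard `stream=True` flag. This is a minimal sketch under that assumption, using the usual OpenAI delta format:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.haimaker.ai/v1",
    api_key="YOUR_API_KEY",
)

# Streaming sketch: assumes the endpoint honors the standard OpenAI
# stream=True flag and emits incremental content deltas.
stream = client.chat.completions.create(
    model="moonshotai/kimi-k2.5",
    messages=[{"role": "user", "content": "Write a haiku about autumn."}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue  # some providers send keep-alive chunks without choices
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```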