
Ministral 3 8B Instruct 2512 GGUF

mistralai/ministral-8b-2512
Chat · Apache-2.0 · Mistral AI
Function Calling · Vision
Released Oct 2025 · Updated Jan 2026

Context Window: 262K tokens
Max Output: 262K tokens
Input Price: $0.15 /1M tokens
Output Price: $0.15 /1M tokens
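With identical input and output rates, per-request cost is straightforward to estimate. A minimal sketch (the token counts in the example are illustrative):

```python
INPUT_PER_M = 0.15   # USD per 1M input tokens, from the pricing above
OUTPUT_PER_M = 0.15  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 200K-token prompt with a 2K-token reply:
print(f"${request_cost(200_000, 2_000):.4f}")
```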

Overview

A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.

Model Card

Ministral 3 8B Instruct 2512 GGUF

This release provides several quantization levels of the instruct post-trained version in GGUF format. The model is fine-tuned for instruction tasks, making it ideal for chat and instruction-based use cases.

The Ministral 3 family is designed for edge deployment and runs on a wide range of hardware. Ministral 3 8B can even be deployed locally, fitting in 12 GB of VRAM in FP8, and in less if further quantized.
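As a rough sanity check on the memory figure above, weight-only footprint scales linearly with bit-width. A minimal sketch (the quantization bit-widths are illustrative, and KV cache plus activations add overhead on top of the weights):

```python
def weight_memory_gb(params_b: float, bits_per_param: float) -> float:
    """Weight-only memory estimate in GB (1 GB = 1e9 bytes);
    ignores KV cache and activation memory."""
    return params_b * bits_per_param / 8

# Ministral 3 8B: 8.4B language model + 0.4B vision encoder
total_params_b = 8.4 + 0.4

for name, bits in [("FP16", 16), ("FP8", 8), ("Q4 (~4.5 bpw)", 4.5)]:
    print(f"{name:14s} ~{weight_memory_gb(total_params_b, bits):.1f} GB")
```

At FP8 this lands around 8.8 GB of weights, consistent with the 12 GB VRAM figure above once runtime overhead is included.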

Learn more in our blog post and paper.

Key Features

Ministral 3 8B consists of two main architectural components:
  • 8.4B Language Model
  • 0.4B Vision Encoder
The Ministral 3 8B Instruct model offers the following capabilities:
  • Vision: Enables the model to analyze images and provide insights based on visual content, in addition to text.
  • Multilingual: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, Arabic.
  • System Prompt: Maintains strong adherence and support for system prompts.
  • Agentic: Offers best-in-class agentic capabilities with native function calling and JSON output.
  • Edge-Optimized: Delivers best-in-class performance at a small scale, deployable anywhere.
  • Apache 2.0 License: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
  • Large Context Window: Supports a 256k context window.
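As an illustration of the function-calling capability, here is a sketch of a tool schema in the OpenAI function-calling format, with a helper to parse a returned tool call. The get_weather tool and its parameters are hypothetical, not part of the model card:

```python
import json

# Hypothetical tool schema; the name and parameters are illustrative.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def first_tool_call(response):
    """Extract the name and parsed JSON arguments of the first tool call."""
    call = response.choices[0].message.tool_calls[0]
    return call.function.name, json.loads(call.function.arguments)

# The schema must serialize cleanly for the wire:
print(len(json.dumps(tools)), "bytes of tool schema")
```

With an OpenAI-compatible client (see API Usage below), pass `tools=tools` to `client.chat.completions.create(...)` and feed the response to `first_tool_call`.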

Recommended Settings

We recommend deploying with the following best practices:

  • System Prompt: Define a clear environment and use case, including guidance on how to effectively leverage tools in agentic systems.

  • Sampling Parameters: Use a temperature below 0.1 for daily-driver and production environments; higher temperatures may be explored for creative use cases, and developers are encouraged to experiment with alternative settings.

  • Tools: Keep the set of tools well-defined and limit their number to the minimum required for the use case; avoid overloading the model with an excessive number of tools.

  • Vision: When deploying with vision capabilities, we recommend maintaining an aspect ratio close to 1:1 (width-to-height) for images. Avoid overly thin or wide images; crop them as needed to ensure optimal performance.
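The vision recommendation above can be sketched as a simple center-crop helper. The 2:1 ratio threshold is an illustrative choice, not an official cutoff:

```python
def center_crop_box(width, height, max_ratio=2.0):
    """Return a (left, top, right, bottom) box that center-crops the longer
    side so the width:height ratio is at most max_ratio (and vice versa).
    max_ratio is an illustrative threshold, not an official recommendation."""
    if width > height * max_ratio:        # too wide: trim width
        new_w = int(height * max_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    if height > width * max_ratio:        # too tall/thin: trim height
        new_h = int(width * max_ratio)
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)
    return (0, 0, width, height)          # already close enough to 1:1
```

The returned box can be passed directly to Pillow's `Image.crop`.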


License

This model is licensed under the Apache 2.0 License.

You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.

Features & Capabilities

Mode: chat
Context Window: 262K tokens
Max Output: 262K tokens
Function Calling: Supported
Vision: Supported
Reasoning: -
Web Search: -
URL Context: -

Technical Details

Base Model: mistralai/Ministral-3-8B-Instruct-2512
Languages: en, fr, es, de, it, pt, nl, zh, ja, ko, ar
Library: vllm

API Usage

from openai import OpenAI

# Point the standard OpenAI client at the OpenAI-compatible endpoint
client = OpenAI(
    base_url="https://api.haimaker.ai/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/ministral-8b-2512",
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ],
)

print(response.choices[0].message.content)
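For the vision capability, here is a sketch of building a multimodal message, assuming the endpoint accepts OpenAI-style `image_url` content parts with a base64 data URL (the image bytes below are placeholder data):

```python
import base64

def image_message(prompt: str, image_bytes: bytes, mime: str = "image/png"):
    """Build a user message pairing text with a base64-encoded inline image."""
    data_url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode()}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }

# Placeholder bytes stand in for a real image file here
msg = image_message("Describe this image.", b"\x89PNG...")
print(msg["content"][1]["image_url"]["url"][:22])
```

Pass `messages=[image_message(...)]` to the same `client.chat.completions.create` call shown above.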

Use Ministral 3 8B Instruct 2512 GGUF with the haimaker API

OpenAI-compatible endpoint. Start building in minutes.
