Voxtral-Mini-3B-2507-FP8-dynamic

3.0B params · 8 languages · Audio Model · license: apache-2.0 · by RedHatAI
457 downloads
Edge AI: Mobile · Laptop · Server (7GB+ RAM)
Quick Summary

An FP8-dynamic quantization of mistralai/Voxtral-Mini-3B-2507, Mistral AI's 3B-parameter audio-text model, produced by RedHatAI. The linear layers are quantized to FP8 (weights and activations, with activation scales computed dynamically at runtime), roughly halving memory use versus BF16 while staying deployable with vLLM.
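As a rough illustration of what "FP8-dynamic" means, the sketch below emulates dynamic quantization: the scale is computed from the tensor at runtime (per row here) rather than calibrated offline, and values are rounded to an E4M3-like grid of 4 significant binary digits. The `fp8_round` helper is a simplification that ignores subnormals and saturation; it is not the actual kernel used by llm-compressor or vLLM.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value in the E4M3 format

def fp8_round(x):
    # Emulate E4M3's 3-bit stored mantissa by keeping 4 significant
    # binary digits (1 implicit + 3 stored).
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 16) / 16, e)

def quant_dequant_dynamic(x):
    # "Dynamic" = the scale is derived from the data at runtime,
    # per row here, instead of from an offline calibration set.
    scale = np.abs(x).max(axis=-1, keepdims=True) / FP8_E4M3_MAX
    q = fp8_round(x / scale)   # values now fit the FP8 range
    return q * scale           # dequantize back to float

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)
x_hat = quant_dequant_dynamic(x)
print(np.max(np.abs(x - x_hat)))  # small: 4 significant bits are kept
```

Because the scale adapts to each row's magnitude, the worst-case relative rounding error stays bounded (about 1/16 per value with this mantissa width), which is why FP8-dynamic tends to preserve accuracy without calibration data.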

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 3GB+ RAM
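The minimum figure above is roughly the size of the FP8 weights themselves. A quick back-of-envelope check (assuming ~1 byte per parameter for FP8 versus 2 bytes for BF16, and ignoring activations and KV cache):

```python
# Rough checkpoint-size estimate for a 3.0B-parameter model.
params = 3.0e9
fp8_gb = params * 1 / 1024**3   # FP8: 1 byte per parameter
bf16_gb = params * 2 / 1024**3  # BF16: 2 bytes per parameter
print(round(fp8_gb, 1), round(bf16_gb, 1))  # → 2.8 5.6
```

That ~2.8GB of FP8 weights lines up with the 3GB+ minimum, and the unquantized BF16 footprint plus runtime overhead explains the 7GB+ figure quoted for comfortable edge deployment.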

Code Examples

Deployment (vLLM)
vllm serve RedHatAI/Voxtral-Mini-3B-2507-FP8-dynamic --tokenizer_mode mistral --config_format mistral --load_format mistral
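Once the server is up, vLLM exposes an OpenAI-compatible endpoint (by default at http://localhost:8000/v1). The sketch below only builds the JSON request body for an audio chat turn; the `input_audio` message layout follows the OpenAI chat-completions format, and the audio format, port, and prompt are assumptions to adapt to your setup.

```python
import base64
import json

def build_transcription_request(audio_bytes: bytes, question: str) -> dict:
    # Audio is sent inline as base64 inside an "input_audio" content part,
    # alongside a text instruction for the model.
    audio_b64 = base64.b64encode(audio_bytes).decode()
    return {
        "model": "RedHatAI/Voxtral-Mini-3B-2507-FP8-dynamic",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "input_audio",
                     "input_audio": {"data": audio_b64, "format": "wav"}},
                    {"type": "text", "text": question},
                ],
            }
        ],
    }

payload = build_transcription_request(b"\x00\x01", "Transcribe this audio.")
print(json.dumps(payload)[:60])
# POST this JSON to http://localhost:8000/v1/chat/completions
```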
Creation (Python, transformers + llm-compressor)
import torch
from transformers import VoxtralForConditionalGeneration, AutoProcessor
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Select model and load it.
MODEL_ID = "mistralai/Voxtral-Mini-3B-2507"

model = VoxtralForConditionalGeneration.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# Recipe
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["language_model.lm_head", "re:audio_tower.*", "re:multi_modal_projector.*"],
)

# Apply algorithms.
oneshot(
    model=model,
    recipe=recipe,
    processor=processor,
)

SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-dynamic"
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)

Deploy This Model

Production-ready deployment in minutes

Together.ai - Fastest API
Instant API access to this model. Production-ready inference API; start free, scale to millions.

Replicate - Easiest Setup
One-click model deployment. Run models in the cloud with a simple API; no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.