MiniMax-M2-GPTQMODEL-W4A16

by avtc · Language Model · 36 downloads
Quick Summary

A W4A16 (4-bit weight, 16-bit activation) GPTQ quantization of MiniMax-M2, produced with GPTQModel and intended for multi-GPU serving with vLLM.


Code Examples

vLLM (bash):
export VLLM_ATTENTION_BACKEND="FLASHINFER"   # use FlashInfer attention kernels
export TORCH_CUDA_ARCH_LIST="8.6"            # build kernels for compute capability 8.6 (Ampere)
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7  # expose all eight GPUs for tensor parallelism
export VLLM_MARLIN_USE_ATOMIC_ADD=1          # allow atomic-add reduction in the Marlin GPTQ kernels
export SAFETENSORS_FAST_GPU=1                # faster safetensors weight loading onto GPU

# Serve the quantized model with 8-way tensor parallelism and the
# MiniMax-M2 tool-call and reasoning parsers enabled.
vllm serve avtc/MiniMax-M2-GPTQMODEL-W4A16 \
    -tp 8 \
    --port 8000 \
    --host 0.0.0.0 \
    --uvicorn-log-level info \
    --trust-remote-code \
    --gpu-memory-utilization 0.925 \
    --max-num-seqs 1 \
    --dtype float16 \
    --seed 1234 \
    --max-model-len 192500 \
    --tool-call-parser minimax_m2 \
    --reasoning-parser minimax_m2_append_think \
    --enable-auto-tool-choice \
    --enable-sleep-mode \
    --compilation-config '{"level": 3, "cudagraph_capture_sizes": [1], "cudagraph_mode": "PIECEWISE"}'
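
Once the server is up, it exposes vLLM's OpenAI-compatible API on the configured port. A minimal smoke test (the prompt and max_tokens below are arbitrary placeholders):

# Basic chat completion against the local vLLM server.
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "avtc/MiniMax-M2-GPTQMODEL-W4A16",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 128
    }'

Because --enable-auto-tool-choice and the minimax_m2 tool-call parser are enabled, the same endpoint also accepts OpenAI-style tool definitions. A sketch with a hypothetical get_weather function (the tool name and schema are illustrative, not part of the model):

# Chat completion with an example tool definition; the model may
# respond with a tool_calls entry instead of plain text.
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "avtc/MiniMax-M2-GPTQMODEL-W4A16",
        "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]
                }
            }
        }]
    }'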
