DeepSeek-V3.2-mxfp4
License: MIT · Author: AMD · Size: 32B params · Downloads: 251 · Status: Early-stage
Edge AI: Mobile · Laptop · Server (72GB+ RAM)
Quick Summary
DeepSeek-V3.2 quantized to the MXFP4 (microscaling 4-bit floating-point) format, packaged by AMD for serving with vLLM on ROCm GPUs.
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 30GB+ RAM
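A quick way to check a host against the 30GB+ recommendation above; a minimal sketch using only the Python standard library (the `os.sysconf` keys used here are available on Linux):

```python
import os

def total_ram_gib() -> float:
    """Return total physical memory in GiB (Linux: page size * physical pages)."""
    page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per page
    phys_pages = os.sysconf("SC_PHYS_PAGES")  # number of physical pages
    return page_size * phys_pages / 1024**3

if __name__ == "__main__":
    gib = total_ram_gib()
    status = "meets" if gib >= 30 else "is below"
    print(f"Total RAM: {gib:.1f} GiB ({status} the 30GB+ recommendation)")
```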
Code Examples

Deployment (vLLM on ROCm)

```shell
export VLLM_USE_V1=1
export SAFETENSORS_FAST_GPU=1
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_USE_AITER_MOE=1
export VLLM_ROCM_USE_AITER_FP8BMM=0
export VLLM_ROCM_USE_AITER_FP4BMM=0

model_path="/shareddata/deepseek-ai/DeepSeek-V3.2-mxfp4"

vllm serve $model_path \
    --tensor-parallel-size 4 \
    --data-parallel-size 1 \
    --max-num-batched-tokens 32768 \
    --trust-remote-code \
    --no-enable-prefix-caching \
    --disable-log-requests \
    --kv-cache-dtype bfloat16 \
    --gpu-memory-utilization 0.85 \
    --compilation-config '{"cudagraph_mode": "FULL_AND_PIECEWISE"}' \
    --block-size 1
```
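Once the server is up, it exposes an OpenAI-compatible API. A minimal sketch of querying it with only the standard library, assuming the default `vllm serve` address of http://127.0.0.1:8000 (the same base URL the lm_eval command below uses):

```python
import json
import urllib.request

# Must match the path the model was served from.
MODEL = "/shareddata/deepseek-ai/DeepSeek-V3.2-mxfp4"

def build_completion_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build a /v1/completions payload in the OpenAI-compatible format vLLM accepts."""
    return {
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.0,  # greedy decoding for reproducible output
    }

def complete(prompt: str, base_url: str = "http://127.0.0.1:8000") -> str:
    """POST the payload and return the first completion's text."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(build_completion_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

# Usage (with the server running):
#   print(complete("The capital of France is"))
```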
Evaluation (lm_eval against the running server):

```shell
lm_eval \
    --model local-completions \
    --tasks gsm8k \
    --model_args model=/shareddata/deepseek-ai/DeepSeek-V3.2-mxfp4,base_url=http://127.0.0.1:8000/v1/completions \
    --batch_size auto
```