DeepSeek-V3.2-mtp-ptpc

License: MIT
by amd
32B params
New
7 downloads
Quick Summary

DeepSeek-V3.2 variant published by AMD under the MIT license, combining multi-token prediction (MTP) with PTPC (per-token, per-channel) quantization, with a serving recipe for vLLM on ROCm GPUs.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum Recommended: 30GB+ RAM
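The RAM figures above roughly track parameter count times bytes per weight. A back-of-the-envelope sketch (illustrative only; the 1.2x overhead factor is an assumption, and KV cache and activations are ignored):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough resident-memory estimate for model weights alone.

    params_billion: parameter count in billions (32 for this model)
    bits_per_weight: 16 for bf16, 8 for fp8/int8, 4 for int4
    overhead: fudge factor for runtime buffers (assumption, not measured)
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# 32B params at bf16 is well beyond laptop territory; 8-bit and below
# is where the table's server/laptop figures start to make sense.
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(32, bits):.0f} GB")
```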

Code Examples

# Serve with vLLM on ROCm
export VLLM_USE_V1=1  
export SAFETENSORS_FAST_GPU=1
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_USE_AITER_MOE=1
model_path="/model_path/deepseek-ai/DeepSeek-V3.2-ptpc"
vllm serve $model_path \
  --tensor-parallel-size 8 \
  --data-parallel-size 1 \
  --max-num-batched-tokens 32768 \
  --trust-remote-code \
  --no-enable-prefix-caching \
  --disable-log-requests \
  --kv-cache-dtype bfloat16 \
  --gpu_memory_utilization 0.85 \
  --compilation-config '{"cudagraph_mode": "FULL_AND_PIECEWISE"}' \
  --block-size 1
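Once the server above is up, it exposes an OpenAI-compatible API on port 8000. A minimal completion request using only the Python standard library (the prompt is hypothetical; the model path matches the one passed to `vllm serve`):

```python
import json
import urllib.request

# OpenAI-compatible completions endpoint exposed by `vllm serve`
url = "http://127.0.0.1:8000/v1/completions"

payload = {
    "model": "/model_path/deepseek-ai/DeepSeek-V3.2-ptpc",  # same path as `vllm serve`
    "prompt": "Explain multi-token prediction in one sentence.",
    "max_tokens": 128,
    "temperature": 0.2,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```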

# Evaluate the running server on GSM8K (first 400 samples)
lm_eval \
  --model local-completions \
  --tasks gsm8k \
  --model_args model=/model_path/deepseek-ai/DeepSeek-V3.2-ptpc,base_url=http://127.0.0.1:8000/v1/completions \
  --batch_size auto \
  --limit 400

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.