Qwen3-30B-A3B-Thinking-2507

- Parameters: 30.0B
- Context length: 262K (long context)
- License: apache-2.0
- Publisher: Qwen
- Type: Language Model
- Downloads: ~251K
- Production-ready; runs on mobile, laptop, or server (68GB+ RAM for full precision)
Quick Summary

Model card frontmatter: `library_name: transformers`, `license: apache-2.0`.
Device Compatibility

- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 28GB+ RAM
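The RAM figures above can be sanity-checked with a rough rule of thumb: weight memory is parameter count times bits per parameter, ignoring KV cache and runtime overhead. A minimal sketch (the precisions shown are illustrative assumptions, not the page's official figures):

```python
def estimate_weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Rough memory (GB) to hold the weights alone; excludes KV cache and overhead."""
    return n_params * bits_per_param / 8 / 1e9

# 30B parameters at common precisions:
print(estimate_weight_memory_gb(30e9, 16))  # BF16  -> 60.0 GB (server class)
print(estimate_weight_memory_gb(30e9, 4))   # 4-bit -> 15.0 GB (high-end laptop)
```

This is why full precision lands in server territory while aggressive quantization brings the model near the 16GB laptop tier.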

Code Examples

Option 1: Using vLLM

Install a nightly vLLM build:

```bash
pip install -U vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/nightly
```

Launch the model server:

```bash
VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN VLLM_USE_V1=0 \
vllm serve ./Qwen3-30B-A3B-Thinking-2507 \
  --tensor-parallel-size 4 \
  --max-model-len 1010000 \
  --enable-chunked-prefill \
  --max-num-batched-tokens 131072 \
  --enforce-eager \
  --max-num-seqs 1 \
  --gpu-memory-utilization 0.85 \
  --enable-reasoning --reasoning-parser deepseek_r1
```
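Once running, the server exposes an OpenAI-compatible API (vLLM defaults to port 8000). A minimal standard-library sketch for querying it; the port, URL, and prompt here are assumptions for illustration:

```python
import json
import urllib.request

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": "./Qwen3-30B-A3B-Thinking-2507",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query(prompt: str, url: str = "http://localhost:8000/v1/chat/completions") -> str:
    """POST the payload and return the assistant message content."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(query("Summarize the Chinese Remainder Theorem."))
```

Any OpenAI-compatible client library works the same way against this endpoint.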
Option 2: Using SGLang

Install SGLang from source:

```bash
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install -e "python[all]"
```

Launch the server:

```bash
python3 -m sglang.launch_server \
    --model-path ./Qwen3-30B-A3B-Thinking-2507 \
    --context-length 1010000 \
    --mem-frac 0.75 \
    --attention-backend dual_chunk_flash_attn \
    --tp 4 \
    --chunked-prefill-size 131072 \
    --reasoning-parser deepseek-r1
```
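Both commands above pass a reasoning parser so the server separates the model's thinking from its final answer. If you serve without one, you can split the raw output yourself. A minimal sketch, assuming the output marks reasoning with `<think>...</think>` and that (as with Qwen3 thinking models) the chat template may pre-open the tag so only the closing `</think>` appears:

```python
def split_reasoning(text: str) -> tuple[str, str]:
    """Separate thinking content from the final answer.

    Assumes the reasoning span ends at "</think>"; the opening
    "<think>" may be absent when the chat template emits it.
    """
    marker = "</think>"
    if marker in text:
        thinking, answer = text.split(marker, 1)
        return thinking.replace("<think>", "").strip(), answer.strip()
    return "", text.strip()

raw = "Let me check the arithmetic.</think>The answer is 4."
thinking, answer = split_reasoning(raw)
print(thinking)  # Let me check the arithmetic.
print(answer)    # The answer is 4.
```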
