Qwen3-VL-32B-Thinking-4bit
by mlx-community
32B params · Image Model · 1 language · license: apache-2.0
239 downloads · New · Early-stage
Edge AI: Mobile · Laptop · Server (72GB+ RAM)
Quick Summary
mlx-community/Qwen3-VL-32B-Thinking-4bit was converted to MLX format from [`Qwen/Qwen3-VL-32B-Thinking`](https://huggingface.co/Qwen/Qwen3-VL-32B-Thinking) using mlx-vlm version 0.
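For context, quantized MLX conversions like this one are typically produced with mlx-vlm's conversion tooling. The snippet below is only a sketch: the `mlx_vlm.convert` entry point and its flags are assumed to mirror the mlx-lm tooling and are not taken from this card.

```python
# Hypothetical sketch of reproducing a 4-bit MLX conversion.
# Assumption: mlx-vlm ships a convert entry point analogous to mlx-lm's
# (python -m mlx_vlm.convert); the flags below are illustrative, not from this card.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_vlm.convert",
        "--hf-path", "Qwen/Qwen3-VL-32B-Thinking",    # source weights on Hugging Face
        "--mlx-path", "Qwen3-VL-32B-Thinking-4bit",   # local output directory (name chosen here)
        "-q",                                         # quantize the weights (4-bit by default)
    ],
    check=True,
)
```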
Device Compatibility
| Device | Memory |
|--------|--------|
| Mobile | 4-6GB RAM |
| Laptop | 16GB RAM |
| Server | GPU |

Minimum recommended: 30GB+ RAM
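As a rough sanity check on these numbers, the sketch below estimates the weight footprint of a 32B-parameter model at 4-bit precision; the overhead factors are illustrative assumptions, not figures from this card.

```python
# Back-of-the-envelope memory estimate for a 32B-parameter model quantized to 4 bits.
# The overhead factor below is an assumption for illustration, not a value from the card.
params = 32e9                 # parameter count
bits_per_weight = 4           # 4-bit quantization
quant_overhead = 1.1          # rough allowance for group scales/biases stored alongside weights

weights_gb = params * bits_per_weight / 8 * quant_overhead / 1e9
print(f"Quantized weights alone: ~{weights_gb:.0f} GB")  # ~18 GB

# On top of the weights, inference needs room for image embeddings from the vision
# tower, the KV cache, and activations, which is why the listing recommends 30GB+
# of RAM rather than the raw weight size.
```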
Code Examples
Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/Qwen3-VL-32B-Thinking-4bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
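Beyond the CLI, mlx-vlm also exposes a Python API. The snippet below follows the pattern documented in the mlx-vlm README; exact signatures can vary between mlx-vlm versions, so treat it as a sketch rather than a guaranteed interface.

```python
# Sketch of image description via the mlx-vlm Python API (pattern from the
# mlx-vlm README; function signatures may differ across mlx-vlm versions).
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/Qwen3-VL-32B-Thinking-4bit"
model, processor = load(model_path)   # downloads/loads the weights and processor
config = load_config(model_path)

images = ["path/to/image.jpg"]        # local path or URL to the input image
prompt = "Describe this image."

# Wrap the prompt in the model's chat template, declaring one image slot.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

# Sampling options mirror the CLI flags (max tokens, temperature), but their
# keyword names vary across versions, so only the defaults are used here.
output = generate(model, processor, formatted_prompt, images, verbose=False)
print(output)
```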
Deploy This Model
Production-ready deployment in minutes
Together.ai
Instant API access to this model
Production-ready inference API. Start free, scale to millions.
Replicate
One-click model deployment
Run models in the cloud with a simple API. No DevOps required.
Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.