Qwen3-VL-32B-Thinking-GGUF
by Qwen · Apache-2.0 license · 32B params · Q4 quantization · ~9K downloads
Tags: Image Model · Other · New · Early-stage
Edge AI targets: Mobile, Laptop, Server (72GB+ RAM)
Quick Summary
This repository provides GGUF-format weights for Qwen3-VL-32B-Thinking, split into two components:
- Language model (LLM): FP16, Q8_0, Q4_K_M
- Vision encoder (`m...
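Assuming the two components follow the usual llama.cpp split (a main LLM GGUF plus a separate multimodal projector loaded via `--mmproj`), a minimal invocation might look like the sketch below. The file names are placeholders, not the exact artifacts in this repository.

```bash
# Minimal sketch: run the quantized LLM together with the vision component using
# llama.cpp's multimodal CLI. File names are placeholders -- substitute the actual
# GGUF files from this repository.
./llama-mtmd-cli \
  -m Qwen3-VL-32B-Thinking-Q4_K_M.gguf \
  --mmproj mmproj-Qwen3-VL-32B-Thinking-F16.gguf \
  --image example.png \
  -p "Describe this image step by step."
```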
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 30GB+ RAM
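As a rough sanity check on these figures, GGUF weight size scales with bits per weight. The back-of-envelope estimate below (an assumption, not a figure from this card; it ignores the vision encoder and KV cache) suggests why the Q4_K_M files fit under the 30GB+ recommendation while FP16 needs far more memory.

```bash
# Rough GGUF weight-size estimates for a 32B-parameter model.
# Bits-per-weight values are approximate assumptions for each quantization.
for q in "FP16 16.0" "Q8_0 8.5" "Q4_K_M 4.85"; do
  set -- $q  # split into: $1 = quant name, $2 = bits per weight
  awk -v name="$1" -v bpw="$2" \
    'BEGIN { printf "%-8s ~%.0f GB\n", name, 32e9 * bpw / 8 / 1e9 }'
done
```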
Code Examples
Recommended generation settings, exported as environment variables:

```bash
export greedy='false'
export top_p=0.95
export top_k=20
export repetition_penalty=1.0
export presence_penalty=1.5
export temperature=1.0
# For AIME, LCB, and GPQA it is recommended to set this to 81920.
export out_seq_length=32768
```
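If you serve the GGUF files with llama.cpp instead of the scripts that read these environment variables, one possible mapping onto its sampling flags is sketched below. The flag names are standard llama-cli options; applying these particular values there is an assumption on my part, not guidance from this card.

```bash
# Sketch: the settings above expressed as llama.cpp sampling flags.
# The model file name is a placeholder.
./llama-cli \
  -m Qwen3-VL-32B-Thinking-Q4_K_M.gguf \
  --temp 1.0 \
  --top-p 0.95 \
  --top-k 20 \
  --repeat-penalty 1.0 \
  --presence-penalty 1.5 \
  -n 32768
```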