Qwen3.5-abliterated-MAX-AIO-GGUF
by prithivMLmods · llama.cpp (GGUF)
9B params (largest variant) · 1K downloads
Early-stage · Edge AI targets: mobile, laptop, server (21GB+ RAM for the largest files)
Quick Summary
A collection of GGUF quantizations of abliterated ("Unredacted") Qwen3.5 models in 0.8B, 2B, 4B, and 9B parameter sizes, packaged for llama.cpp. Each size ships F32, BF16, F16, and Q8_0 weights, plus matching mmproj vision-projector files for multimodal use.
Device Compatibility
Mobile: 4-6GB RAM (smallest quantizations)
Laptop: 16GB RAM
Server: GPU recommended
Minimum recommended: 9GB+ RAM
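The RAM figures above follow directly from bytes-per-parameter arithmetic. A minimal sketch of that estimate (the constants are standard for these formats; real usage also adds context/KV-cache overhead, so treat results as approximate):

```python
# Rough GGUF memory-footprint estimator.
# Note: nominal sizes like "9B" are rounded, so estimates will differ
# somewhat from the actual file sizes listed in the repo tree.
BYTES_PER_PARAM = {
    "F32": 4.0,
    "BF16": 2.0,
    "F16": 2.0,
    "Q8_0": 1.0625,  # ~8.5 bits/weight including per-block scales
}

def estimate_gb(params_billions: float, quant: str) -> float:
    """Approximate weight-file size (and minimum RAM) in GB."""
    return params_billions * BYTES_PER_PARAM[quant]

print(round(estimate_gb(9, "Q8_0"), 1))  # -> 9.6 (listed file: 8.9 GB)
```

This is why the 9B Q8_0 file lands just under the 9GB+ recommendation, while the F32 variant of the same model needs roughly four times as much memory.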
Repository Contents
prithivMLmods/Qwen3.5-abliterated-MAX-AIO-GGUF (main)
+-- README.md (9.9 KB)
+-- .gitattributes (4.8 KB)
+-- config.json (32 B)
+-- Qwen3.5-0.8B-Unredacted-MAX-GGUF
| +-- Qwen3.5-0.8B-Unredacted-MAX.F32.gguf (2.8 GB)
| +-- Qwen3.5-0.8B-Unredacted-MAX.BF16.gguf (1.4 GB)
| +-- Qwen3.5-0.8B-Unredacted-MAX.F16.gguf (1.4 GB)
| +-- Qwen3.5-0.8B-Unredacted-MAX.Q8_0.gguf (774.2 MB)
| +-- Qwen3.5-0.8B-Unredacted-MAX.mmproj-f32.gguf (383.7 MB)
| +-- Qwen3.5-0.8B-Unredacted-MAX.mmproj-bf16.gguf (197.7 MB)
| +-- Qwen3.5-0.8B-Unredacted-MAX.mmproj-f16.gguf (197.7 MB)
| +-- Qwen3.5-0.8B-Unredacted-MAX.mmproj-q8_0.gguf (110.6 MB)
+-- Qwen3.5-2B-Unredacted-MAX-GGUF
| +-- Qwen3.5-2B-Unredacted-MAX.F32.gguf (7.0 GB)
| +-- Qwen3.5-2B-Unredacted-MAX.BF16.gguf (3.5 GB)
| +-- Qwen3.5-2B-Unredacted-MAX.F16.gguf (3.5 GB)
| +-- Qwen3.5-2B-Unredacted-MAX.Q8_0.gguf (1.9 GB)
| +-- Qwen3.5-2B-Unredacted-MAX.mmproj-f32.gguf (1.2 GB)
| +-- Qwen3.5-2B-Unredacted-MAX.mmproj-bf16.gguf (640.3 MB)
| +-- Qwen3.5-2B-Unredacted-MAX.mmproj-f16.gguf (640.3 MB)
| +-- Qwen3.5-2B-Unredacted-MAX.mmproj-q8_0.gguf (347.8 MB)
+-- Qwen3.5-4B-Unredacted-MAX-GGUF
| +-- Qwen3.5-4B-Unredacted-MAX.F32.gguf (15.7 GB)
| +-- Qwen3.5-4B-Unredacted-MAX.BF16.gguf (7.8 GB)
| +-- Qwen3.5-4B-Unredacted-MAX.F16.gguf (7.8 GB)
| +-- Qwen3.5-4B-Unredacted-MAX.Q8_0.gguf (4.2 GB)
| +-- Qwen3.5-4B-Unredacted-MAX.mmproj-f32.gguf (1.2 GB)
| +-- Qwen3.5-4B-Unredacted-MAX.mmproj-bf16.gguf (644.3 MB)
| +-- Qwen3.5-4B-Unredacted-MAX.mmproj-f16.gguf (644.3 MB)
| +-- Qwen3.5-4B-Unredacted-MAX.mmproj-q8_0.gguf (349.9 MB)
+-- Qwen3.5-9B-Unredacted-MAX-GGUF
+-- Qwen3.5-9B-Unredacted-MAX.F32.gguf (33.4 GB)
+-- Qwen3.5-9B-Unredacted-MAX.BF16.gguf (16.7 GB)
+-- Qwen3.5-9B-Unredacted-MAX.F16.gguf (16.7 GB)
+-- Qwen3.5-9B-Unredacted-MAX.Q8_0.gguf (8.9 GB)
+-- Qwen3.5-9B-Unredacted-MAX.mmproj-f32.gguf (1.7 GB)
+-- Qwen3.5-9B-Unredacted-MAX.mmproj-bf16.gguf (879.0 MB)
+-- Qwen3.5-9B-Unredacted-MAX.mmproj-f16.gguf (879.0 MB)
    +-- Qwen3.5-9B-Unredacted-MAX.mmproj-q8_0.gguf (595.3 MB)
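The file tree follows a regular naming scheme: one subfolder per model size, with the quant tag uppercase for weight files and lowercase (prefixed `mmproj-`) for the vision-projector files. A small helper that reconstructs relative paths from that scheme (the size and quant tags are taken from the listing above; verify against the repo before downloading):

```python
# Build relative GGUF paths within this repo, following the naming
# scheme shown in the file tree. Tags outside the listed sets are rejected.
SIZES = ("0.8B", "2B", "4B", "9B")
QUANTS = ("F32", "BF16", "F16", "Q8_0")

def gguf_path(size: str, quant: str, mmproj: bool = False) -> str:
    """Return the repo-relative path for a weight or mmproj file."""
    if size not in SIZES or quant not in QUANTS:
        raise ValueError("unknown size or quant tag")
    stem = f"Qwen3.5-{size}-Unredacted-MAX"
    suffix = f"mmproj-{quant.lower()}" if mmproj else quant
    return f"{stem}-GGUF/{stem}.{suffix}.gguf"

print(gguf_path("9B", "Q8_0"))
# -> Qwen3.5-9B-Unredacted-MAX-GGUF/Qwen3.5-9B-Unredacted-MAX.Q8_0.gguf
```

Once downloaded, a weight file can typically be run with llama.cpp's `llama-cli -m <file>`; for multimodal use, the matching mmproj file is passed alongside the weights (the exact flag depends on your llama.cpp build and version).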