# Qwen3.5-35B-A3B-GGUF

by ubergarm · ik_llama.cpp · Language Model · 35B params · 7K downloads
## Quick Summary

35B-parameter language model distributed as GGUF quantizations for ik_llama.cpp.
## Device Compatibility

| Device | Requirement |
|--------|-------------|
| Mobile | 4-6GB RAM |
| Laptop | 16GB RAM |
| Server | GPU |

Minimum recommended: 33GB+ RAM
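As a rough sanity check on the RAM numbers above: IQ4_KS stores roughly 4.25 bits per weight, so a quick back-of-envelope estimate (a sketch only; the recipe below keeps attention and embedding tensors at q8_0, so the real file is somewhat larger):

```shell
# Back-of-envelope weight-memory estimate for a 35B-param model at ~4.25
# bits/weight (the approximate IQ4_KS rate; q8_0 tensors push this higher).
params=35000000000
bits_x100=425                      # 4.25 bits, scaled by 100 for integer math
bytes=$(( params * bits_x100 / 100 / 8 ))
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"   # ~17 GiB for weights alone
```

KV cache, compute buffers, and the higher-precision tensors account for the gap between this figure and the 33GB+ recommendation.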
## Code Examples

### Quantization recipe: 60 Repeating Layers [0-59]

```bash
#!/usr/bin/env bash
custom="
# 60 Repeating Layers [0-59]
## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_alpha\.weight=f32
blk\..*\.ssm_beta\.weight=f32
blk\..*\.ssm_out\.weight=q8_0
# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_ks
# Non-Repeating Layers
token_embd\.weight=q8_0
output\.weight=q8_0
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
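# The pipeline above strips the '#' comment lines and joins the remaining
# rules with commas, so --custom-q receives one comma-separated argument,
# e.g.: blk\..*\.attn_gate\.weight=q8_0,blk\..*\.attn_qkv\.weight=q8_0,...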
# Pin to one NUMA node (set SOCKET to the target node first, e.g. SOCKET=0)
#--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/imatrix-Qwen3.5-35B-A3B-BF16.dat \
/mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-BF16-00001-of-00002.gguf \
/mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-IQ4_KS.gguf \
IQ4_KS \
128
```

## Quick Start

```bash
# Clone and checkout ik_llama.cpp
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp
# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
# Download Desired Quants
$ pip install huggingface_hub
$ hf download --local-dir ./ --include=*IQ4_KS.gguf ubergarm/Qwen3.5-35B-A3B-GGUF
# Full GPU Offload
# NOTE: https://github.com/ikawrakow/ik_llama.cpp/pull/1198
model=./Qwen3.5-35B-A3B-IQ4_KS.gguf
./build/bin/llama-server \
--alias Qwen3.5-35B-A3B \
--model "$model" \
-c 131072 \
-ctk f16 -ctv q8_0 \
-fa on \
-cuda fa-offset=0 \
-ub 1024 -b 2048 \
--merge-qkv \
-muge \
-ngl 999 \
--no-mmap \
--parallel 1 \
--threads 1 \
--host 127.0.0.1 \
--port 8080 \
--jinja \
--ctx-checkpoints 8
```
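Once the server is up, it exposes an OpenAI-compatible HTTP API on the host/port set above. A minimal smoke test, assuming `/v1/chat/completions` is available as in upstream llama-server (the payload below is illustrative; `"model"` must match the `--alias` flag):

```shell
# Build an OpenAI-style chat request for the alias configured above.
payload='{"model":"Qwen3.5-35B-A3B","messages":[{"role":"user","content":"Say hi"}],"max_tokens":16}'
# Send it once the server is running:
# curl -s http://127.0.0.1:8080/v1/chat/completions \
#      -H 'Content-Type: application/json' -d "$payload"
echo "$payload"
```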