Qwen3.5-27B-GGUF

ik_llama.cpp quantizations by ubergarm
Language Model · 27B params · 2K downloads
Early-stage · Edge AI: Mobile / Laptop / Server · 61GB+ RAM
Quick Summary

Qwen3.5-27B is a 27B-parameter language model, provided here as GGUF quantizations built with ik_llama.cpp.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 26GB+ RAM
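As a rough sanity check on the RAM figures above: a quantized model's file size is approximately parameter count × bits per weight ÷ 8. Assuming IQ5_KS averages about 5.25 bits per weight (an assumption; the effective rate depends on the exact per-tensor quant mix), the math works out to:

```shell
# Rough GGUF size: 27B params at an assumed ~5.25 bits/weight average.
awk 'BEGIN { params = 27e9; bpw = 5.25; printf "%.1f GB\n", params * bpw / 8 / 1e9 }'
# -> 17.7 GB
```

That file size sits below the 26GB+ minimum quoted above, which leaves headroom for the KV cache and runtime buffers on top of the weights.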

Code Examples

64 Repeating Layers [0-63]

```bash
#!/usr/bin/env bash

# Per-tensor quantization recipe: one "tensor-name-regex=quant-type" rule per line.
custom="
# 64 Repeating Layers [0-63]

## Gated Attention/Delta Net [Blended 0-63]
blk\..*\.attn_gate\.weight=q6_0
blk\..*\.attn_qkv\.weight=q6_0
blk\..*\.attn_output\.weight=q6_0
blk\..*\.attn_q\.weight=q6_0
blk\..*\.attn_k\.weight=q6_0
blk\..*\.attn_v\.weight=q6_0
blk\..*\.ssm_alpha\.weight=q8_0
blk\..*\.ssm_beta\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0

# Dense Layers [0-63]
blk\.[0-4]\.ffn_down_exps\.weight=q6_0
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks

# Non-Repeating Layers
token_embd\.weight=q6_0
output\.weight=q8_0
"

# Drop comment lines, then join the remaining rules into the single
# comma-separated string that --custom-q expects (sed -z needs GNU sed).
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

# SOCKET selects the NUMA node to pin CPU and memory to (e.g. SOCKET=0).
# Add --dry-run to the llama-quantize arguments to preview the
# tensor-to-type mapping without writing the output file.
#--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3.5-27B-GGUF/imatrix-Qwen3.5-27B-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3.5-27B-GGUF/Qwen3.5-27B-BF16-00001-of-00002.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-27B-GGUF/Qwen3.5-27B-IQ5_KS.gguf \
    IQ5_KS \
    128
```

The trailing positional arguments are the input and output GGUF paths, the overall target quant type (IQ5_KS, applied where no custom rule matches), and the thread count (128).
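The grep/sed pipeline in the script collapses the human-readable recipe into the flat, comma-separated rule list that `--custom-q` consumes. A minimal standalone sketch of that transformation, using two rules taken from the recipe (requires GNU sed for the `-z` flag):

```shell
#!/usr/bin/env bash
# Toy recipe: comment lines and blank lines are noise, rules are kept.
custom="
# Non-Repeating Layers
token_embd\.weight=q6_0
output\.weight=q8_0
"

# Same pipeline as the quantization script: drop comment lines, squash
# newline runs into commas, then trim any leading/trailing comma.
custom=$(echo "$custom" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::')

echo "$custom"   # token_embd\.weight=q6_0,output\.weight=q8_0
```

The `sed -z` trick treats the whole input as one NUL-terminated record, so the newline substitutions can see and merge line boundaries that per-line sed would never match.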
