# Qwen3-Coder-Next-GGUF

ik_llama.cpp quants by ubergarm
## Quick Summary

GGUF quantizations of Qwen3-Coder-Next for ik_llama.cpp, with quantization recipes and launch commands for full-GPU, hybrid CPU+GPU, and CPU-only inference.

## Code Examples

### Q4_0 44.355 GiB (4.782 BPW)

```bash
#!/usr/bin/env bash

custom="
# 60 Repeating Layers [0-59]

## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_ba\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0

# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=q4_1
blk\..*\.ffn_(gate|up)_exps\.weight=q4_0

# Non-Repeating Layers
token_embd\.weight=q4_1
output\.weight=q8_0
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

# (append --dry-run to the llama-quantize flags below to preview without writing)
numactl -N "${SOCKET}" -m "${SOCKET}" \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/imatrix-Qwen3-Coder-Next-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-512x2.5B-BF16-00001-of-00004.gguf \
    /mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-Q4_0.gguf \
    Q4_0 \
    128
```
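The `custom` recipe above is flattened into the single comma-separated string that `--custom-q` expects: comment lines are dropped, then runs of newlines become commas. A minimal sketch of just that transform, on a toy two-entry recipe (GNU sed is assumed for `-E`/`-z`):

```shell
#!/usr/bin/env bash
# Toy version of the recipe-flattening pipeline used above:
# strip comment lines, then join the remaining lines with commas (GNU sed).
custom="
# a comment line that gets dropped
blk\..*\.attn_q\.weight=q8_0

blk\..*\.ffn_down_exps\.weight=q4_1
"
flat=$(echo "$custom" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::')
echo "$flat"
# -> blk\..*\.attn_q\.weight=q8_0,blk\..*\.ffn_down_exps\.weight=q4_1
```

The `-z` flag makes sed treat the whole input as one record, so `\n+` can collapse blank lines and line breaks alike into single commas.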
## Quick Start

```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)

# Download Desired Quants
$ pip install huggingface_hub
$ hf download --local-dir ./ --include=smol-IQ2_XS/*.gguf ubergarm/Qwen3-Coder-Next-GGUF

# Full GPU offload
# For 2 or more GPUs keep an eye on `-sm graph` support:
# https://github.com/ikawrakow/ik_llama.cpp/pull/1292
CUDA_VISIBLE_DEVICES="0,1" \
./build/bin/llama-server \
  --model "$model" \
  --alias Qwen3-Coder-Next \
  -c 262144 \
  -fa on \
  -ger \
  --merge-qkv \
  -sm graph \
  -ngl 99 \
  -ub 2048 -b 2048 \
  --threads 1 \
  --host 127.0.0.1 \
  --port 8080 \
  --jinja \
  --no-mmap

# Hybrid CPU+GPU
# basically use --n-cpu-moe etc...
echo TODO
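# A hedged sketch of one hybrid layout (untested, not a tuned setting):
# offload everything with -ngl, then push routed-expert layers back to
# CPU via --n-cpu-moe. The value 59 is an illustrative guess for the 60
# repeating layers; lower it until the rest fits your VRAM.
CUDA_VISIBLE_DEVICES="0" \
./build/bin/llama-server \
    --model "$model" \
    --alias Qwen3-Coder-Next \
    --ctx-size 131072 \
    -fa on \
    -ger \
    --merge-qkv \
    -ngl 99 \
    --n-cpu-moe 59 \
    -ub 4096 -b 4096 \
    --host 127.0.0.1 \
    --port 8080 \
    --jinja \
    --no-mmap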

# CPU-Only
# Gated delta net CPU-only performance seems slower than on other architectures; ideally have at least 1x GPU for attn/kv-cache
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-server \
    --model "$model" \
    --alias Qwen3-Coder-Next \
    --ctx-size 131072 \
    -ger \
    --merge-qkv \
    -ctk q8_0 -ctv q8_0 \
    -ub 4096 -b 4096 \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja
```
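Once either server is up, it can be smoke-tested through llama-server's OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the server above is listening on 127.0.0.1:8080 and the model name matches the `--alias` used at launch:

```shell
# Build a chat-completions request for the server started above.
payload='{
  "model": "Qwen3-Coder-Next",
  "messages": [{"role": "user", "content": "Write fizzbuzz in Python."}],
  "max_tokens": 256
}'

# Sanity-check the JSON locally before sending it.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# Send it (requires the llama-server from Quick Start to be running):
# curl -s http://127.0.0.1:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$payload"
```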
