# Kimi-K2-Thinking-GGUF

by ubergarm
## Quick Summary

imatrix quantization of moonshotai/Kimi-K2-Thinking. UPDATE: the `smol-IQ3KS` scored 77.
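The card doesn't show the imatrix step itself. A minimal sketch of how such a file is typically produced with the `llama-imatrix` tool, assuming the Q8_0-Q4_0 quant from the first recipe below as the base model and a placeholder calibration corpus:

```bash
# Hedged sketch: generate an importance matrix over a calibration corpus.
# "calibration.txt" is a placeholder; use any representative text corpus.
./build/bin/llama-imatrix \
    --model /mnt/data/models/ubergarm/Kimi-K2-Thinking-GGUF/Kimi-K2-Thinking-Q8_0-Q4_0.gguf \
    -f calibration.txt \
    -o imatrix-Kimi-K2-Thinking-Q8_0-Q4_0.dat
```

The resulting `.dat` file matches what the `smol-IQ4_KSS` recipe below passes via `--imatrix`.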

## Code Examples

### Q4_0 (patched) routed experts approximating original QAT design

```bash
#!/usr/bin/env bash

# Q4_0 (patched) routed experts approximating original QAT design
# Q8_0 everything else

custom="
## Attention [0-60] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0

# Balance of attn tensors
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

## First Single Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

## Shared Expert [1-60] (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

## Routed Experts [1-60] (CPU)
blk\..*\.ffn_down_exps\.weight=q4_0
blk\..*\.ffn_(gate|up)_exps\.weight=q4_0

token_embd\.weight=q8_0
output\.weight=q8_0
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

# SOCKET selects the NUMA node to bind CPU and memory to (e.g. SOCKET=0).
# Positional args: input BF16 GGUF, output GGUF, the default quant type
# applied to any tensor not matched by --custom-q (Q8_0), and the thread
# count (128).
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    /mnt/data/models/ubergarm/Kimi-K2-Thinking-GGUF/-384x14B-BF16-00001-of-00046.gguf \
    /mnt/data/models/ubergarm/Kimi-K2-Thinking-GGUF/Kimi-K2-Thinking-Q8_0-Q4_0.gguf \
    Q8_0 \
    128
```
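The `grep`/`sed` pipeline above simply strips the comment lines and joins the remaining `regex=type` rules into the single comma-separated string that `--custom-q` expects. An illustration on a two-rule input (output shown in the trailing comment):

```bash
# Illustration only: what the pipeline does to a small rule set.
rules="
## comment lines are dropped by grep -v '^#'
blk\..*\.attn_output\.weight=q8_0
blk\..*\.ffn_down_exps\.weight=q4_0
"
echo "$rules" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
# -> blk\..*\.attn_output\.weight=q8_0,blk\..*\.ffn_down_exps\.weight=q4_0
```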
### smol-IQ4_KSS 485.008 GiB (4.059 BPW)

```bash
#!/usr/bin/env bash

custom="
## Attention [0-60] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0

# Balance of attn tensors
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

## First Single Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

## Shared Expert [1-60] (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

## Routed Experts [1-60] (CPU)
blk\..*\.ffn_down_exps\.weight=iq4_kss
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss

token_embd\.weight=iq6_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

# Same NUMA-pinned invocation as above, plus --imatrix so the importance
# matrix guides the IQ4_KSS quantization of the routed experts.
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Kimi-K2-Thinking-GGUF/imatrix-Kimi-K2-Thinking-Q8_0-Q4_0.dat \
    /mnt/data/models/ubergarm/Kimi-K2-Thinking-GGUF/-384x14B-BF16-00001-of-00046.gguf \
    /mnt/data/models/ubergarm/Kimi-K2-Thinking-GGUF/Kimi-K2-Thinking-smol-IQ4_KSS.gguf \
    IQ4_KSS \
    128
```
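As a rough sanity check, the advertised 485.008 GiB at 4.059 BPW implies a total parameter count of roughly 1T; the figure below is inferred from those two numbers on the card, not stated on it:

```bash
# BPW = file size in bits / parameter count (~1.026e12 is an inference).
python3 -c 'print(485.008 * 2**30 * 8 / 1.026e12)'   # prints ~4.06
ls -lh /mnt/data/models/ubergarm/Kimi-K2-Thinking-GGUF/Kimi-K2-Thinking-smol-IQ4_KSS.gguf
```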
### Example running Hybrid CPU+GPU(s) on ik_llama.cpp

```bash
# Example running Hybrid CPU+GPU(s) on ik_llama.cpp.
# -ngl 99 offloads everything by default; the -ot tensor-override regexes
# then pin a few layers' FFN tensors to each GPU and keep the remaining
# routed experts on CPU. Adjust layer ranges and thread counts to your
# hardware.
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Kimi-K2-Thinking-GGUF \
    --ctx-size 32768 \
    -ctk q8_0 \
    -mla 3 \
    -ngl 99 \
    -ot "blk\.(1|2|3)\.ffn_.*=CUDA0" \
    -ot "blk\.(4|5|6)\.ffn_.*=CUDA1" \
    -ot exps=CPU \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja \
    --chat-template-file updatedChatTemplate.jinja \
    --special

# Example running mainline llama.cpp:
# remove `-mla 3` from the command above and you should be :gucci:
```
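Once the server is up, a quick smoke test against its OpenAI-compatible chat endpoint (the model name matches the `--alias` above):

```bash
# Smoke test: one chat completion against the local llama-server.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ubergarm/Kimi-K2-Thinking-GGUF",
        "messages": [{"role": "user", "content": "Say hello."}]
      }'
```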
