# Kimi-K2.5-GGUF
GGUF quants for ik_llama.cpp, by ubergarm
## Quick Summary

GGUF quantizations of the Kimi-K2.5 mixture-of-experts model (384 experts, per the source GGUF's 384x14B naming), built for and run with ik_llama.cpp.
## Code Examples

Quantization recipe (per-tensor mix passed to llama-quantize via --custom-q):

```bash
#!/usr/bin/env bash
custom="
## Attention [0-60] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
# Balance of attn tensors
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
## First Single Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0
## Shared Expert [1-60] (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
## Routed Experts [1-60] (CPU)
## NOTE: imatrix is *only* applied to the iq3_k tensors for this recipe
blk\..*\.ffn_down_exps\.weight=q4_0
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_k
token_embd\.weight=iq6_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
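# NOTE (illustrative): after the grep/sed pipeline above, $custom is a single
# comma-separated list of tensor-regex=quant pairs, e.g. (truncated here):
#   blk\..*\.attn_k_b\.weight=q8_0,blk\..*\.attn_v_b\.weight=q8_0,...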
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/imatrix-Kimi-K2.5-Q4_X.dat \
--include-weights ffn_gate_exps \
--include-weights ffn_up_exps \
/mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-384x14B-BF16-00001-of-00046.gguf \
/mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-IQ3_K.gguf \
IQ3_K \
128
```

## Quick Start

Build ik_llama.cpp, then run llama-server:

```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp
# Build for hybrid CPU+CUDA (or set GGML_CUDA=OFF for CPU only)
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
# Hybrid CPU+GPU
echo TODO
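# Illustrative sketch only (not from this card): a common ik_llama.cpp hybrid
# layout offloads all layers to GPU but keeps the routed experts on CPU via
# --override-tensor (-ot); remaining flags as in the CPU-only example below.
#   ./build/bin/llama-server --model "$model" -ngl 99 -ot exps=CPU ...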
# Run CPU-Only on single NUMA node e.g. NPS1
# $model: the IQ3_K file produced by the quantize recipe above
model=/mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-IQ3_K.gguf
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/Kimi-K2.5-GGUF \
--merge-qkv \
--ctx-size 131072 \
-ctk q8_0 \
-mla 3 \
--parallel 1 \
--threads 96 \
--threads-batch 128 \
--numa numactl \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja \
--special \
--chat-template-file ./models/templates/Kimi-K2-Thinking.jinja
# --validate-quants
```
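Once the server is up it speaks the usual OpenAI-compatible HTTP API, so a quick smoke test might look like this (a sketch; the model name matches the --alias set above):

```bash
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ubergarm/Kimi-K2.5-GGUF",
        "messages": [{"role": "user", "content": "Say hello."}]
      }'
```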
The imatrix filename above references Q4_X, which appears to be this one-line patch to q4_0: the block scale is computed as max / -7 instead of max / -8, so the largest-magnitude value lands on quant level -7 and the usable 4-bit grid stays symmetric at ±7.

```diff
diff --git a/ggml/src/ggml-quants.c b/ggml/src/ggml-quants.c
index 20a9831b..05feef4f 100644
--- a/ggml/src/ggml-quants.c
+++ b/ggml/src/ggml-quants.c
@@ -689,7 +689,7 @@ void quantize_row_q4_0_ref(const float * restrict x, block_q4_0 * restrict y, in
             }
         }

-        const float d  = max / -8;
+        const float d  = max / -7;
         const float id = d ? 1.0f/d : 0.0f;

         y[i].d = GGML_FP32_TO_FP16(d);
```
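To rebuild with this change, one way (a sketch; the patch filename is hypothetical) is to save the diff and apply it before the cmake build from the Quick Start:

```bash
# q4x.patch is a name chosen here for illustration; save the diff above into it.
cd ik_llama.cpp
git apply q4x.patch
cmake --build build --config Release -j "$(nproc)"
```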