Kimi-K2.5-BF16

license: MIT
by Sherpa
Vision-language model, BF16 weights

Quick Summary

The BF16 release of Kimi K2.5, a multimodal (vision + text) chat model with an optional thinking mode. The weights ship as 29 safetensors shards of roughly 34 GB each, about 1 TB in total.

Device Compatibility

At roughly 1 TB of BF16 weights (29 shards × ~34 GB each), this release is server-class only: plan for a multi-GPU node with about 1 TB of aggregate accelerator memory, or CPU offload with comparable system RAM. Mobile and laptop deployment is not practical at this precision.
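
You can confirm the checkpoint size before provisioning hardware by reading the weight index. A minimal sketch, assuming the standard sharded-safetensors index layout that transformers writes (total size in bytes under metadata.total_size):

```python
import json

# The shard index maps every tensor to its file and records the total
# checkpoint size in bytes under metadata.total_size.
with open("Kimi-K2.5-BF16/model.safetensors.index.json") as f:
    index = json.load(f)

total_gb = index["metadata"]["total_size"] / 1e9
print(f"Checkpoint size: {total_gb:.0f} GB")
```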

Code Examples

Usage (transformers)

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

model_path = "Sherpa/Kimi-K2.5-BF16"

# Load processor
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# Load model (requires significant VRAM)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Text-only example
messages = [{"role": "user", "content": "Explain quantum entanglement in simple terms."}]
inputs = processor.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
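
On a multi-GPU node you can cap per-device usage so that device_map="auto" distributes the shards predictably. A sketch under assumed hardware; the device count and memory limits below are illustrative, not a recommendation:

```python
# Illustrative limits for a hypothetical 8-GPU node; adjust to your
# hardware. Anything that does not fit spills to CPU RAM (slower).
max_memory = {i: "140GiB" for i in range(8)}
max_memory["cpu"] = "512GiB"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory=max_memory,
    trust_remote_code=True,
)
```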

Multimodal (Vision + Text)

```python
from PIL import Image

# Load an image
image = Image.open("example.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
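
Images do not have to come from disk; anything PIL can open works. For example, fetching over HTTP (requests is an assumed extra dependency, and the URL is a placeholder):

```python
from io import BytesIO

import requests
from PIL import Image

resp = requests.get("https://example.com/photo.jpg", timeout=30)
resp.raise_for_status()
image = Image.open(BytesIO(resp.content))
```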

Thinking Mode

```python
messages = [
    {
        "role": "user",
        "content": "Solve this step by step: If a train travels 120km in 2 hours, then speeds up by 50%, how long will it take to travel the next 180km?"
    }
]

# Thinking mode is enabled via the system prompt or generation config;
# refer to the original model documentation for the exact activation.
inputs = processor.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
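
Thinking-style models usually wrap the reasoning trace in delimiter tokens. A hypothetical post-processing sketch, assuming a </think> marker; check the model's chat template for the real delimiters before relying on this:

```python
# "</think>" is an assumed delimiter, not confirmed for this model.
text = processor.decode(outputs[0], skip_special_tokens=False)
if "</think>" in text:
    thinking, answer = text.split("</think>", 1)
    print("Reasoning trace:", thinking.strip())
    print("Answer:", answer.strip())
```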

File Structure

```text
Kimi-K2.5-BF16/
├── config.json                          # Model configuration
├── generation_config.json               # Generation settings
├── model.safetensors.index.json         # Weight index (3.45 MB)
├── model-00001-of-00029.safetensors     # Weight shards (~34GB each)
├── model-00002-of-00029.safetensors
├── ... (29 shards total)
├── model-00029-of-00029.safetensors
├── configuration_deepseek.py            # DeepSeek config class
├── configuration_kimi_k25.py            # Kimi K2.5 config class
├── kimi_k25_processor.py                # Processor implementation
├── kimi_k25_vision_processing.py        # Vision processing
├── media_utils.py                       # Media utilities
└── README.md                            # This file
```
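
To mirror the repository locally before loading, huggingface_hub's snapshot download works; expect roughly 1 TB on disk:

```python
from huggingface_hub import snapshot_download

# Downloads all 29 weight shards plus the custom code files.
local_dir = snapshot_download(repo_id="Sherpa/Kimi-K2.5-BF16")
print("Model downloaded to:", local_dir)
```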
