moonshotai

16 models

Kimi-K2.5

1,555,418 downloads • 2,186 likes

Kimi-K2-Thinking

Kimi K2 Thinking is Moonshot AI's latest and most capable open-source thinking model. Built on Kimi K2, it is a thinking agent that reasons step-by-step while dynamically invoking tools. It sets a new state-of-the-art on Humanity's Last Exam (HLE), BrowseComp, and other benchmarks by dramatically scaling multi-step reasoning depth and maintaining stable tool use across 200–300 sequential calls. At the same time, K2 Thinking is a natively INT4-quantized model with a 256k context window, achieving lossless reductions in inference latency and GPU memory usage.

Key Features

- Deep Thinking & Tool Orchestration: end-to-end trained to interleave chain-of-thought reasoning with function calls, enabling autonomous research, coding, and writing workflows that last hundreds of steps without drift.
- Native INT4 Quantization: Quantization-Aware Training (QAT) is employed in the post-training stage to achieve a lossless 2x speed-up in low-latency mode.
- Stable Long-Horizon Agency: maintains coherent, goal-directed behavior across up to 200–300 consecutive tool invocations, surpassing prior models that degrade after 30–50 steps.

| | |
|:---:|:---:|
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 1T |
| Activated Parameters | 32B |
| Number of Layers (Dense layer included) | 61 |
| Number of Dense Layers | 1 |
| Attention Hidden Dimension | 7168 |
| MoE Hidden Dimension (per Expert) | 2048 |
| Number of Attention Heads | 64 |
| Number of Experts | 384 |
| Selected Experts per Token | 8 |
| Number of Shared Experts | 1 |
| Vocabulary Size | 160K |
| Context Length | 256K |
| Attention Mechanism | MLA |
| Activation Function | SwiGLU |

Reasoning Tasks

| Benchmark | Setting | K2 Thinking | GPT-5 (High) | Claude Sonnet 4.5 (Thinking) | K2 0905 | DeepSeek-V3.2 | Grok-4 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| HLE (Text-only) | no tools | 23.9 | 26.3 | 19.8 | 7.9 | 19.8 | 25.4 |
| | w/ tools | 44.9 | 41.7 | 32.0 | 21.7 | 20.3 | 41.0 |
| | heavy | 51.0 | 42.0 | - | - | - | 50.7 |
| AIME25 | no tools | 94.5 | 94.6 | 87.0 | 51.0 | 89.3 | 91.7 |
| | w/ python | 99.1 | 99.6 | 100.0 | 75.2 | 58.1 | 98.8 |
| | heavy | 100.0 | 100.0 | - | - | - | 100.0 |
| HMMT25 | no tools | 89.4 | 93.3 | 74.6 | 38.8 | 83.6 | 90.0 |
| | w/ python | 95.1 | 96.7 | 88.8 | 70.4 | 49.5 | 93.9 |
| | heavy | 97.5 | 100.0 | - | - | - | 96.7 |
| IMO-AnswerBench | no tools | 78.6 | 76.0 | 65.9 | 45.8 | 76.0 | 73.1 |
| GPQA | no tools | 84.5 | 85.7 | 83.4 | 74.2 | 79.9 | 87.5 |

General Tasks

| Benchmark | Setting | K2 Thinking | GPT-5 (High) | Claude Sonnet 4.5 (Thinking) | K2 0905 | DeepSeek-V3.2 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| MMLU-Pro | no tools | 84.6 | 87.1 | 87.5 | 81.9 | 85.0 |
| MMLU-Redux | no tools | 94.4 | 95.3 | 95.6 | 92.7 | 93.7 |
| Longform Writing | no tools | 73.8 | 71.4 | 79.8 | 62.8 | 72.5 |
| HealthBench | no tools | 58.0 | 67.2 | 44.2 | 43.8 | 46.9 |

Agentic Search Tasks

| Benchmark | Setting | K2 Thinking | GPT-5 (High) | Claude Sonnet 4.5 (Thinking) | K2 0905 | DeepSeek-V3.2 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| BrowseComp | w/ tools | 60.2 | 54.9 | 24.1 | 7.4 | 40.1 |
| BrowseComp-ZH | w/ tools | 62.3 | 63.0 | 42.4 | 22.2 | 47.9 |
| Seal-0 | w/ tools | 56.3 | 51.4 | 53.4 | 25.2 | 38.5 |
| FinSearchComp-T3 | w/ tools | 47.4 | 48.5 | 44.0 | 10.4 | 27.0 |
| Frames | w/ tools | 87.0 | 86.0 | 85.0 | 58.1 | 80.2 |
Coding Tasks

| Benchmark | Setting | K2 Thinking | GPT-5 (High) | Claude Sonnet 4.5 (Thinking) | K2 0905 | DeepSeek-V3.2 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| SWE-bench Verified | w/ tools | 71.3 | 74.9 | 77.2 | 69.2 | 67.8 |
| SWE-bench Multilingual | w/ tools | 61.1 | 55.3 | 68.0 | 55.9 | 57.9 |
| Multi-SWE-bench | w/ tools | 41.9 | 39.3 | 44.3 | 33.5 | 30.6 |
| SciCode | no tools | 44.8 | 42.9 | 44.7 | 30.7 | 37.7 |
| LiveCodeBenchV6 | no tools | 83.1 | 87.0 | 64.0 | 56.1 | 74.1 |
| OJ-Bench (cpp) | no tools | 48.7 | 56.2 | 30.4 | 25.5 | 38.2 |
| Terminal-Bench | w/ simulated tools (JSON) | 47.1 | 43.8 | 51.0 | 44.5 | 37.7 |

Evaluation notes:

1. To ensure a fast, lightweight experience, we selectively employ a subset of tools and reduce the number of tool-call steps in the chat mode on kimi.com. As a result, chatting on kimi.com may not reproduce our benchmark scores. Our agentic mode will be updated soon to reflect the full capabilities of K2 Thinking.
2. Testing details:
   2.1. All benchmarks were evaluated at temperature = 1.0 and 256k context length for K2 Thinking, except for SciCode, for which we followed the official temperature setting of 0.0.
   2.2. HLE (no tools), AIME25, HMMT25, and GPQA were capped at a 96k thinking-token budget, while IMO-AnswerBench, LiveCodeBench, and OJ-Bench were capped at a 128k thinking-token budget. Longform Writing was capped at a 32k completion-token budget.
   2.3. For AIME and HMMT (no tools), we report the average of 32 runs (avg@32). For AIME and HMMT (with Python), we report the average of 16 runs (avg@16). For IMO-AnswerBench, we report the average of 8 runs (avg@8).
3. Baselines:
   3.1. GPT-5, Claude Sonnet 4.5, Grok-4, and DeepSeek-V3.2 results are quoted from the GPT-5 post, the GPT-5 for Developers post, the GPT-5 system card, the claude-sonnet-4-5 post, the grok-4 post, the deepseek-v3.2 post, the public Terminal-Bench leaderboard (Terminus-2), the public Vals AI leaderboard, and Artificial Analysis. Benchmarks for which no public scores were available were re-tested under the same conditions used for K2 Thinking and are marked with an asterisk (*). For the GPT-5 tests, we set the reasoning effort to high.
   3.2. The official GPT-5 and Grok-4 scores on the HLE full set with tools are 35.2 and 38.6, respectively. In our internal evaluation on the HLE text-only subset, GPT-5 scores 41.7 and Grok-4 scores 38.6 (Grok-4's launch cited 41.0 on the text-only subset). For GPT-5's HLE text-only score without tools, we use the score from Scale.ai; the official GPT-5 HLE full-set score without tools is 24.8.
   3.3. For IMO-AnswerBench: GPT-5 scored 65.6 in the benchmark paper. We re-evaluated GPT-5 with the official API and obtained a score of 76.
4. For HLE (w/ tools) and the agentic-search benchmarks:
   4.1. K2 Thinking was equipped with search, code-interpreter, and web-browsing tools.
   4.2. BrowseComp-ZH, Seal-0, and FinSearchComp-T3 were run 4 times independently and the average is reported (avg@4).
   4.3. The evaluation used o3-mini as judge, configured identically to the official HLE setting; judge prompts were taken verbatim from the official repository.
   4.4. On HLE, the maximum step limit was 120, with a 48k-token reasoning budget per step; on agentic-search tasks, the limit was 300 steps with a 24k-token reasoning budget per step.
   4.5. When tool execution results cause the accumulated input to exceed the model's context limit (256k), we employ a simple context-management strategy that hides all previous tool outputs (a minimal sketch follows these notes).
   4.6. Web access to Hugging Face may lead to data leakage in certain benchmark tests, such as HLE: K2 Thinking can reach a score of 51.3 on HLE without blocking Hugging Face. To ensure a fair and rigorous comparison, we blocked access to Hugging Face during testing.
5. For coding tasks:
   5.1. Terminal-Bench scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser.
   5.2. For other coding tasks, results were produced with our in-house evaluation harness. The harness is derived from SWE-agent, but we clamp the context windows of the Bash and Edit tools and rewrite the system prompt to match the task semantics.
   5.3. All reported coding-task scores are averaged over 5 independent runs.
6. Heavy mode: K2 Thinking Heavy Mode employs an efficient parallel strategy: it first rolls out eight trajectories simultaneously, then reflectively aggregates all outputs to generate the final result. Heavy mode for GPT-5 denotes the official GPT-5 Pro score.
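The strategy in note 4.5 is straightforward to implement. Below is a minimal, hypothetical sketch (the function names, message schema, and placeholder text are illustrative, not Moonshot's actual harness); one plausible reading keeps the most recent tool result intact so the model can still act on it:

```python
# Illustrative sketch of the context-management strategy in note 4.5:
# when accumulated input would exceed the context limit, hide all
# previous tool outputs, keeping only the most recent tool result.

def hide_previous_tool_outputs(messages, placeholder="[earlier tool output hidden]"):
    """Mask the content of every tool message except the latest one."""
    tool_turns = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    for i in tool_turns[:-1]:  # the latest tool result stays intact
        messages[i] = {**messages[i], "content": placeholder}
    return messages

def manage_context(messages, prompt_tokens, limit=256_000):
    """Apply the strategy only once the prompt no longer fits."""
    if prompt_tokens > limit:
        return hide_previous_tool_outputs(messages)
    return messages
```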
INT4 Quantization

Low-bit quantization is an effective way to reduce inference latency and GPU memory usage on large-scale inference servers. However, thinking models produce very long decodes, so quantization often causes substantial performance drops. To overcome this challenge, we adopt Quantization-Aware Training (QAT) during the post-training phase, applying INT4 weight-only quantization to the MoE components. This allows K2 Thinking to support native INT4 inference with a roughly 2x generation-speed improvement while achieving state-of-the-art performance. All benchmark results are reported under INT4 precision.

The checkpoints are saved in the compressed-tensors format, which is supported by most mainstream inference engines. If you need the checkpoints in a higher precision such as FP8 or BF16, refer to the official compressed-tensors repository to unpack the INT4 weights and convert them to any higher precision.

Deployment

> [!Note]
> You can access K2 Thinking's API at https://platform.moonshot.ai; we provide an OpenAI/Anthropic-compatible API.

Deployment examples for the recommended inference engines can be found in the Model Deployment Guide. Once the local inference service is up, you can interact with it through the chat endpoint.

> [!NOTE]
> The recommended temperature for Kimi-K2-Thinking is `temperature = 1.0`.
> If no special instructions are required, a simple default system prompt is a good choice.

Kimi-K2-Thinking has the same tool-calling settings as Kimi-K2-Instruct. To enable them, pass the list of available tools in each request; the model will then autonomously decide when and how to invoke them. A weather-tool example demonstrating this pipeline end-to-end is sketched after this card: the `tool_call_with_client` function implements the pipeline from user query to tool execution, and requires the inference engine to support Kimi-K2's native tool-parsing logic. For more information, see the Tool Calling Guide.

Both the code repository and the model weights are released under the Modified MIT License. If you have any questions, please reach out at [email protected].
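As referenced above, here is a minimal, hypothetical reconstruction of the weather-tool example (the original code block did not survive page rendering). It assumes a local OpenAI-compatible endpoint, e.g. served by vLLM or SGLang; the base URL, model name, tool schema, and `get_weather` stub are all illustrative, and only the `tool_call_with_client` name comes from the card's text:

```python
# Hypothetical reconstruction of the weather-tool pipeline.
import json
from openai import OpenAI

# Assumed local OpenAI-compatible server; adjust URL/model name as needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def get_weather(city: str) -> dict:
    """Stub standing in for a real weather service."""
    return {"city": city, "condition": "sunny", "temperature_c": 22}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def tool_call_with_client(query: str, model: str = "moonshotai/Kimi-K2-Thinking"):
    """Drive the loop from user query to tool execution to final answer."""
    messages = [{"role": "user", "content": query}]
    while True:
        resp = client.chat.completions.create(
            model=model, messages=messages, tools=tools, temperature=1.0
        )
        choice = resp.choices[0]
        if choice.finish_reason != "tool_calls":
            return choice.message.content  # model is done calling tools
        messages.append(choice.message)    # keep the assistant turn with its tool calls
        for call in choice.message.tool_calls:
            args = json.loads(call.function.arguments)
            result = (get_weather(**args) if call.function.name == "get_weather"
                      else {"error": f"unknown tool {call.function.name}"})
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })

print(tool_call_with_client("What is the weather in Beijing today?"))
```

The loop appends each assistant tool-call turn and its tool result back onto `messages` until the model stops requesting tools, mirroring the standard OpenAI tool-calling protocol.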

300,023 downloads • 1,643 likes

Kimi-Linear-48B-A3B-Instruct

license:mit
195,434 downloads • 505 likes

Kimi-VL-A3B-Instruct

license:mit
177,073 downloads • 250 likes

Kimi-K2-Instruct

176,639 downloads • 2,309 likes

Kimi-VL-A3B-Thinking-2506

base_model: moonshotai/Kimi-VL-A3B-Instruct • pipeline_tag: image-text-to-text • library_name: transformers

license:mit
157,287 downloads • 340 likes

Kimi-VL-A3B-Thinking

license:mit
69,525 downloads • 445 likes

Moonlight-16B-A3B

license:mit
46,235 downloads • 104 likes

Moonlight-16B-A3B-Instruct

license:mit
34,145 downloads • 187 likes

Kimi-K2-Base

18,389 downloads • 287 likes

Kimi-K2-Instruct-0905

📰 Tech Blog | 📄 Paper

Kimi K2-Instruct-0905 is the latest and most capable version of Kimi K2. It is a state-of-the-art mixture-of-experts (MoE) language model, featuring 32 billion activated parameters and a total of 1 trillion parameters.

Key Features

- Enhanced agentic coding intelligence: Kimi K2-Instruct-0905 demonstrates significant improvements on public benchmarks and real-world coding-agent tasks.
- Improved frontend coding experience: advancements in both the aesthetics and practicality of frontend programming.
- Extended context length: the context window has been increased from 128k to 256k tokens, providing better support for long-horizon tasks.

| | |
|:---:|:---:|
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 1T |
| Activated Parameters | 32B |
| Number of Layers (Dense layer included) | 61 |
| Number of Dense Layers | 1 |
| Attention Hidden Dimension | 7168 |
| MoE Hidden Dimension (per Expert) | 2048 |
| Number of Attention Heads | 64 |
| Number of Experts | 384 |
| Selected Experts per Token | 8 |
| Number of Shared Experts | 1 |
| Vocabulary Size | 160K |
| Context Length | 256K |
| Attention Mechanism | MLA |
| Activation Function | SwiGLU |

| Benchmark | Metric | K2-Instruct-0905 | K2-Instruct-0711 | Qwen3-Coder-480B-A35B-Instruct | GLM-4.5 | DeepSeek-V3.1 | Claude-Sonnet-4 | Claude-Opus-4 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| SWE-Bench Verified | ACC | 69.2 ± 0.63 | 65.8 | 69.6 | 64.2 | 66.0 | 72.7 | 72.5 |
| SWE-Bench Multilingual | ACC | 55.9 ± 0.72 | 47.3 | 54.7 | 52.7 | 54.5 | 53.3 | - |
| Multi-SWE-Bench | ACC | 33.5 ± 0.28 | 31.3 | 32.7 | 31.7 | 29.0 | 35.7 | - |
| Terminal-Bench | ACC | 44.5 ± 2.03 | 37.5 | 37.5 | 39.9 | 31.3 | 36.4 | 43.2 |
| SWE-Dev | ACC | 66.6 ± 0.72 | 61.9 | 64.7 | 63.2 | 53.3 | 67.1 | - |

Evaluation notes:

- All K2-Instruct-0905 numbers are reported as mean ± std over five independent, full-test-set runs.
- Before each run we prune the repository so that every Git object unreachable from the target commit disappears; this guarantees the agent sees only the code that would legitimately be available at that point in history.
- Except for Terminal-Bench (Terminus-2), every result was produced with our in-house evaluation harness. The harness is derived from SWE-agent, but we clamp the context windows of the Bash and Edit tools and rewrite the system prompt to match the task semantics.
- All baseline figures denoted with an asterisk (*) are excerpted directly from their official reports or public leaderboards; the remaining metrics were evaluated by us under conditions identical to those used for K2-Instruct-0905.
- For SWE-Dev we go one step further: we overwrite the original repository files and delete any test file that exercises the functions the agent is expected to generate, eliminating any indirect hints about the desired implementation.

Deployment

> [!Note]
> You can access Kimi K2's API at https://platform.moonshot.ai; we provide an OpenAI/Anthropic-compatible API.
>
> The Anthropic-compatible API maps temperature by `real_temperature = request_temperature * 0.6` for better compatibility with existing applications.
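In other words, sending `temperature = 1.0` through the Anthropic-compatible endpoint samples at an effective 0.6, matching the model's recommended setting. A one-line illustration of the mapping (hypothetical helper, not part of the API):

```python
def real_temperature(request_temperature: float) -> float:
    """Anthropic-compatible endpoint mapping: real = request * 0.6."""
    return request_temperature * 0.6

assert real_temperature(1.0) == 0.6  # a request of 1.0 samples at the recommended 0.6
```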
Our model checkpoints are stored in the block-fp8 format; you can find them on Hugging Face. Deployment examples for the recommended inference engines, vLLM and SGLang, can be found in the Model Deployment Guide. Once the local inference service is up, you can interact with it through the chat endpoint (a minimal sketch follows at the end of this card).

> [!NOTE]
> The recommended temperature for Kimi-K2-Instruct-0905 is `temperature = 0.6`.
> If no special instructions are required, a simple default system prompt is a good choice.

Kimi-K2-Instruct-0905 has strong tool-calling capabilities. To enable them, pass the list of available tools in each request; the model will then autonomously decide when and how to invoke them. As in the Kimi-K2-Thinking card above, a `tool_call_with_client`-style function implements the pipeline from user query to tool execution, and requires the inference engine to support Kimi-K2's native tool-parsing logic. For more information, see the Tool Calling Guide.

Both the code repository and the model weights are released under the Modified MIT License. If you have any questions, please reach out at [email protected].
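A minimal sketch of interacting with the chat endpoint described above, assuming a local OpenAI-compatible server (the base URL, model name, and prompts are illustrative):

```python
from openai import OpenAI

# Assumed local OpenAI-compatible endpoint (e.g. vLLM or SGLang).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct-0905",
    messages=[
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": "Briefly explain mixture-of-experts routing."},
    ],
    temperature=0.6,  # recommended setting for Kimi-K2-Instruct-0905
)
print(response.choices[0].message.content)
```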

11,194 downloads • 663 likes

Kimi-Dev-72B

license:mit
7,757 downloads • 374 likes

Kimi-Audio-7B

🤗 Kimi-Audio-7B | 🤗 Kimi-Audio-7B-Instruct | 📑 Paper

We present Kimi-Audio, an open-source audio foundation model excelling in audio understanding, generation, and conversation. This repository hosts the model checkpoints for Kimi-Audio-7B.

Kimi-Audio is designed as a universal audio foundation model capable of handling a wide variety of audio-processing tasks within a single unified framework. Key features include:

- Universal Capabilities: handles diverse tasks like speech recognition (ASR), audio question answering (AQA), audio captioning (AAC), speech emotion recognition (SER), sound event/scene classification (SEC/ASC), and end-to-end speech conversation.
- State-of-the-Art Performance: achieves SOTA results on numerous audio benchmarks (see our Technical Report).
- Large-Scale Pre-training: pre-trained on over 13 million hours of diverse audio data (speech, music, sounds) and text data.
- Novel Architecture: employs a hybrid audio input (continuous acoustic + discrete semantic tokens) and an LLM core with parallel heads for text and audio token generation.
- Efficient Inference: features a chunk-wise streaming detokenizer based on flow matching for low-latency audio generation.

For more details, please refer to our GitHub Repository and Technical Report.

Kimi-Audio-7B is a base model without fine-tuning, so it cannot be used directly. The base model is quite flexible: you can fine-tune it on any downstream task. If you are looking for an out-of-the-box model, please refer to Kimi-Audio-7B-Instruct.

If you find Kimi-Audio useful in your research or applications, please cite our technical report.

The model is based on and modified from Qwen2.5-7B. Code derived from Qwen2.5-7B is licensed under the Apache 2.0 License; other parts of the code are licensed under the MIT License.
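Since the base checkpoint is intended as a starting point for fine-tuning rather than direct use, a minimal loading sketch with Hugging Face Transformers follows. It assumes the repository ships custom modeling code loadable via `trust_remote_code`; the full pipeline (audio tokenizer and streaming detokenizer) lives in the official GitHub repository, so treat this as illustrative only:

```python
# Illustrative only: load the base checkpoint as a starting point for
# fine-tuning. The complete inference pipeline (audio tokenizer and
# streaming detokenizer) lives in the official Kimi-Audio repository.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "moonshotai/Kimi-Audio-7B",
    trust_remote_code=True,  # assumed: checkpoint defines custom architecture code
    torch_dtype="auto",
    device_map="auto",
)
model.train()  # fine-tune on your downstream audio task from here
```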

license:mit
2,196 downloads • 69 likes

Kimi-Linear-48B-A3B-Base

license:mit
905 downloads • 65 likes

Kimi-Audio-7B-Instruct

license:mit
537 downloads • 361 likes

MoonViT-SO-400M

license:mit
162 downloads • 27 likes