openbmb
MiniCPM-o-2_6
---
pipeline_tag: any-to-any
datasets:
- openbmb/RLAIF-V-Dataset
library_name: transformers
language:
- multilingual
tags:
- minicpm-o
- omni
- vision
- ocr
- multi-image
- video
- custom_code
- audio
- speech
- voice cloning
- live Streaming
- realtime speech conversation
- asr
- tts
license: apache-2.0
---
MiniCPM-o-4_5
MiniCPM-V-2_6
---
pipeline_tag: image-text-to-text
datasets:
- openbmb/RLAIF-V-Dataset
library_name: transformers
language:
- multilingual
tags:
- minicpm-v
- vision
- ocr
- multi-image
- video
- custom_code
---
MiniCPM-V-4
A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone

MiniCPM-V 4.0 is the latest efficient model in the MiniCPM-V series. The model is built on SigLIP2-400M and MiniCPM4-3B with a total of 4.1B parameters. It inherits the strong single-image, multi-image and video understanding performance of MiniCPM-V 2.6 with largely improved efficiency. Notable features of MiniCPM-V 4.0 include:

- 🔥 **Leading Visual Capability.** With only 4.1B parameters, MiniCPM-V 4.0 achieves an average score of 69.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks, outperforming GPT-4.1-mini-20250414, MiniCPM-V 2.6 (8.1B params, OpenCompass 65.2) and Qwen2.5-VL-3B-Instruct (3.8B params, OpenCompass 64.5). It also shows good performance in multi-image and video understanding.
- 🚀 **Superior Efficiency.** Designed for on-device deployment, MiniCPM-V 4.0 runs smoothly on end devices. For example, it delivers a first-token delay of under 2 s and a decoding speed of more than 17 tokens/s on an iPhone 16 Pro Max, without heating problems. It also shows superior throughput under concurrent requests.
- 💫 **Easy Usage.** MiniCPM-V 4.0 can be easily used in various ways, including llama.cpp, Ollama, vLLM, SGLang, LLaMA-Factory and a local web demo. We also open-source an iOS app that runs on iPhone and iPad. Get started easily with our well-structured Cookbook, featuring detailed instructions and practical examples.

| Model | Size | OpenCompass | OCRBench | MathVista | HallusionBench | MMMU | MMVet | MMBench V1.1 | MMStar | AI2D |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4v-20240409 | - | 63.5 | 656 | 55.2 | 43.9 | 61.7 | 67.5 | 79.8 | 56.0 | 78.6 |
| Gemini-1.5-Pro | - | 64.5 | 754 | 58.3 | 45.6 | 60.6 | 64.0 | 73.9 | 59.1 | 79.1 |
| GPT-4.1-mini-20250414 | - | 68.9 | 840 | 70.9 | 49.3 | 55.0 | 74.3 | 80.9 | 60.9 | 76.0 |
| Claude 3.5 Sonnet-20241022 | - | 70.6 | 798 | 65.3 | 55.5 | 66.4 | 70.1 | 81.7 | 65.1 | 81.2 |
| Qwen2.5-VL-3B-Instruct | 3.8B | 64.5 | 828 | 61.2 | 46.6 | 51.2 | 60.0 | 76.8 | 56.3 | 81.4 |
| InternVL2.5-4B | 3.7B | 65.1 | 820 | 60.8 | 46.6 | 51.8 | 61.5 | 78.2 | 58.7 | 81.4 |
| Qwen2.5-VL-7B-Instruct | 8.3B | 70.9 | 888 | 68.1 | 51.9 | 58.0 | 69.7 | 82.2 | 64.1 | 84.3 |
| InternVL2.5-8B | 8.1B | 68.1 | 821 | 64.5 | 49.0 | 56.2 | 62.8 | 82.5 | 63.2 | 84.6 |
| MiniCPM-V-2.6 | 8.1B | 65.2 | 852 | 60.8 | 48.1 | 49.8 | 60.0 | 78.0 | 57.5 | 82.1 |
| MiniCPM-o-2.6 | 8.7B | 70.2 | 889 | 73.3 | 51.1 | 50.9 | 67.2 | 80.6 | 63.3 | 86.1 |
| MiniCPM-V-4.0 | 4.1B | 69.0 | 894 | 66.9 | 50.8 | 51.2 | 68.0 | 79.7 | 62.8 | 82.9 |

Click to view single image results on ChartQA, MME, RealWorldQA, TextVQA, DocVQA, MathVision, DynaMath, WeMath, Object HalBench and MM HalBench.

| Model | Size | ChartQA | MME | RealWorldQA | TextVQA | DocVQA | MathVision | DynaMath | WeMath | Obj Hal (resp.↓) | Obj Hal (ment.↓) | MM Hal (score↑) | MM Hal (hall.↓) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4v-20240409 | - | 78.5 | 1927 | 61.4 | 78.0 | 88.4 | - | - | - | - | - | - | - |
| Gemini-1.5-Pro | - | 87.2 | - | 67.5 | 78.8 | 93.1 | 41.0 | 31.5 | 50.5 | - | - | - | - |
| GPT-4.1-mini-20250414 | - | - | - | - | - | - | 45.3 | 47.7 | - | - | - | - | - |
| Claude 3.5 Sonnet-20241022 | - | 90.8 | - | 60.1 | 74.1 | 95.2 | 35.6 | 35.7 | 44.0 | - | - | - | - |
| Qwen2.5-VL-3B-Instruct | 3.8B | 84.0 | 2157 | 65.4 | 79.3 | 93.9 | 21.9 | 13.2 | 22.9 | 18.3 | 10.8 | 3.9 | 33.3 |
| InternVL2.5-4B | 3.7B | 84.0 | 2338 | 64.3 | 76.8 | 91.6 | 18.4 | 15.2 | 21.2 | 13.7 | 8.7 | 3.2 | 46.5 |
| Qwen2.5-VL-7B-Instruct | 8.3B | 87.3 | 2347 | 68.5 | 84.9 | 95.7 | 25.4 | 21.8 | 36.2 | 13.3 | 7.9 | 4.1 | 31.6 |
| InternVL2.5-8B | 8.1B | 84.8 | 2344 | 70.1 | 79.1 | 93.0 | 17.0 | 9.4 | 23.5 | 18.3 | 11.6 | 3.6 | 37.2 |
| MiniCPM-V-2.6 | 8.1B | 79.4 | 2348 | 65.0 | 80.1 | 90.8 | 17.5 | 9.0 | 20.4 | 7.3 | 4.7 | 4.0 | 29.9 |
| MiniCPM-o-2.6 | 8.7B | 86.9 | 2372 | 68.1 | 82.0 | 93.5 | 21.7 | 10.4 | 25.2 | 6.3 | 3.4 | 4.1 | 31.3 |
| MiniCPM-V-4.0 | 4.1B | 84.4 | 2298 | 68.5 | 80.8 | 92.9 | 20.7 | 14.2 | 32.7 | 6.3 | 3.5 | 4.1 | 29.2 |

Click to view multi-image and video understanding results on Mantis, Blink and Video-MME.

**License**

The MiniCPM-o/V model weights and code are open-sourced under the Apache-2.0 license.
To help us better understand and support our users, we would deeply appreciate it if you could consider optionally filling out a brief registration "questionnaire".

**Statement**

As an LMM, MiniCPM-V 4.0 generates content by learning from a large amount of multimodal corpora, but it cannot comprehend, express personal opinions or make value judgments. Anything generated by MiniCPM-V 4.0 does not represent the views and positions of the model developers. We will not be liable for any problems arising from the use of the MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or improper utilization of the model.

👏 Welcome to explore key techniques of MiniCPM-V 2.6 and other multimodal projects of our team. If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
MiniCPM-V-4_5
A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone MiniCPM-V 4.5 is the latest and most capable model in the MiniCPM-V series. The model is built on Qwen3-8B and SigLIP2-400M with a total of 8B parameters.
MiniCPM-Llama3-V-2_5
**News**

- [2025.01.14] 🔥🔥🔥 We open-source MiniCPM-o 2.6, with significant performance improvements over MiniCPM-V 2.6, and support for real-time speech-to-speech conversation and multimodal live streaming. Try it now.
- [2024.08.10] 🚀🚀🚀 MiniCPM-Llama3-V 2.5 is now fully supported by official llama.cpp! GGUF models of various sizes are available here.
- [2024.08.06] 🔥🔥🔥 We open-source MiniCPM-V 2.6, which outperforms GPT-4V on single-image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and supports real-time video understanding on iPad. Try it now!
- [2024.08.03] The MiniCPM-Llama3-V 2.5 technical report is released! See here.
- [2024.07.19] MiniCPM-Llama3-V 2.5 supports vLLM now! See here.
- [2024.05.28] 💫 We now support LoRA fine-tuning for MiniCPM-Llama3-V 2.5, using only 2 V100 GPUs! See more statistics here.
- [2024.05.23] 🔥🔥🔥 MiniCPM-V tops GitHub Trending and Hugging Face Trending! Our demo, recommended by Hugging Face Gradio's official account, is available here. Come and try it out!
- [2024.05.20] We open-source MiniCPM-Llama3-V 2.5. It has improved OCR capability and supports 30+ languages, representing the first end-side MLLM to achieve GPT-4V-level performance! We provide efficient inference and simple fine-tuning. Try it now!

MiniCPM-Llama3-V 2.5 is the latest model in the MiniCPM-V series. The model is built on SigLIP-400M and Llama3-8B-Instruct with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.0. Notable features of MiniCPM-Llama3-V 2.5 include:

- 🔥 **Leading Performance.** MiniCPM-Llama3-V 2.5 has achieved an average score of 65.1 on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. With only 8B parameters, it surpasses widely used proprietary models like GPT-4V-1106, Gemini Pro, Claude 3 and Qwen-VL-Max, and greatly outperforms other Llama 3-based MLLMs.
- 💪 **Strong OCR Capabilities.** MiniCPM-Llama3-V 2.5 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), achieving a 700+ score on OCRBench and surpassing proprietary models such as GPT-4o, GPT-4V-0409, Qwen-VL-Max and Gemini Pro. Based on recent user feedback, MiniCPM-Llama3-V 2.5 has now enhanced full-text OCR extraction, table-to-markdown conversion, and other high-utility capabilities, and has further strengthened its instruction-following and complex-reasoning abilities, enhancing multimodal interaction experiences.
- 🏆 **Trustworthy Behavior.** Leveraging the latest RLAIF-V method (the newest technique in the RLHF-V [CVPR'24] series), MiniCPM-Llama3-V 2.5 exhibits more trustworthy behavior. It achieves a 10.3% hallucination rate on Object HalBench, lower than GPT-4V-1106 (13.6%), achieving best-level performance within the open-source community. Data released.
- 🌏 **Multilingual Support.** Thanks to the strong multilingual capabilities of Llama 3 and the cross-lingual generalization technique from VisCPM, MiniCPM-Llama3-V 2.5 extends its bilingual (Chinese-English) multimodal capabilities to over 30 languages, including German, French, Spanish, Italian, Korean, Japanese, etc. All supported languages.
- 🚀 **Efficient Deployment.** MiniCPM-Llama3-V 2.5 systematically employs model quantization, CPU optimizations, NPU optimizations and compilation optimizations, achieving high-efficiency deployment on edge devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time.
After systematic optimization, MiniCPM-Llama3-V 2.5 has realized a 150-fold acceleration in end-side image encoding for multimodal large models and a 3-fold increase in language decoding speed.

- 💫 **Easy Usage.** MiniCPM-Llama3-V 2.5 can be easily used in various ways: (1) llama.cpp and ollama support for efficient CPU inference on local devices, (2) GGUF format quantized models in 16 sizes, (3) efficient LoRA fine-tuning with only 2 V100 GPUs, (4) streaming output, (5) quick local WebUI demo setup with Gradio and Streamlit, and (6) interactive demos on HuggingFace Spaces.

Results on TextVQA, DocVQA, OCRBench, OpenCompass MultiModal Avg, MME, MMBench, MMMU, MathVista, LLaVA Bench, RealWorld QA, Object HalBench.

We deploy MiniCPM-Llama3-V 2.5 on end devices. The demo video is the raw screen recording on a Xiaomi 14 Pro without editing.

**Demo**

Click here to try out the demo of MiniCPM-Llama3-V 2.5.

**Usage**

Inference using Hugging Face transformers on NVIDIA GPUs. Requirements tested on Python 3.10. A hedged usage sketch follows at the end of this entry.

**Inference with llama.cpp**

MiniCPM-Llama3-V 2.5 can run with llama.cpp now! See our fork of llama.cpp for more details.

**Int4 quantized version**

Download the int4 quantized version for lower GPU memory (8GB) usage: MiniCPM-Llama3-V-2_5-int4.

**MiniCPM-V 2.0**

Please see the info about MiniCPM-V 2.0 here.

**License**

The code in this repo is released under the Apache-2.0 License. The usage of MiniCPM-V series model weights must strictly follow the MiniCPM Model License.md. The models and weights of MiniCPM are completely free for academic research. After filling out a "questionnaire" for registration, they are also available for free commercial use.

**Statement**

As an LLM, MiniCPM-Llama3-V 2.5 generates content by learning from a large amount of texts, but it cannot comprehend, express personal opinions or make value judgments. Anything generated by MiniCPM-Llama3-V 2.5 does not represent the views and positions of the model developers. We will not be liable for any problems arising from the use of the MiniCPM-V open-source model, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or improper utilization of the model.

👏 Welcome to explore key techniques of MiniCPM-V 2.6 and other multimodal projects of our team. If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
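Below is a minimal transformers inference sketch for MiniCPM-Llama3-V 2.5. It assumes the repo's remote `chat` helper (loaded via `trust_remote_code=True`) and a local image file `example.jpg`; treat the sampling arguments as illustrative rather than canonical.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load model and tokenizer; trust_remote_code pulls in the custom chat() helper.
model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5',
                                  trust_remote_code=True, torch_dtype=torch.float16)
model = model.to(device='cuda').eval()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)

image = Image.open('example.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': 'What is in the image?'}]

# Single-turn chat over one image; sampling settings are illustrative.
res = model.chat(image=image, msgs=msgs, tokenizer=tokenizer, sampling=True, temperature=0.7)
print(res)
```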
MiniCPM-Embedding
MiniCPM-Embedding is a bilingual and cross-lingual text embedding model jointly developed by ModelBest Inc., the Tsinghua University NLP lab (THUNLP), and the Northeastern University Information Retrieval group (NEUIR), featuring:

- Exceptional Chinese and English retrieval capabilities.
- Outstanding cross-lingual retrieval capabilities between Chinese and English.

MiniCPM-Embedding is trained based on MiniCPM-2B-sft-bf16 and incorporates bidirectional attention and Weighted Mean Pooling [1] in its architecture. The model underwent multi-stage training using approximately 6 million training examples, including open-source, synthetic, and proprietary data.

We also invite you to explore the RAG toolkit series:

- Retrieval Model: MiniCPM-Embedding
- Re-ranking Model: MiniCPM-Reranker
- LoRA Plugin for RAG scenarios: MiniCPM3-RAG-LoRA

[1] Muennighoff, N. (2022). SGPT: GPT sentence embeddings for semantic search. arXiv preprint arXiv:2202.08904.

- Model Size: 2.4B
- Embedding Dimension: 2304
- Max Input Tokens: 512

MiniCPM-Embedding supports query-side instructions and also works in an instruction-free mode; a hedged usage sketch follows this entry. When running evaluation on BEIR and C-MTEB/Retrieval, we use the instructions in `instructions.json`; other evaluations use no instructions. On the document side, we directly use the bare document as input.

| Model | C-MTEB/Retrieval (NDCG@10) | BEIR (NDCG@10) |
|---|---|---|
| bge-large-zh-v1.5 | 70.46 | - |
| gte-large-zh | 72.49 | - |
| ZhihuiLLMEmbedding | 76.74 | - |
| bge-large-en-v1.5 | - | 54.29 |
| gte-en-large-v1.5 | - | 57.91 |
| NV-Retriever-v1 | - | 60.9 |
| bge-en-icl | - | 62.16 |
| NV-Embed-v2 | - | 62.65 |
| me5-large | 63.66 | 51.43 |
| bge-m3 (Dense) | 65.43 | 48.82 |
| gte-multilingual-base (Dense) | 71.95 | 51.08 |
| gte-Qwen2-1.5B-instruct | 71.86 | 58.29 |
| gte-Qwen2-7B-instruct | 76.03 | 60.25 |
| bge-multilingual-gemma2 | 73.73 | 59.24 |
| MiniCPM-Embedding | 76.76 | 58.56 |
| MiniCPM-Embedding+MiniCPM-Reranker | 77.08 | 61.61 |

| Model | MKQA En-ZhCN (Recall@20) | NeuCLIR22 (NDCG@10) | NeuCLIR23 (NDCG@10) |
|---|---|---|---|
| me5-large | 44.3 | 9.01 | 25.33 |
| bge-m3 (Dense) | 66.4 | 30.49 | 41.09 |
| gte-multilingual-base (Dense) | 68.2 | 39.46 | 45.86 |
| gte-Qwen2-1.5B-instruct | 68.52 | 49.11 | 45.05 |
| gte-Qwen2-7B-instruct | 68.27 | 49.14 | 49.6 |
| MiniCPM-Embedding | 72.95 | 52.65 | 49.95 |
| MiniCPM-Embedding+MiniCPM-Reranker | 74.33 | 53.21 | 54.12 |

**License**

The code in this repo is released under the Apache-2.0 License. The usage of MiniCPM-Embedding model weights must strictly follow the MiniCPM Model License.md. The models and weights of MiniCPM-Embedding are completely free for academic research. After filling out a "questionnaire" for registration, MiniCPM-Embedding weights are also available for free commercial use.
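A hedged encoding sketch. The SGPT-style weighted mean pooling below is a generic re-implementation (not necessarily identical to the repo's remote code), and the instruction template shown is illustrative; see `instructions.json` in the repo for the evaluated instructions.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "openbmb/MiniCPM-Embedding"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).eval()

def weighted_mean_pool(hidden, mask):
    # SGPT-style weighted mean pooling: later tokens get linearly larger weights.
    weights = torch.arange(1, hidden.size(1) + 1, device=hidden.device, dtype=hidden.dtype)
    weights = weights.unsqueeze(0) * mask.to(hidden.dtype)  # zero out padding positions
    weights = weights / weights.sum(dim=1, keepdim=True)    # normalize per sequence
    return (hidden * weights.unsqueeze(-1)).sum(dim=1)

@torch.no_grad()
def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    hidden = model(**batch).last_hidden_state
    return F.normalize(weighted_mean_pool(hidden, batch["attention_mask"]), dim=-1)

# Illustrative instruction template; instruction-free mode passes the bare query instead.
query = "Instruction: Given a web search query, retrieve relevant passages. Query: what is MiniCPM?"
docs = ["MiniCPM is an end-side LLM jointly developed by ModelBest Inc. and TsinghuaNLP."]
print(embed([query]) @ embed(docs).T)  # cosine-similarity scores
```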
MiniCPM-V-2
**News**

- [2025.01.14] 🔥 We open-source MiniCPM-o 2.6, with significant performance improvements over MiniCPM-V 2.6, and support for real-time speech-to-speech conversation and multimodal live streaming. Try it now.
- [2024.08.06] 🔥 We open-source MiniCPM-V 2.6, which outperforms GPT-4V on single-image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and supports real-time video understanding on iPad.
- [2024.05.20] 🔥 The GPT-4V-level multimodal model MiniCPM-Llama3-V 2.5 is out.
- [2024.04.23] MiniCPM-V 2.0 supports vLLM now!
- [2024.04.18] We created a Hugging Face Space to host the demo of MiniCPM-V 2.0 here!
- [2024.04.17] MiniCPM-V 2.0 now supports deploying a WebUI demo!
- [2024.04.15] MiniCPM-V 2.0 supports fine-tuning with the SWIFT framework!
- [2024.04.12] We open-source MiniCPM-V 2.0, which achieves performance comparable to Gemini Pro in understanding scene text and outperforms the strong Qwen-VL-Chat 9.6B and Yi-VL 34B on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. Click here to view the MiniCPM-V 2.0 technical blog.

MiniCPM-V 2.8B is a strong multimodal large language model for efficient end-side deployment. The model is built on SigLIP-400M and MiniCPM-2.4B, connected by a perceiver resampler. Our latest version, MiniCPM-V 2.0, has several notable features:

- MiniCPM-V 2.0 achieves state-of-the-art performance on multiple benchmarks (including OCRBench, TextVQA, MME, MMB, MathVista, etc.) among models under 7B parameters. It even outperforms the strong Qwen-VL-Chat 9.6B, CogVLM-Chat 17.4B, and Yi-VL 34B on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. Notably, MiniCPM-V 2.0 shows strong OCR capability, achieving performance comparable to Gemini Pro in scene-text understanding and state-of-the-art performance on OCRBench among open-source models.
- LMMs are known to suffer from hallucination, often generating text not factually grounded in images. MiniCPM-V 2.0 is the first end-side LMM aligned via multimodal RLHF for trustworthy behavior (using the recent RLHF-V [CVPR'24] series technique). This allows the model to match GPT-4V in preventing hallucinations on Object HalBench.
- MiniCPM-V 2.0 can accept images of up to 1.8 million pixels (e.g., 1344x1344) at any aspect ratio. This enables better perception of fine-grained visual information such as small objects and optical characters, achieved via a recent technique from LLaVA-UHD.
- MiniCPM-V 2.0 can be efficiently deployed on most GPU cards and personal computers, and even on end devices such as mobile phones. For visual encoding, we compress the image representations into far fewer tokens via a perceiver resampler. This allows MiniCPM-V 2.0 to operate with favorable memory cost and speed during inference, even when dealing with high-resolution images.
- MiniCPM-V 2.0 supports strong bilingual multimodal capabilities in both English and Chinese. This is enabled by generalizing multimodal capabilities across languages, a technique from VisCPM [ICLR'24].

Results on TextVQA, DocVQA, OCRBench, OpenCompass, MME, MMBench, MMMU, MathVista, LLaVA Bench, Object HalBench.

We deploy MiniCPM-V 2.0 on end devices. The demo video is the raw screen recording on a Xiaomi 14 Pro without editing.

**Demo**

Click here to try out the demo of MiniCPM-V 2.0.

**Deployment on Mobile Phone**

MiniCPM-V 2.0 can be deployed on mobile phones with Android and Harmony operating systems. 🚀 Try it out here.
Click to see how to perform inference with vLLM. Because our pull request to vLLM is still awaiting review, we forked the repository to build and test our vLLM demo. Here are the steps:

**Usage**

Inference using Hugging Face transformers on NVIDIA GPUs or Macs with MPS (Apple silicon or AMD GPUs). Requirements tested on Python 3.10. A hedged usage sketch follows at the end of this entry.

**MiniCPM-V 1.0**

Please see the info about MiniCPM-V 1.0 here.

**License**

The code in this repo is released under the Apache-2.0 License. The usage of MiniCPM-V series model weights must strictly follow the MiniCPM Model License.md. The models and weights of MiniCPM are completely free for academic research. After filling out a "questionnaire" for registration, they are also available for free commercial use.

**Statement**

As an LLM, MiniCPM-V 2.0 generates content by learning from a large amount of texts, but it cannot comprehend, express personal opinions or make value judgments. Anything generated by MiniCPM-V 2.0 does not represent the views and positions of the model developers. We will not be liable for any problems arising from the use of the MiniCPM-V open-source model, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or improper utilization of the model.

If you find our work helpful, please consider citing the following papers.
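A minimal transformers sketch for MiniCPM-V 2.0, assuming the repo's remote `chat` helper; the exact return signature and sampling arguments are illustrative and may differ from the repo's current remote code.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# bfloat16 on NVIDIA GPUs; for Macs with MPS, float16 and device='mps' are commonly used instead.
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True,
                                  torch_dtype=torch.bfloat16)
model = model.to(device='cuda', dtype=torch.bfloat16).eval()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)

image = Image.open('example.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': 'What is in the image?'}]

# chat() returns the answer plus conversation context in this repo's remote code (assumed).
res, context, _ = model.chat(image=image, msgs=msgs, context=None,
                             tokenizer=tokenizer, sampling=True, temperature=0.7)
print(res)
```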
MiniCPM-2B-sft-bf16
MiniCPM Technical Report | OmniLMM Multi-modal Model | CPM-C ~100B Model Trial

MiniCPM is a series of end-side LLMs jointly open-sourced by ModelBest Inc. and TsinghuaNLP, with only 2.4B non-embedding parameters in the main language model, MiniCPM-2B.

- After SFT, MiniCPM performs on par with Mistral-7B on public comprehensive benchmarks (with better Chinese, mathematics, and coding ability), and its overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc.
- After DPO, MiniCPM-2B also outperforms many representative open-source models, such as Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, and Zephyr-7B-alpha, on MTBench, currently the evaluation set closest to user experience.
- MiniCPM-V, an end-side multimodal model built on MiniCPM-2B, achieves the best overall performance among models of the same scale, surpassing existing multimodal models built on Phi-2 and matching or even exceeding 9.6B Qwen-VL-Chat on some benchmarks.
- After Int4 quantization, MiniCPM can be deployed and run on mobile phones, with streaming output slightly faster than human speech. MiniCPM-V is also the first multimodal model to run on mobile phones.
- A single 1080/2080 GPU suffices for parameter-efficient fine-tuning and a single 3090/4090 for full-parameter fine-tuning; a single machine can continuously train MiniCPM, keeping the cost of secondary development low.

We fully open-source the MiniCPM-2B model parameters for academic research and limited commercial use, as well as all checkpoints during training and most non-proprietary data for research on model mechanisms.

- SFT and DPO versions based on MiniCPM-2B and human preference data: MiniCPM-2B-SFT/DPO.
- The multimodal model MiniCPM-V based on MiniCPM-2B, which outperforms multimodal models of the same parameter scale built on Phi-2.
- The Int4 quantized versions MiniCPM-2B-SFT/DPO-Int4 based on MiniCPM-2B-SFT/DPO.
- Mobile phone applications based on MLC-LLM and LLMFarm; both the text and multimodal models can run inference on smartphones.

Notice: We discovered that generation quality with Hugging Face is slightly lower than with vLLM, so benchmarking with vLLM is recommended. We are investigating the cause.

Limitations:

- Due to limitations in model size, the model may suffer from hallucination. Since the DPO model tends to generate longer responses, hallucinations are more likely to occur. We will continue to iterate on and improve the MiniCPM models.
- To ensure the model's universality for academic research purposes, we did not conduct any identity training on the model. Meanwhile, since we use the ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models.
- Due to the limitation of model size, the model's output is greatly influenced by prompts, which may result in inconsistent results across multiple attempts.
- Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory.

| HuggingFace | ModelScope | WiseModel |
|-------------|------------|-----------|
| sft-bf16 | sft-bf16 | sft-bf16 |
| sft-fp32 | sft-fp32 | sft-fp32 |
| dpo-bf16 | dpo-bf16 | dpo-bf16 |
| dpo-fp16 | dpo-fp16 | dpo-fp16 |
| dpo-fp32 | dpo-fp32 | dpo-fp32 |

Run the following code after installing `transformers>=4.36.0` and `accelerate` (a hedged sketch follows at the end of this entry). Warning: it is necessary to specify the model's data type explicitly in `from_pretrained`, otherwise large calculation errors will occur.

**License**

The code in this repository is released under the Apache-2.0 License. The usage of MiniCPM model weights must strictly follow the General Model License (GML). The models and weights of MiniCPM are completely free for academic research. If you intend to use the model for commercial purposes, please reach out to [email protected] to obtain written authorization; free commercial use is also permitted after registration.

**Statement**

As a language model, MiniCPM generates content by learning from a vast amount of text. However, it does not possess the ability to comprehend or express personal opinions or value judgments. Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. We will not be liable for any problems arising from the use of the MiniCPM open-source models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or improper utilization of the models.

**Citation**

Please cite our technical report if you find our work valuable.
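A minimal generation sketch for MiniCPM-2B-sft-bf16. Per the warning above, `torch_dtype` is set explicitly; the `chat` helper comes from the repo's remote code, and the sampling arguments are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM-2B-sft-bf16"
tokenizer = AutoTokenizer.from_pretrained(path)
# Specify torch_dtype explicitly, otherwise large calculation errors may occur.
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16,
                                             device_map="cuda", trust_remote_code=True)

# chat() is provided by the repo's custom modeling code (loaded via trust_remote_code=True).
response, history = model.chat(tokenizer, "Which is the highest mountain in Shandong Province?",
                               temperature=0.8, top_p=0.8)
print(response)
```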
MiniCPM-o-4_5-gguf
MiniCPM-V-4_5-gguf
MiniCPM3-4B
MiniCPM-V
MiniCPM-V-4-AWQ
MiniCPM-o-4_5-awq
MiniCPM4-0.5B
**What's New**

- [2025.06.06] The MiniCPM4 series is released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥

**MiniCPM4 Series**

The MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, achieving this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.

- MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens.
- MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens.

Note: In vLLM's chat API, `add_special_tokens` is `False` by default. This means important special tokens, such as the beginning-of-sequence (BOS) token, will not be added automatically. To ensure the input prompt is correctly formatted for the model, you should explicitly set `extra_body={"add_special_tokens": True}`. Then you can use the chat interface by running the client sketch at the end of this entry.

**Evaluation Results**

On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing than similar-size models on long-text tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, MiniCPM4 achieves approximately a 7x decoding speed improvement over Qwen3-8B.

**Comprehensive Evaluation.** MiniCPM4 launches end-side versions at 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories.

**Long Text Evaluation.** MiniCPM4 is pre-trained on 32K-long texts and achieves length extension through YaRN. In the 128K needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance.

**Statement**

- As a language model, MiniCPM generates content by learning from a vast amount of text.
- However, it does not possess the ability to comprehend or express personal opinions or value judgments.
- Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
- Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.

**LICENSE**

- This repository and the MiniCPM models are released under the Apache-2.0 License.

**Citation**

- Please cite our paper if you find our work valuable.
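For example, with a vLLM OpenAI-compatible server already running (the endpoint below is a placeholder), the note above translates to passing `extra_body` through the official `openai` client:

```python
from openai import OpenAI

# Assumes a vLLM server serving openbmb/MiniCPM4-0.5B at this address.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openbmb/MiniCPM4-0.5B",
    messages=[{"role": "user", "content": "Write an article about Artificial Intelligence."}],
    temperature=0.7,
    # vLLM-specific: make sure the BOS token is prepended (False by default in vLLM's chat API).
    extra_body={"add_special_tokens": True},
)
print(response.choices[0].message.content)
```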
AgentCPM-Report
MiniCPM-V-2_6-gguf
MiniCPM-V-4_5-AWQ
A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone

MiniCPM-V 4.5 is the latest and most capable model in the MiniCPM-V series. The model is built on Qwen3-8B and SigLIP2-400M with a total of 8B parameters. It exhibits a significant performance improvement over previous MiniCPM-V and MiniCPM-o models, and introduces new useful features. Notable features of MiniCPM-V 4.5 include:

- 🔥 **State-of-the-art Vision-Language Capability.** MiniCPM-V 4.5 achieves an average score of 77.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-latest and Gemini-2.0 Pro, and strong open-source models like Qwen2.5-VL 72B, in vision-language capabilities, making it the most performant MLLM under 30B parameters.
- 🎬 **Efficient High-FPS and Long Video Understanding.** Powered by a new unified 3D-Resampler over images and videos, MiniCPM-V 4.5 achieves a 96x compression rate for video tokens: six 448x448 video frames can be jointly compressed into 64 video tokens (normally 1,536 tokens in most MLLMs). This means the model can perceive significantly more video frames without increasing LLM inference cost, efficiently bringing state-of-the-art high-FPS (up to 10 FPS) video understanding and long video understanding on Video-MME, LVBench, MLVU, MotionBench, FavorBench, etc.
- ⚙️ **Controllable Hybrid Fast/Deep Thinking.** MiniCPM-V 4.5 supports both fast thinking, for efficient frequent usage with competitive performance, and deep thinking, for more complex problem solving. To cover the efficiency/performance trade-offs of different user scenarios, the fast/deep thinking mode can be switched in a highly controllable fashion.
- 💪 **Strong OCR, Document Parsing and Others.** Based on the LLaVA-UHD architecture, MiniCPM-V 4.5 can process high-resolution images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), using 4x fewer visual tokens than most MLLMs. The model achieves leading performance on OCRBench, surpassing proprietary models such as GPT-4o-latest and Gemini 2.5. It also achieves state-of-the-art PDF document parsing on OmniDocBench among general MLLMs. Based on the latest RLAIF-V and VisCPM techniques, it features trustworthy behaviors, outperforming GPT-4o-latest on MMHal-Bench, and supports multilingual capabilities in more than 30 languages.
- 💫 **Easy Usage.** MiniCPM-V 4.5 can be easily used in various ways: (1) llama.cpp and ollama support for efficient CPU inference on local devices, (2) int4, GGUF and AWQ format quantized models in 16 sizes, (3) SGLang and vLLM support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with Transformers and LLaMA-Factory, (5) a quick local WebUI demo, (6) an optimized local iOS app for iPhone and iPad, and (7) an online web demo. See our Cookbook for full usage!

- **Architecture: Unified 3D-Resampler for High-density Video Compression.** MiniCPM-V 4.5 introduces a 3D-Resampler that overcomes the performance-efficiency trade-off in video understanding. By grouping and jointly compressing up to 6 consecutive video frames into just 64 tokens (the same token count used for a single image in the MiniCPM-V series), MiniCPM-V 4.5 achieves a 96x compression rate for video tokens. This allows the model to process more video frames without additional LLM computational cost, enabling high-FPS video and long video understanding.
The architecture supports unified encoding for images, multi-image inputs, and videos, ensuring seamless capability and knowledge transfer.

- **Pre-training: Unified Learning for OCR and Knowledge from Documents.** Existing MLLMs learn OCR capability and knowledge from documents with isolated training approaches. We observe that the essential difference between these two approaches is the visibility of the text in images. By dynamically corrupting text regions in documents with varying noise levels and asking the model to reconstruct the text, the model learns to adaptively switch between accurate text recognition (when text is visible) and multimodal context-based knowledge reasoning (when text is heavily obscured). This eliminates reliance on error-prone document parsers when learning knowledge from documents, and prevents hallucinations from over-augmented OCR data, resulting in top-tier OCR and multimodal knowledge performance with minimal engineering overhead.
- **Post-training: Hybrid Fast/Deep Thinking with Multimodal RL.** MiniCPM-V 4.5 offers a balanced reasoning experience through two switchable modes: fast thinking for efficient daily use and deep thinking for complex tasks. Using a new hybrid reinforcement learning method, the model jointly optimizes both modes, significantly enhancing fast-mode performance without compromising deep-mode capability. Incorporating RLPR and RLAIF-V, it generalizes robust reasoning skills from broad multimodal data while effectively reducing hallucinations.

Efficiency comparison (Model | Size | Avg Score ↑ | Total Inference Time ↓ | GPU Mem ↓): both Video-MME and OpenCompass were evaluated using 8×A100 GPUs for inference. The reported Video-MME inference time includes full model-side computation and excludes the external cost of video frame extraction (which depends on the specific frame-extraction tool) for a fair comparison.

We deploy MiniCPM-V 4.5 on an iPad M4 with our iOS demo. The demo video is the raw screen recording without editing.

| Category | Framework | Cookbook Link | Upstream PR | Supported since (branch) | Supported since (release) |
|---|---|---|---|---|---|
| Edge (on-device) | llama.cpp | llama.cpp Doc | #15575 (2025-08-26) | master (2025-08-26) | b6282 |
| Edge (on-device) | Ollama | Ollama Doc | #12078 (2025-08-26) | Merging | Waiting for official release |
| Serving (cloud) | vLLM | vLLM Doc | #23586 (2025-08-26) | main (2025-08-27) | v0.10.2 |
| Serving (cloud) | SGLang | SGLang Doc | #9610 (2025-08-26) | Merging | Waiting for official release |
| Fine-tuning | LLaMA-Factory | LLaMA-Factory Doc | #9022 (2025-08-26) | main (2025-08-26) | Waiting for official release |

> Note: If you'd like us to prioritize support for another open-source framework, please let us know via this short form.

If you wish to enable thinking mode, provide the argument `enable_thinking=True` to the chat function.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True,
                                  attn_implementation='sdpa', torch_dtype=torch.bfloat16)  # sdpa or flash_attention_2
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True)

image1 = Image.open('image1.jpg').convert('RGB')
image2 = Image.open('image2.jpg').convert('RGB')
question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'

msgs = [{'role': 'user', 'content': [image1, image2, question]}]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer
)
print(answer)
```

In-context few-shot learning:

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True,
                                  attn_implementation='sdpa', torch_dtype=torch.bfloat16)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True)

question = "production date"
image1 = Image.open('example1.jpg').convert('RGB')
answer1 = "2023.08.04"
image2 = Image.open('example2.jpg').convert('RGB')
answer2 = "2007.04.24"
image_test = Image.open('test.jpg').convert('RGB')

msgs = [
    {'role': 'user', 'content': [image1, question]},
    {'role': 'assistant', 'content': [answer1]},
    {'role': 'user', 'content': [image2, question]},
    {'role': 'assistant', 'content': [answer2]},
    {'role': 'user', 'content': [image_test, question]}
]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer
)
print(answer)
```

```bibtex
@misc{yu2025minicpmv45cookingefficient,
      title={MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe},
      author={Tianyu Yu and Zefan Wang and Chongyi Wang and Fuwei Huang and Wenshuo Ma and Zhihui He and Tianchi Cai and Weize Chen and Yuxiang Huang and Yuanqian Zhao and Bokai Xu and Junbo Cui and Yingjing Xu and Liqing Ruan and Luoyuan Zhang and Hanyu Liu and Jingkun Tang and Hongyuan Liu and Qining Guo and Wenhao Hu and Bingxiang He and Jie Zhou and Jie Cai and Ji Qi and Zonghao Guo and Chi Chen and Guoyang Zeng and Yuxuan Li and Ganqu Cui and Ning Ding and Xu Han and Yuan Yao and Zhiyuan Liu and Maosong Sun},
      year={2025},
      eprint={2509.18154},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.18154},
}

@article{yao2024minicpm,
      title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
      author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
      journal={Nat Commun 16, 5509 (2025)},
      year={2025}
}
```
MiniCPM-V-2_6-int4
- [2025.01.14] 🔥🔥 We open-source MiniCPM-o 2.6, with significant performance improvements over MiniCPM-V 2.6, and support for real-time speech-to-speech conversation and multimodal live streaming. Try it now.

**MiniCPM-V 2.6 int4**

This is the int4 quantized version of MiniCPM-V 2.6. Running the int4 version uses lower GPU memory (about 7GB).

**Usage**

Inference using Hugging Face transformers on NVIDIA GPUs. Requirements tested on Python 3.10. A hedged usage sketch follows below.
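A minimal sketch for the int4 checkpoint, assuming the required quantization dependencies are installed and the repo's remote `chat` helper is used; no extra quantization flags should be needed since the stored weights are already int4.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# The checkpoint is already quantized; load it as-is with the custom modeling code.
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6-int4', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6-int4', trust_remote_code=True)
model.eval()

image = Image.open('example.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': [image, 'What is in this image?']}]

res = model.chat(image=None, msgs=msgs, tokenizer=tokenizer)
print(res)
```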
VoxCPM1.5
VoxCPM-0.5B
🎙️ VoxCPM: Tokenizer-Free TTS for Context-Aware Speech Generation and True-to-Life Voice Cloning
MiniCPM-o-2_6-gguf
MiniCPM-1B-sft-bf16
MiniCPM-V-4_5-int4
MiniCPM-Llama3-V-2_5-gguf
VisRAG-Ret
MiniCPM-Embedding-Light
MiniCPM-V-4-gguf
A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone

MiniCPM-V 4.0 is the latest efficient model in the MiniCPM-V series. The model is built on SigLIP2-400M and MiniCPM4-3B with a total of 4.1B parameters. It inherits the strong single-image, multi-image and video understanding performance of MiniCPM-V 2.6 with largely improved efficiency. Notable features of MiniCPM-V 4.0 include:

- 🔥 **Leading Visual Capability.** With only 4.1B parameters, MiniCPM-V 4.0 achieves an average score of 69.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks, outperforming GPT-4.1-mini-20250414, MiniCPM-V 2.6 (8.1B params, OpenCompass 65.2) and Qwen2.5-VL-3B-Instruct (3.8B params, OpenCompass 64.5). It also shows good performance in multi-image and video understanding.
- 🚀 **Superior Efficiency.** Designed for on-device deployment, MiniCPM-V 4.0 runs smoothly on end devices. For example, it delivers a first-token delay of under 2 s and a decoding speed of more than 17 tokens/s on an iPhone 16 Pro Max, without heating problems. It also shows superior throughput under concurrent requests.
- 💫 **Easy Usage.** MiniCPM-V 4.0 can be easily used in various ways, including llama.cpp, Ollama, vLLM, SGLang, LLaMA-Factory and a local web demo. We also open-source an iOS app that runs on iPhone and iPad. Get started easily with our well-structured Cookbook, featuring detailed instructions and practical examples.

| Model | Size | OpenCompass | OCRBench | MathVista | HallusionBench | MMMU | MMVet | MMBench V1.1 | MMStar | AI2D |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4v-20240409 | - | 63.5 | 656 | 55.2 | 43.9 | 61.7 | 67.5 | 79.8 | 56.0 | 78.6 |
| Gemini-1.5-Pro | - | 64.5 | 754 | 58.3 | 45.6 | 60.6 | 64.0 | 73.9 | 59.1 | 79.1 |
| GPT-4.1-mini-20250414 | - | 68.9 | 840 | 70.9 | 49.3 | 55.0 | 74.3 | 80.9 | 60.9 | 76.0 |
| Claude 3.5 Sonnet-20241022 | - | 70.6 | 798 | 65.3 | 55.5 | 66.4 | 70.1 | 81.7 | 65.1 | 81.2 |
| Qwen2.5-VL-3B-Instruct | 3.8B | 64.5 | 828 | 61.2 | 46.6 | 51.2 | 60.0 | 76.8 | 56.3 | 81.4 |
| InternVL2.5-4B | 3.7B | 65.1 | 820 | 60.8 | 46.6 | 51.8 | 61.5 | 78.2 | 58.7 | 81.4 |
| Qwen2.5-VL-7B-Instruct | 8.3B | 70.9 | 888 | 68.1 | 51.9 | 58.0 | 69.7 | 82.2 | 64.1 | 84.3 |
| InternVL2.5-8B | 8.1B | 68.1 | 821 | 64.5 | 49.0 | 56.2 | 62.8 | 82.5 | 63.2 | 84.6 |
| MiniCPM-V-2.6 | 8.1B | 65.2 | 852 | 60.8 | 48.1 | 49.8 | 60.0 | 78.0 | 57.5 | 82.1 |
| MiniCPM-o-2.6 | 8.7B | 70.2 | 889 | 73.3 | 51.1 | 50.9 | 67.2 | 80.6 | 63.3 | 86.1 |
| MiniCPM-V-4.0 | 4.1B | 69.0 | 894 | 66.9 | 50.8 | 51.2 | 68.0 | 79.7 | 62.8 | 82.9 |

Click to view single image results on ChartQA, MME, RealWorldQA, TextVQA, DocVQA, MathVision, DynaMath, WeMath, Object HalBench and MM HalBench.

| Model | Size | ChartQA | MME | RealWorldQA | TextVQA | DocVQA | MathVision | DynaMath | WeMath | Obj Hal (resp.↓) | Obj Hal (ment.↓) | MM Hal (score↑) | MM Hal (hall.↓) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4v-20240409 | - | 78.5 | 1927 | 61.4 | 78.0 | 88.4 | - | - | - | - | - | - | - |
| Gemini-1.5-Pro | - | 87.2 | - | 67.5 | 78.8 | 93.1 | 41.0 | 31.5 | 50.5 | - | - | - | - |
| GPT-4.1-mini-20250414 | - | - | - | - | - | - | 45.3 | 47.7 | - | - | - | - | - |
| Claude 3.5 Sonnet-20241022 | - | 90.8 | - | 60.1 | 74.1 | 95.2 | 35.6 | 35.7 | 44.0 | - | - | - | - |
| Qwen2.5-VL-3B-Instruct | 3.8B | 84.0 | 2157 | 65.4 | 79.3 | 93.9 | 21.9 | 13.2 | 22.9 | 18.3 | 10.8 | 3.9 | 33.3 |
| InternVL2.5-4B | 3.7B | 84.0 | 2338 | 64.3 | 76.8 | 91.6 | 18.4 | 15.2 | 21.2 | 13.7 | 8.7 | 3.2 | 46.5 |
| Qwen2.5-VL-7B-Instruct | 8.3B | 87.3 | 2347 | 68.5 | 84.9 | 95.7 | 25.4 | 21.8 | 36.2 | 13.3 | 7.9 | 4.1 | 31.6 |
| InternVL2.5-8B | 8.1B | 84.8 | 2344 | 70.1 | 79.1 | 93.0 | 17.0 | 9.4 | 23.5 | 18.3 | 11.6 | 3.6 | 37.2 |
| MiniCPM-V-2.6 | 8.1B | 79.4 | 2348 | 65.0 | 80.1 | 90.8 | 17.5 | 9.0 | 20.4 | 7.3 | 4.7 | 4.0 | 29.9 |
| MiniCPM-o-2.6 | 8.7B | 86.9 | 2372 | 68.1 | 82.0 | 93.5 | 21.7 | 10.4 | 25.2 | 6.3 | 3.4 | 4.1 | 31.3 |
| MiniCPM-V-4.0 | 4.1B | 84.4 | 2298 | 68.5 | 80.8 | 92.9 | 20.7 | 14.2 | 32.7 | 6.3 | 3.5 | 4.1 | 29.2 |

Click to view multi-image and video understanding results on Mantis, Blink and Video-MME.

**License**

The code in this repo is released under the Apache-2.0 License.
The usage of MiniCPM-V series model weights must strictly follow the MiniCPM Model License.md. The models and weights of MiniCPM are completely free for academic research. After filling out a "questionnaire" for registration, MiniCPM-V 2.6 weights are also available for free commercial use.

**Statement**

As an LMM, MiniCPM-V 4.0 generates content by learning from a large amount of multimodal corpora, but it cannot comprehend, express personal opinions or make value judgments. Anything generated by MiniCPM-V 4.0 does not represent the views and positions of the model developers. We will not be liable for any problems arising from the use of the MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or improper utilization of the model.

👏 Welcome to explore key techniques of MiniCPM-V 2.6 and other multimodal projects of our team. If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
MiniCPM3-4B-GGUF
MiniCPM4-8B
**What's New**

- [2025.06.06] The MiniCPM4 series is released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥

**MiniCPM4 Series**

The MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, achieving this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.

- MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens.

Note: In vLLM's chat API, `add_special_tokens` is `False` by default. This means important special tokens, such as the beginning-of-sequence (BOS) token, will not be added automatically. To ensure the input prompt is correctly formatted for the model, you should explicitly set `extra_body={"add_special_tokens": True}`. Then you can use the chat interface with the same OpenAI-compatible client shown in the MiniCPM4-0.5B entry above. A plain transformers generation sketch also follows at the end of this entry.

**Evaluation Results**

On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing than similar-size models on long-text tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, MiniCPM4 achieves approximately a 7x decoding speed improvement over Qwen3-8B.

**Comprehensive Evaluation.** MiniCPM4 launches end-side versions at 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories.

**Long Text Evaluation.** MiniCPM4 is pre-trained on 32K-long texts and achieves length extension through YaRN. In the 128K needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance.

**Statement**

- As a language model, MiniCPM generates content by learning from a vast amount of text.
- However, it does not possess the ability to comprehend or express personal opinions or value judgments.
- Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
- Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.

**LICENSE**

- This repository and the MiniCPM models are released under the Apache-2.0 License.

**Citation**

- Please cite our paper if you find our work valuable.
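A minimal plain-transformers sketch for MiniCPM4-8B; generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM4-8B"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16,
                                             device_map="cuda", trust_remote_code=True)

messages = [{"role": "user", "content": "Write an article about Artificial Intelligence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```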
MiniCPM-MoE-8x2B
AgentCPM-Explore
MiniCPM4.1-8B-GPTQ
**What's New**

- [2025.09.05] The MiniCPM4.1 series is released! This series is a hybrid reasoning model, which can be used in both deep reasoning mode and non-reasoning mode. 🔥🔥🔥
- [2025.06.06] The MiniCPM4 series is released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥

**MiniCPM4 and MiniCPM4.1 Series**

The MiniCPM4 and MiniCPM4.1 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, achieving this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.

- MiniCPM4.1-8B: The latest version of MiniCPM4, with 8B parameters, supporting fusion thinking.
- MiniCPM4.1-8B-GPTQ: MiniCPM4.1-8B in GPTQ format.

Click to expand all MiniCPM4 series models

- MiniCPM4-8B: The flagship model with 8B parameters, trained on 8T tokens
- MiniCPM4-0.5B: Lightweight version with 0.5B parameters, trained on 1T tokens
- MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference
- MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head with QAT for FRSpec, integrating speculation and quantization for ultra acceleration
- MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format for speculative inference
- MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format
- BitCPM4-0.5B: Extreme ternary quantization of MiniCPM4-0.5B, achieving 90% bit-width reduction
- BitCPM4-1B: Extreme ternary quantization of MiniCPM3-1B, achieving 90% bit-width reduction
- MiniCPM4-Survey: Generates trustworthy, long-form survey papers from user queries
- MiniCPM4-MCP: Integrates MCP tools to autonomously satisfy user requirements

**Introduction**

MiniCPM4 and MiniCPM4.1 are extremely efficient edge-side large models that have undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture:
  - InfLLM v2 -- Trainable Sparse Attention Mechanism: adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K-long text, significantly reducing computational overhead for long texts.
- 🧠 Efficient Learning Algorithms:
  - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: introduces scaling prediction methods for downstream-task performance, enabling more precise model training configuration search.
  - BitCPM -- Ultimate Ternary Quantization: compresses model parameters to ternary values, achieving a 90% reduction in bit width.
  - Efficient Training Engineering Optimization: adopts FP8 low-precision computing combined with a multi-token prediction training strategy.
- 📚 High-Quality Training Data:
  - UltraClean -- High-quality Pre-training Data Filtering and Generation: builds iterative data-cleaning strategies based on efficient data verification; open-sources the high-quality Chinese and English pre-training dataset UltraFineWeb.
  - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: constructs large-scale, high-quality supervised fine-tuning datasets covering knowledge-intensive, reasoning-intensive, instruction-following, long-text understanding, and tool-calling data.
- ⚡ Efficient Inference System:
  - CPM.cu -- Lightweight and Efficient CUDA Inference Framework: integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding.
  - ArkInfer -- Cross-platform Deployment System: supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities.

A hedged sketch of switching between the two reasoning modes follows this list.
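The sketch below assumes the chat template exposes an `enable_thinking` switch, as in other hybrid-reasoning chat templates; check the repo's tokenizer configuration for the authoritative flag name. The GPTQ variant is assumed to share the same template as the base repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM4.1-8B"  # assumed repo id; the GPTQ variant should behave the same
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16,
                                             device_map="cuda", trust_remote_code=True)

messages = [{"role": "user", "content": "How many prime numbers are there below 100?"}]

# enable_thinking=True -> deep reasoning mode; False -> non-reasoning mode (assumed flag name).
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True, enable_thinking=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```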
MiniCPM4.1-8B
What's New - [2025.09.29] The InfLLM-V2 paper is released! We can train a sparse attention model with only 5B long-text tokens. 🔥🔥🔥 - [2025.09.05] The MiniCPM4.1 series is released! This series is a hybrid reasoning model, which can be used in both deep reasoning mode and non-reasoning mode.
MiniCPM-SALA
MiniCPM-Llama3-V-2_5-int4
AgentCPM-Report-GGUF
UltraLM-13b
UltraLM-65b
UltraLM-13b-v2.0
MiniCPM-Reranker
MiniCPM-V-4-int4
MiniCPM-o-2_6-int4
BitCPM4-0.5B
**What's New**

- [2025.06.06] The MiniCPM4 series is released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥

**MiniCPM4 Series**

The MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, achieving this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.

- MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens.
- MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens.
- MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B.
- MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B.
- MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B.
- MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B.
- BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B, compressing model parameters into ternary values and achieving a 90% reduction in bit width. (<-- you are here)
- BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B, compressing model parameters into ternary values and achieving a 90% reduction in bit width.
- MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers.
- MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements.

**Introduction**

BitCPM4 models are ternary quantized models derived from the MiniCPM series through quantization-aware training (QAT), achieving significant improvements in both training efficiency and model parameter efficiency.

- Improvements in the training method:
  - Searching hyperparameters with a wind tunnel on a small model.
  - Using a two-stage training method: training in high precision first and then applying QAT, making the best of the trained high-precision models and significantly reducing the computational resources required for the QAT phase.
- High parameter efficiency:
  - Achieving performance comparable to full-precision models of similar parameter count with a bit width of only 1.58 bits, demonstrating high parameter efficiency.

**Usage**

Inference with Transformers: BitCPM4's parameters are stored in a fake-quantized format, which supports direct inference within the Hugging Face framework (a hedged sketch follows this entry).

**Evaluation Results**

BitCPM4's performance is comparable with that of other full-precision models of the same size.

**Statement**

- As a language model, MiniCPM generates content by learning from a vast amount of text.
- However, it does not possess the ability to comprehend or express personal opinions or value judgments.
- Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
- Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.

**LICENSE**

- This repository and the MiniCPM models are released under the Apache-2.0 License.

**Citation**

- Please cite our paper if you find our work valuable.
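Since the fake-quantized parameters load like a regular checkpoint, a minimal transformers sketch looks like this (generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Fake-quantized weights load through the standard HF code path.
path = "openbmb/BitCPM4-0.5B"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16,
                                             device_map="cuda", trust_remote_code=True)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```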
MiniCPM-2B-sft-fp32
cpm-bee-10b
MiniCPM4.1-8B-GGUF
**What's New**

- [2025.09.05] The MiniCPM4.1 series is released! This series is a hybrid reasoning model, which can be used in both deep reasoning mode and non-reasoning mode. 🔥🔥🔥
- [2025.06.06] The MiniCPM4 series is released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥

**MiniCPM4 and MiniCPM4.1 Series**

The MiniCPM4 and MiniCPM4.1 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, achieving this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.

- MiniCPM4.1-8B: The latest version of MiniCPM4, with 8B parameters, supporting fusion thinking.
- MiniCPM4.1-8B-GPTQ: MiniCPM4.1-8B in GPTQ format.
- MiniCPM4.1-8B-AutoAWQ: MiniCPM4.1-8B in AutoAWQ format.
- MiniCPM-4.1-8B-Marlin: MiniCPM4.1-8B in Marlin format.
- MiniCPM4.1-8B-GGUF: MiniCPM4.1-8B in GGUF format.

Click to expand all MiniCPM4 series models

- MiniCPM4-8B: The flagship model with 8B parameters, trained on 8T tokens
- MiniCPM4-0.5B: Lightweight version with 0.5B parameters, trained on 1T tokens
- MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference
- MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head with QAT for FRSpec, integrating speculation and quantization for ultra acceleration
- MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format for speculative inference
- MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format
- BitCPM4-0.5B: Extreme ternary quantization of MiniCPM4-0.5B, achieving 90% bit-width reduction
- BitCPM4-1B: Extreme ternary quantization of MiniCPM3-1B, achieving 90% bit-width reduction
- MiniCPM4-Survey: Generates trustworthy, long-form survey papers from user queries
- MiniCPM4-MCP: Integrates MCP tools to autonomously satisfy user requirements

**Introduction**

MiniCPM4 and MiniCPM4.1 are extremely efficient edge-side large models that have undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture:
  - InfLLM v2 -- Trainable Sparse Attention Mechanism: adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K-long text, significantly reducing computational overhead for long texts.
- 🧠 Efficient Learning Algorithms:
  - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: introduces scaling prediction methods for downstream-task performance, enabling more precise model training configuration search.
  - BitCPM -- Ultimate Ternary Quantization: compresses model parameters to ternary values, achieving a 90% reduction in bit width.
  - Efficient Training Engineering Optimization: adopts FP8 low-precision computing combined with a multi-token prediction training strategy.
- 📚 High-Quality Training Data:
  - UltraClean -- High-quality Pre-training Data Filtering and Generation: builds iterative data-cleaning strategies based on efficient data verification; open-sources the high-quality Chinese and English pre-training dataset UltraFineWeb.
  - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: constructs large-scale, high-quality supervised fine-tuning datasets covering knowledge-intensive, reasoning-intensive, instruction-following, long-text understanding, and tool-calling data.
- ⚡ Efficient Inference System:
  - CPM.cu -- Lightweight and Efficient CUDA Inference Framework: integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding.
  - ArkInfer -- Cross-platform Deployment System: supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities.
UltraRM-13b
MiniCPM4-8B-Eagle-vLLM
**What's New**

- [2025.06.06] The MiniCPM4 series is released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥

**MiniCPM4 Series**

The MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, achieving this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.

- MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens.
- MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens.
- MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B.
- MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B.
- MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. (<-- you are here)
- MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B.
- BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B, compressing model parameters into ternary values and achieving a 90% reduction in bit width.
- BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B, compressing model parameters into ternary values and achieving a 90% reduction in bit width.
- MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers.
- MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements.

**Introduction**

MiniCPM4 is an extremely efficient edge-side large model that has undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture: - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K-long texts, significantly reducing computational overhead for long texts - 🧠 Efficient Learning Algorithms: - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for downstream-task performance, enabling more precise search over model training configurations - BitCPM -- Ultimate Ternary Quantization: Compresses model parameters into ternary values, achieving a 90% reduction in bit width - Efficient Training Engineering Optimization: Adopts FP8 low-precision computing combined with a multi-token prediction training strategy - 📚 High-Quality Training Data: - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data-cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset Ultra-FineWeb - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale, high-quality supervised fine-tuning datasets covering knowledge-intensive, reasoning-intensive, instruction-following, long-text-understanding, and tool-calling data - ⚡ Efficient Inference System: - FRSpec -- Lightweight Speculative Sampling: Achieves draft-model acceleration through vocabulary pruning of the draft model - ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities Using Eagle Speculative Decoding with vLLM For now, you need to install the latest version of vLLM. Then you can use Eagle speculative decoding to run inference on MiniCPM4-8B with vLLM. Use `speculative_config` to set the draft model (see the sketch at the end of this section). We recommend using CPM.cu for the inference of MiniCPM4. CPM.cu is a CUDA inference framework developed by OpenBMB, which integrates efficient sparse attention, speculative sampling, and quantization techniques, fully leveraging the efficiency advantages of MiniCPM4. You can install CPM.cu by running the following command: MiniCPM4 natively supports context lengths of up to 32,768 tokens. To reproduce the long-text acceleration effect reported in the paper, we recommend using the LongRoPE factors that have been validated. Change the `rope_scaling` field in the `config.json` file as follows to enable LongRoPE. After modification, you can run the following command to reproduce the long-context acceleration effect (the script will automatically download the model weights from HuggingFace). For more details about CPM.cu, please refer to the repo CPM.cu. MiniCPM4-8B supports `InfLLM v2`, a sparse attention mechanism designed for efficient long-sequence inference. It requires the `infllmv2_cuda_impl` library. You can install it by running the following command: To enable InfLLM v2, you need to add the `sparse_config` field in `config.json`: These parameters control the behavior of InfLLM v2: `kernel_size` (default: 32): The size of semantic kernels. `kernel_stride` (default: 16): The stride between adjacent kernels. `init_blocks` (default: 1): The number of initial blocks that every query token attends to. This ensures attention to the beginning of the sequence. `block_size` (default: 64): The block size for key-value blocks. `window_size` (default: 2048): The size of the local sliding window.
`topk` (default: 64): Specifies that each token computes attention with only the top-k most relevant key-value blocks. `use_nope` (default: false): Whether to use the NOPE technique in block selection for improved performance. `dense_len` (default: 8192): Since sparse attention offers limited benefits for short sequences, the model can use standard (dense) attention for shorter texts. The model will use dense attention for sequences whose token length is below `dense_len` and switch to sparse attention for sequences exceeding this length. Set this to `-1` to always use sparse attention regardless of sequence length. MiniCPM4 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques for effective handling of long texts. We have validated the model's performance on context lengths of up to 131,072 tokens by modifying the LongRoPE factor. You can apply the LongRoPE factor modification by modifying the model files. Specifically, in the `config.json` file, adjust the `rope_scaling` fields. For now, you need to install our forked version of SGLang. You can start the inference server by running the following command: Then you can use the chat interface by running the following command: Evaluation Results On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed than similar-size models on long-text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, compared to Qwen3-8B, MiniCPM4 achieves approximately a 7x decoding speed improvement. Comprehensive Evaluation MiniCPM4 launches end-side versions with 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories. Long Text Evaluation MiniCPM4 is pre-trained on 32K long texts and achieves length extension through YaRN technology. In the 128K long-text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
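A minimal sketch of the `speculative_config` setup described above, pairing MiniCPM4-8B with this Eagle draft head through vLLM's offline API. The key names follow recent vLLM releases and may differ in your version; the value of `num_speculative_tokens` is an illustrative assumption, not an official recommendation:

```python
from vllm import LLM, SamplingParams

# Sketch: Eagle speculative decoding for MiniCPM4-8B via speculative_config.
llm = LLM(
    model="openbmb/MiniCPM4-8B",
    trust_remote_code=True,
    speculative_config={
        "method": "eagle",
        "model": "openbmb/MiniCPM4-8B-Eagle-vLLM",
        "num_speculative_tokens": 2,  # assumed value; tune for your workload
    },
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Briefly introduce speculative decoding."], params)
print(outputs[0].outputs[0].text)
```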
cpm-ant-10b
AgentCPM-GUI
MiniCPM-2B-dpo-fp16
MiniCPM-Reranker-Light
NOSA-8B
OmniLMM-12B
cpm-bee-5b
BitCPM4-1B
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find technical report here.🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieves this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrate speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head for vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. (<-- you are here) - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' quiries as input and autonomously generate trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. Introduction BitCPM4 are ternary quantized models derived from the MiniCPM series models through quantization-aware training (QAT), achieving significant improvements in both training efficiency and model parameter efficiency. - Improvements of the training method - Searching hyperparameters with a wind-tunnel on a small model. - Using a two-stage training method: training in high-precision first and then QAT, making the best of the trained high-precision models and significantly reducing the computational resources required for the QAT phase. - High parameter efficiency - Achieving comparable performance to full-precision models of similar parameter models with a bit width of only 1.58 bits, demonstrating high parameter efficiency. Usage Inference with Transformers BitCPM4's parameters are stored in a fake-quantized format, which supports direct inference within the Huggingface framework. Evaluation Results BitCPM4's performance is comparable with other full-precision models in same model size. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
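Since the card states that the fake-quantized checkpoint supports direct Hugging Face inference, here is a minimal sketch of that path. The repo id matches this card; the dtype, device placement, and `trust_remote_code` flag are reasonable assumptions rather than documented requirements:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: direct inference on the fake-quantized BitCPM4-1B checkpoint.
path = "openbmb/BitCPM4-1B"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

prompt = "Write a short introduction to ternary quantization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```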
MiniCPM-S-1B-sft
MiniCPM-V-2-gguf
cpm-bee-2b
cpm-bee-1b
BitCPM4-1B-GGUF
VoxCPM2
MiniCPM-2B-128k
MiniCPM-2B-dpo-bf16
BitCPM4-0.5B-GGUF
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. - BitCPM4-0.5B-GGUF: GGUF version of BitCPM4-0.5B. (<-- you are here) - BitCPM4-1B-GGUF: GGUF version of BitCPM4-1B. Introduction BitCPM4 models are ternary quantized models derived from the MiniCPM series through quantization-aware training (QAT), achieving significant improvements in both training efficiency and model parameter efficiency. - Improvements of the training method - Searching hyperparameters with wind-tunnel experiments on a small model. - Using a two-stage training method: training in high precision first and then applying QAT, making the most of the trained high-precision models and significantly reducing the computational resources required for the QAT phase. - High parameter efficiency - Achieving performance comparable to full-precision models of similar parameter count with a bit width of only 1.58 bits, demonstrating high parameter efficiency. Evaluation Results BitCPM4's performance is comparable with other full-precision models of the same size. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
RLPR-Gemma2-2B-it
RLPR-Gemma2-2B-it is trained from Gemma2-2B-it with the RLPR framework, which eliminates reliance on external verifiers and is simple and generalizable across broader domains. 💡 Verifier-Free Reasoning Enhancement: RLPR pioneers reinforcement learning for reasoning tasks by leveraging the LLM's intrinsic generation probability as a direct reward signal. This eliminates the need for external verifiers and specialized fine-tuning, offering broad applicability and effectively handling complex, diverse answers. 🛠️ Innovative Reward & Training Framework: Features a robust Probability-based Reward (PR) that uses the average decoding probability of reference answers to produce higher-quality, debiased reward signals, outperforming naive sequence likelihood. Implements a standard-deviation filtering mechanism that dynamically filters prompts to stabilize training and significantly boost final performance. 🚀 Strong Performance in General & Mathematical Reasoning: Demonstrates substantial reasoning improvements across diverse benchmarks, surpassing the RLVR baseline by 1.4 points on average across seven benchmarks. Model Description - Trained from model: Gemma2-2B-it - Trained on data: RLPR-Train-Dataset If you find our model/code/paper helpful, please consider citing our papers 📝:
MiniCPM4-8B-GGUF
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. - MiniCPM4-8B-GGUF: GGUF version of MiniCPM4-8B. (<-- you are here) An example raw prompt from the original card (chat-template special tokens were lost in extraction; the user turn is translated from Chinese): "user\nPlease write an article about artificial intelligence, detailing its future development and potential risks.\nassistant\n" Citation - Please cite our paper if you find our work valuable:

```bibtex
@article{minicpm4,
  title={{MiniCPM4}: Ultra-Efficient LLMs on End Devices},
  author={MiniCPM Team},
  year={2025}
}
```
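A hypothetical sketch of running a GGUF build from this repository with llama-cpp-python, using the card's example prompt (translated above). The local file name is an assumption; use whichever quantization variant you downloaded:

```python
from llama_cpp import Llama

# Sketch: chat completion on a locally downloaded GGUF file.
llm = Llama(model_path="MiniCPM4-8B-Q4_K_M.gguf", n_ctx=4096)  # hypothetical file name

messages = [{
    "role": "user",
    "content": "Please write an article about artificial intelligence, "
               "detailing its future development and potential risks.",
}]
result = llm.create_chat_completion(messages=messages, max_tokens=512)
print(result["choices"][0]["message"]["content"])
```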
MiniCPM4-MCP
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. (<-- you are here) Citation - Please cite our paper if you find our work valuable:

```bibtex
@article{minicpm4,
  title={{MiniCPM4}: Ultra-Efficient LLMs on End Devices},
  author={MiniCPM Team},
  year={2025}
}
```
Eurus-7b-kto
MiniCPM4-8B-Eagle-FRSpec
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. (<-- you are here) - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. Introduction MiniCPM4-8B-Eagle-FRSpec is an Eagle model trained with MiniCPM4-8B. It can be applied in our inference framework CPM.cu with FRSpec, accelerating generation by up to 7 times compared to Qwen3-8B. Tested on two representative edge devices, the Jetson AGX Orin and the RTX 4090, MiniCPM4 with MiniCPM4-8B-Eagle-FRSpec demonstrates significantly superior processing speed over models of comparable size on long-text processing tasks. Its performance advantage becomes increasingly pronounced as the text length increases. On the Jetson AGX Orin platform, MiniCPM4 achieves approximately a 7x improvement in generation speed compared to Qwen3-8B. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
RLAIF-V-7B
Eurus-70b-nca
MiniCPM4.1-8B-MLX
MiniCPM4.1-8B-AutoAWQ
What's New - [2025.09.05] MiniCPM4.1 series are released! This series is a hybrid reasoning model, which can be used in both deep reasoning mode and non-reasoning mode. 🔥🔥🔥 - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 and MiniCPM4.1 Series MiniCPM4 and MiniCPM4.1 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4.1-8B: The latest version of MiniCPM4, with 8B parameters, supporting hybrid thinking. - MiniCPM4.1-8B-GPTQ: MiniCPM4.1-8B in GPTQ format. - MiniCPM4.1-8B-AutoAWQ: MiniCPM4.1-8B in AutoAWQ format. (<-- you are here) Click to expand all MiniCPM4 series models - MiniCPM4-8B: The flagship model with 8B parameters, trained on 8T tokens - MiniCPM4-0.5B: Lightweight version with 0.5B parameters, trained on 1T tokens - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head with QAT for FRSpec, integrating speculation and quantization for ultra acceleration - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format for speculative inference - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format - BitCPM4-0.5B: Extreme ternary quantization of MiniCPM4-0.5B, achieving 90% bit width reduction - BitCPM4-1B: Extreme ternary quantization of MiniCPM3-1B, achieving 90% bit width reduction - MiniCPM4-Survey: Generates trustworthy, long-form survey papers from user queries - MiniCPM4-MCP: Integrates MCP tools to autonomously satisfy user requirements Introduction MiniCPM4 and MiniCPM4.1 are extremely efficient edge-side large models that have undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture: - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K-long texts, significantly reducing computational overhead for long texts - 🧠 Efficient Learning Algorithms: - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for downstream-task performance, enabling more precise search over model training configurations - BitCPM -- Ultimate Ternary Quantization: Compresses model parameters into ternary values, achieving a 90% reduction in bit width - Efficient Training Engineering Optimization: Adopts FP8 low-precision computing combined with a multi-token prediction training strategy - 📚 High-Quality Training Data: - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data-cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset Ultra-FineWeb - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale, high-quality supervised fine-tuning datasets covering knowledge-intensive, reasoning-intensive, instruction-following, long-text-understanding, and tool-calling data - ⚡ Efficient Inference System: - CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding - ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities
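Since this card ships the AutoAWQ build, here is a minimal sketch of loading it with vLLM's AWQ quantization backend. The repo id matches this card; the `quantization` flag and sampling settings are illustrative assumptions:

```python
from vllm import LLM, SamplingParams

# Sketch: serving the AWQ-quantized MiniCPM4.1-8B checkpoint with vLLM.
llm = LLM(
    model="openbmb/MiniCPM4.1-8B-AutoAWQ",
    quantization="awq",
    trust_remote_code=True,
)
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize the MiniCPM4.1 series in one paragraph."], params)
print(outputs[0].outputs[0].text)
```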
RLPR-Qwen2.5-7B-Base
RLPR-Qwen2.5-7B-Base is trained from Qwen2.5-7B-Base with the RLPR framework, which eliminates reliance on external verifiers and is simple and generalizable across broader domains. 💡 Verifier-Free Reasoning Enhancement: RLPR pioneers reinforcement learning for reasoning tasks by leveraging the LLM's intrinsic generation probability as a direct reward signal. This eliminates the need for external verifiers and specialized fine-tuning, offering broad applicability and effectively handling complex, diverse answers. 🛠️ Innovative Reward & Training Framework: Features a robust Probability-based Reward (PR) that uses the average decoding probability of reference answers to produce higher-quality, debiased reward signals, outperforming naive sequence likelihood. Implements a standard-deviation filtering mechanism that dynamically filters prompts to stabilize training and significantly boost final performance. 🚀 Strong Performance in General & Mathematical Reasoning: Demonstrates substantial reasoning improvements across diverse benchmarks (e.g., 56.0 on MMLU-Pro and 55.4 on TheoremQA with Qwen2.5-7B). RLPR surpasses strong models reliant on external verifiers (like General Reasoner-7B). Model Description - Trained from model: Qwen2.5-7B - Trained on data: RLPR-Train-Dataset If you find our model/code/paper helpful, please consider citing our papers 📝:
UltraCM-13b
Eurus-70b-sft
RLPR-Llama3.1-8B-Inst
RLPR-Llama3.1-8B-Inst is trained from Llama3.1-8B-Inst with the RLPR framework, which eliminates reliance on external verifiers and is simple and generalizable across broader domains. 💡 Verifier-Free Reasoning Enhancement: RLPR pioneers reinforcement learning for reasoning tasks by leveraging the LLM's intrinsic generation probability as a direct reward signal. This eliminates the need for external verifiers and specialized fine-tuning, offering broad applicability and effectively handling complex, diverse answers. 🛠️ Innovative Reward & Training Framework: Features a robust Probability-based Reward (PR) that uses the average decoding probability of reference answers to produce higher-quality, debiased reward signals, outperforming naive sequence likelihood. Implements a standard-deviation filtering mechanism that dynamically filters prompts to stabilize training and significantly boost final performance. 🚀 Strong Performance in General & Mathematical Reasoning: Demonstrates substantial reasoning improvements across diverse benchmarks, surpassing the RLVR baseline by 1.4 points on average across seven benchmarks. Model Description - Trained from model: Llama-3.1-8B-Instruct - Trained on data: RLPR-Train-Dataset If you find our model/code/paper helpful, please consider citing our papers 📝:
Eurux-8x22b-nca
MiniCPM-2B-history
MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. (<-- you are here) - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. Introduction MiniCPM4-8B-Eagle-FRSpec-QAT is a quantization-friendly Eagle model trained with MiniCPM4-8B using QAT. It can be applied in our inference framework CPM.cu with FRSpec, accelerating generation by up to 7 times compared to Qwen3-8B. Tested on two representative edge devices, the Jetson AGX Orin and the RTX 4090, MiniCPM4 with MiniCPM4-8B-Eagle-FRSpec-QAT demonstrates significantly superior processing speed over models of comparable size on long-text processing tasks. Its performance advantage becomes increasingly pronounced as the text length increases. On the Jetson AGX Orin platform, MiniCPM4 achieves approximately a 7x improvement in generation speed compared to Qwen3-8B. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
MiniCPM-2B-sft-fp32-llama-format
MiniCPM4-0.5B-QAT-Int4-GPTQ-format
MiniCPM-2B-dpo-fp32
MiniCPM-2B-sft-bf16-llama-format
MiniCPM4-8B-marlin-Eagle-vLLM
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. (<-- you are here) - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. Introduction MiniCPM 4 is an extremely efficient edge-side large model that has undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture: - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K-long texts, significantly reducing computational overhead for long texts - 🧠 Efficient Learning Algorithms: - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for downstream-task performance, enabling more precise search over model training configurations - BitCPM -- Ultimate Ternary Quantization: Compresses model parameters into ternary values, achieving a 90% reduction in bit width - Efficient Training Engineering Optimization: Adopts FP8 low-precision computing combined with a multi-token prediction training strategy - 📚 High-Quality Training Data: - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data-cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset Ultra-FineWeb - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale, high-quality supervised fine-tuning datasets covering knowledge-intensive, reasoning-intensive, instruction-following, long-text-understanding, and tool-calling data - ⚡ Efficient Inference System: - FRSpec -- Lightweight Speculative Sampling: Achieves draft-model acceleration through vocabulary pruning of the draft model - ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities Using Quantized Eagle Speculative Decoding with vLLM For now, you need to install the latest version of vLLM. Then you can use quantized Eagle speculative decoding to run inference on MiniCPM4-8B with vLLM. Use `speculative_config` to set the draft model. We recommend using CPM.cu for the inference of MiniCPM4. CPM.cu is a CUDA inference framework developed by OpenBMB, which integrates efficient sparse attention, speculative sampling, and quantization techniques, fully leveraging the efficiency advantages of MiniCPM4. You can install CPM.cu by running the following command: MiniCPM4 natively supports context lengths of up to 32,768 tokens. To reproduce the long-text acceleration effect reported in the paper, we recommend using the LongRoPE factors that have been validated. Change the `rope_scaling` field in the `config.json` file as follows to enable LongRoPE. After modification, you can run the following command to reproduce the long-context acceleration effect (the script will automatically download the model weights from HuggingFace). For more details about CPM.cu, please refer to the repo CPM.cu. MiniCPM4-8B supports `InfLLM v2`, a sparse attention mechanism designed for efficient long-sequence inference. It requires the `infllmv2_cuda_impl` library. You can install it by running the following command: To enable InfLLM v2, you need to add the `sparse_config` field in `config.json` (see the sketch at the end of this section): These parameters control the behavior of InfLLM v2: `kernel_size` (default: 32): The size of semantic kernels. `kernel_stride` (default: 16): The stride between adjacent kernels. `init_blocks` (default: 1): The number of initial blocks that every query token attends to. This ensures attention to the beginning of the sequence. `block_size` (default: 64): The block size for key-value blocks.
`window_size` (default: 2048): The size of the local sliding window. `topk` (default: 64): Specifies that each token computes attention with only the top-k most relevant key-value blocks. `use_nope` (default: false): Whether to use the NOPE technique in block selection for improved performance. `dense_len` (default: 8192): Since sparse attention offers limited benefits for short sequences, the model can use standard (dense) attention for shorter texts. The model will use dense attention for sequences whose token length is below `dense_len` and switch to sparse attention for sequences exceeding this length. Set this to `-1` to always use sparse attention regardless of sequence length. MiniCPM4 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques for effective handling of long texts. We have validated the model's performance on context lengths of up to 131,072 tokens by modifying the LongRoPE factor. You can apply the LongRoPE factor modification by modifying the model files. Specifically, in the `config.json` file, adjust the `rope_scaling` fields. For now, you need to install our forked version of SGLang. You can start the inference server by running the following command: Then you can use the chat interface by running the following command: Evaluation Results On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed than similar-size models on long-text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, compared to Qwen3-8B, MiniCPM4 achieves approximately a 7x decoding speed improvement. Comprehensive Evaluation MiniCPM4 launches end-side versions with 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories. Long Text Evaluation MiniCPM4 is pre-trained on 32K long texts and achieves length extension through YaRN technology. In the 128K long-text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
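A sketch of the `sparse_config` edit referenced above, adding the field to the model's `config.json` with the default values documented in this card:

```python
import json

# Sketch: enable InfLLM v2 by adding sparse_config to config.json,
# using the documented defaults for each parameter.
with open("config.json") as f:
    config = json.load(f)

config["sparse_config"] = {
    "kernel_size": 32,    # size of semantic kernels
    "kernel_stride": 16,  # stride between adjacent kernels
    "init_blocks": 1,     # initial blocks every query token attends to
    "block_size": 64,     # key-value block size
    "window_size": 2048,  # local sliding window size
    "topk": 64,           # top-k most relevant key-value blocks
    "use_nope": False,    # NOPE technique in block selection
    "dense_len": 8192,    # below this length, fall back to dense attention
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```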
Eurux-8x22b-kto
MiniCPM4.1-8B-Marlin
MiniCPM4-0.5B-mlx
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 - [2025.06.09] MiniCPM4-8B-mlx and MiniCPM4-0.5B-mlx are available and you can run MiniCPM4 on your Apple devices! Thanks to pzc163 for providing this converted model version and related usage instructions. MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B-mlx: MiniCPM4-8B in MLX format, which can be used on Apple silicon. - MiniCPM4-0.5B-mlx: MiniCPM4-0.5B in MLX format, which can be used on Apple silicon. (<-- you are here) - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. Introduction MiniCPM 4 is an extremely efficient edge-side large model that has undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture: - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K-long texts, significantly reducing computational overhead for long texts - 🧠 Efficient Learning Algorithms: - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for downstream-task performance, enabling more precise search over model training configurations - BitCPM -- Ultimate Ternary Quantization: Compresses model parameters into ternary values, achieving a 90% reduction in bit width - Efficient Training Engineering Optimization: Adopts FP8 low-precision computing combined with a multi-token prediction training strategy - 📚 High-Quality Training Data: - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data-cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset Ultra-FineWeb - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale, high-quality supervised fine-tuning datasets covering knowledge-intensive, reasoning-intensive, instruction-following, long-text-understanding, and tool-calling data - ⚡ Efficient Inference System: - CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding - ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities Here is a guide on how to run the `MiniCPM4-0.5B-mlx` model from the command line using `mlx-lm`, a tool that lets you quickly test and use LLMs in the MLX format directly from your terminal. Basic Usage Here is a specific example. This command will load the `openbmb/MiniCPM4-0.5B-mlx` model and generate text based on the prompt you provide: "hello, pls tell me which one is the most powerful LLM in the World". MLX-LM Command Line Parameters - `mlx_lm.generate`: This is the primary command in the mlx-lm toolkit used for text generation. - `--model openbmb/MiniCPM4-0.5B-mlx`: This parameter specifies the model to be loaded. `openbmb/MiniCPM4-0.5B-mlx` is the model's identifier on the Hugging Face Hub. mlx-lm will automatically download and cache the model from there. - `--prompt "..."`: This parameter is used to provide the initial text that you want the model to respond to or complete. - `--max-tokens`: Sets the maximum number of tokens to generate. For example, `--max-tokens 200` will limit the output to 200 tokens. - `--temp`: Controls the randomness of the output. Higher temperature values (like 0.8) will produce more diverse and creative outputs, while lower values (like 0.2) will make the output more deterministic and focused. The default value is usually 0.6. - `--seed`: Sets a random seed to ensure reproducible results. Notably, MiniCPM4-0.5B should be prompted with the `bos_token`. (A Python-API equivalent of these commands is sketched at the end of this section.)
The following command will use a higher temperature value and limit the output length: Evaluation Results On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed compared to similar-size models in long text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, compared to Qwen3-8B, MiniCPM4 achieves approximately 7x decoding speed improvement. Comprehensive Evaluation MiniCPM4 launches end-side versions with 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories. Long Text Evaluation MiniCPM4 is pre-trained on 32K long texts and achieves length extension through YaRN technology. In the 128K long text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
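As referenced above, a Python-API counterpart to the `mlx_lm.generate` command line, using the card's own model id and example prompt. The parameter names mirror the CLI flags; other details are assumptions about the mlx-lm Python interface:

```python
from mlx_lm import load, generate

# Sketch: load the MLX-format model and mirror the CLI example.
model, tokenizer = load("openbmb/MiniCPM4-0.5B-mlx")
text = generate(
    model,
    tokenizer,
    prompt="hello, pls tell me which one is the most powerful LLM in the World",
    max_tokens=200,  # mirrors --max-tokens 200
)
print(text)
```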
Eurus-7b-sft
MiniCPM4-0.5B-QAT-Int4-unquantized
MiniCPM-2B-dpo-bf16-llama-format
MiniCPM4-8B-marlin-cpmcu
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. (<-- you are here) - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. Introduction MiniCPM 4 is an extremely efficient edge-side large model that has undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture: - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K-long texts, significantly reducing computational overhead for long texts - 🧠 Efficient Learning Algorithms: - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for downstream-task performance, enabling more precise search over model training configurations - BitCPM -- Ultimate Ternary Quantization: Compresses model parameters into ternary values, achieving a 90% reduction in bit width - Efficient Training Engineering Optimization: Adopts FP8 low-precision computing combined with a multi-token prediction training strategy - 📚 High-Quality Training Data: - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data-cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset Ultra-FineWeb - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale, high-quality supervised fine-tuning datasets covering knowledge-intensive, reasoning-intensive, instruction-following, long-text-understanding, and tool-calling data - ⚡ Efficient Inference System: - CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding - ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities We recommend using CPM.cu for the inference of MiniCPM4. CPM.cu is a CUDA inference framework developed by OpenBMB, which integrates efficient sparse attention, speculative sampling, and quantization techniques, fully leveraging the efficiency advantages of MiniCPM4. You can install CPM.cu by running the following command: MiniCPM4 natively supports context lengths of up to 32,768 tokens. To reproduce the long-text acceleration effect reported in the paper, we recommend using the LongRoPE factors that have been validated. Change the `rope_scaling` field in the `config.json` file as follows to enable LongRoPE (a sketch of this edit follows this section). After modification, you can run the following command to reproduce the long-context acceleration effect (the script will automatically download the model weights from HuggingFace). For more details about CPM.cu, please refer to the repo CPM.cu. Evaluation Results On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed than similar-size models on long-text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, compared to Qwen3-8B, MiniCPM4 achieves approximately a 7x decoding speed improvement. Comprehensive Evaluation MiniCPM4 launches end-side versions with 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories. Long Text Evaluation MiniCPM4 is pre-trained on 32K long texts and achieves length extension through YaRN technology. In the 128K long-text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text.
- However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
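A hedged sketch of the LongRoPE `rope_scaling` edit described in this section. The field structure follows the LongRoPE convention in Transformers; the factor arrays below are placeholders, not the validated values, which are published in the official MiniCPM4-8B repository:

```python
import json

# Sketch: enable LongRoPE by editing rope_scaling in config.json.
with open("config.json") as f:
    config = json.load(f)

config["rope_scaling"] = {
    "rope_type": "longrope",
    "long_factor": [1.0] * 32,   # placeholder; use the validated factors
    "short_factor": [1.0] * 32,  # placeholder; use the validated factors
    "original_max_position_embeddings": 32768,  # native context length
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```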
RLAIF-V-12B
MiniCPM4-8B-marlin-vLLM
What's New - [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥 MiniCPM4 Series MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - MiniCPM4-8B: The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. - MiniCPM4-0.5B: The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens. - MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head trained with QAT for FRSpec, efficiently integrating speculation and quantization to achieve ultra acceleration for MiniCPM4-8B. - MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B. - BitCPM4-0.5B: Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - BitCPM4-1B: Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width. - MiniCPM4-Survey: Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers. - MiniCPM4-MCP: Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements. Introduction MiniCPM 4 is an extremely efficient edge-side large model that has undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture: - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K-long texts, significantly reducing computational overhead for long texts - 🧠 Efficient Learning Algorithms: - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for downstream-task performance, enabling more precise search over model training configurations - BitCPM -- Ultimate Ternary Quantization: Compresses model parameters into ternary values, achieving a 90% reduction in bit width - Efficient Training Engineering Optimization: Adopts FP8 low-precision computing combined with a multi-token prediction training strategy - 📚 High-Quality Training Data: - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data-cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset Ultra-FineWeb - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale, high-quality supervised fine-tuning datasets covering knowledge-intensive, reasoning-intensive, instruction-following, long-text-understanding, and tool-calling data - ⚡ Efficient Inference System: - FRSpec -- Lightweight Speculative Sampling: Achieves draft-model acceleration through vocabulary pruning of the draft model - ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities Using Quantized Eagle Speculative Decoding with vLLM For now, you need to install the latest version of vLLM. Then you can use quantized Eagle speculative decoding to run inference on MiniCPM4-8B with vLLM. Use `speculative_config` to set the draft model. Inference Quantized MiniCPM4-8B with vLLM For now, you need to install the latest version of vLLM. Then you can run inference on the quantized MiniCPM4-8B with vLLM (see the sketch at the end of this section). We recommend using CPM.cu for the inference of MiniCPM4. CPM.cu is a CUDA inference framework developed by OpenBMB, which integrates efficient sparse attention, speculative sampling, and quantization techniques, fully leveraging the efficiency advantages of MiniCPM4. You can install CPM.cu by running the following command: MiniCPM4 natively supports context lengths of up to 32,768 tokens. To reproduce the long-text acceleration effect reported in the paper, we recommend using the LongRoPE factors that have been validated. Change the `rope_scaling` field in the `config.json` file as follows to enable LongRoPE. After modification, you can run the following command to reproduce the long-context acceleration effect (the script will automatically download the model weights from HuggingFace). For more details about CPM.cu, please refer to the repo CPM.cu. MiniCPM4-8B supports `InfLLM v2`, a sparse attention mechanism designed for efficient long-sequence inference. It requires the `infllmv2_cuda_impl` library. You can install it by running the following command: To enable InfLLM v2, you need to add the `sparse_config` field in `config.json`: These parameters control the behavior of InfLLM v2: `kernel_size` (default: 32): The size of semantic kernels. `kernel_stride` (default: 16): The stride between adjacent kernels. `init_blocks` (default: 1): The number of initial blocks that every query token attends to.
This ensures attention to the beginning of the sequence. `blocksize` (default: 64): The block size for key-value blocks. `windowsize` (default: 2048): The size of the local sliding window. `topk` (default: 64): The specifies that each token computes attention with only the top-k most relevant key-value blocks. `usenope` (default: false): Whether to use the NOPE technique in block selection for improved performance. `denselen` (default: 8192): Since Sparse Attention offers limited benefits for short sequences, the model can use standard (dense) attention for shorter texts. The model will use dense attention for sequences with a token length below `denselen` and switch to sparse attention for sequences exceeding this length. Set this to `-1` to always use sparse attention regardless of sequence length. MiniCPM4 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques for effective handling of long texts. We have validated the model's performance on context lengths of up to 131,072 tokens by modifying the LongRoPE factor. You can apply the LongRoPE factor modification by modifying the model files. Specifically, in the `config.json` file, adjust the `ropescaling` fields. For now, you need to install our forked version of SGLang. You can start the inference server by running the following command: Then you can use the chat interface by running the following command: Evaluation Results On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed compared to similar-size models in long text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, compared to Qwen3-8B, MiniCPM4 achieves approximately 7x decoding speed improvement. Comprehensive Evaluation MiniCPM4 launches end-side versions with 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories. Long Text Evaluation MiniCPM4 is pre-trained on 32K long texts and achieves length extension through YaRN technology. In the 128K long text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance. Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. LICENSE - This repository and MiniCPM models are released under the Apache-2.0 License. Citation - Please cite our paper if you find our work valuable.
NOSA-3B
MiniCPM-2B-dpo-int4
MiniCPM-2B-sft-int4
VisCPM-Chat
MiniCPM4-Survey
MiniCPM3-4B-GPTQ-Int4
MiniCPM4-0.5B-QAT-Int4-GGUF
MiniCPM4-8B-mlx
Eurus-RM-7b
Ultra-FineWeb-classifier
NOSA-1B
RLHF-V
RLHF-V-SFT
MiniCPM-S-1B-sft-llama-format
- Original model: MiniCPM-1B-sft-bf16
- Model creator and fine-tuned by: ModelBest, OpenBMB, and THUNLP
- Paper: link (Note: `MiniCPM-S-1B` is denoted as `ProSparse-1B` in the paper.)
- Adapted PowerInfer version: MiniCPM-S-1B-sft-gguf

This model is converted from MiniCPM-S-1B-sft into the LLaMA format to make it more convenient to use. To get well-formed responses to a query, it is recommended to use the standard chat prompt, such as `<用户>{prompt}<AI>`, where `prompt` is the query text, while `<用户>` and `<AI>` are prompt tokens. Also, make sure there is a bos token `<s>` at the beginning of any input, or the model can sometimes behave improperly (see the usage sketch below).

The utilization of activation sparsity, namely the existence of considerable numbers of weakly-contributed elements among activation outputs, is a promising method for the inference acceleration of large language models (LLMs) (Liu et al., 2023; Song et al., 2023). Concretely, acceleration methods based on activation sparsity usually achieve higher inference speed through wiser resource-allocation and computation policies that avoid wasting resources on these weakly-contributed parameters.

Adopting ReLU as the activation function is a straightforward way to achieve activation sparsity. However, most recent mainstream LLMs adopt activation functions without intrinsic sparsity (e.g., GELU and Swish). Some efforts (Zhang et al., 2022; Mirzadeh et al., 2023; Zhang et al., 2024) introduce ReLU or its variants as substitute activation functions to help non-ReLU LLMs achieve activation sparsity and inference acceleration, but few can concurrently obtain both high sparsity and comparable task-specific performance.

In this work, we introduce a simple and effective sparsification method named "ProSparse" to push LLMs toward higher activation sparsity while maintaining comparable performance. By applying ProSparse to Swish-activated LLaMA2-7B, LLaMA2-13B, and MiniCPM-1B, we obtain ReLU-activated models with high sparsity of 89.32%, 88.80%, and 87.89%, respectively, while their performance remains comparable to the original versions. These are the most sparsely activated models among open-source LLaMA versions and competitive end-side models, considerably surpassing ReluLLaMA-7B (66.98%) and ReluLLaMA-13B (71.56%). Further inference acceleration experiments demonstrate the practical speedup effects of higher sparsity on both PowerInfer and our two sparse GPU operators.

We train the 1B model on about 473.02 billion tokens within 101,000 steps. These consist of 35,000 steps for standard ProSparse pre-training, 60,000 steps for decay, and 6,000 steps for SFT. Except for ProSparse, other training settings are highly consistent with the original MiniCPM-1B. Refer to our paper and the MiniCPM technical report for more details. Intuitively, training the model with even more tokens, or with data of wider coverage and higher quality, would yield better task-specific performance.
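A minimal generation sketch under the prompt format described above; the model id matches this card's title, and everything else is standard transformers usage rather than an official snippet:

```python
# Hedged sketch: chat with MiniCPM-S-1B-sft-llama-format using the
# <用户>{prompt}<AI> format described above. Most tokenizers prepend the
# bos token <s> automatically; verify this for your tokenizer version.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM-S-1B-sft-llama-format"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "What is activation sparsity?"
inputs = tokenizer(f"<用户>{prompt}<AI>", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```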
The training process of ProSparse consists of three steps (refer to Section 3.2 of the paper for more details):

1. Activation Function Substitution: We substitute the activation function of FFNs with ReLU and apply continual training.
2. Progressive Sparsity Regularization: We jointly optimize the model on the conventional next-token prediction loss and an \\(L_1\\) regularization loss. The regularization is applied to the sparse intermediate outputs of FFNs with a regularization factor that increases progressively in multiple stages. Specifically, the regularization factor \\(\lambda\\) is set to a small constant for the warmup stage, and then increases along a smooth sine curve for each of the subsequent incremental stages. Each stage is accompanied by a certain number of training steps. In this way, the model has more time to adapt to the increasing regularization without radical activation shifts, thus alleviating performance degradation.
3. Activation Threshold Shifting: We finally replace ReLU with FATReLU (Kurtz et al., 2020), a ReLU variant with a positive threshold. This prunes the non-zero weakly-contributed elements in activation outputs and further boosts sparsity. (A minimal sketch of steps 2-3 appears after the evaluation notes below.)

The hyper-parameters for each stage (including the regularization factor \\(\lambda_i\\), the accumulated training steps \\(T_i\\), and the accumulated training tokens) are shown as follows:

| Step Number \\(i\\) | \\(\lambda_i\\) | \\(T_i\\) | Accumulated Tokens (B) |
| :-------------: | :---------: | :----: | :--------------------: |
| 0 | 0 | 10,000 | 49.15 |
| 1 | \\(1e-3\\) | 15,000 | 73.73 |
| 2 | \\(5e-3\\) | 20,000 | 98.30 |
| 3 | \\(5e-3\\) | 25,000 | 122.88 |
| 4 | \\(5e-2\\) | 35,000 | 172.03 |
| decay | \\(5e-2\\) (fixed) | 95,000 | 466.94 |
| SFT | \\(1e-2\\) (fixed) | 101,000 | 473.02 |

The evaluation results on the benchmarks below demonstrate the advantage of ProSparse, which is the only method achieving both high sparsity and performance comparable to the original Swish-activated LLaMA2. Note that models under all settings are trained with the same number of tokens on the same mixed dataset. Our evaluation is based on the UltraEval framework. The evaluation details are as follows:
- Code Generation: We compute the average pass@1 scores on HumanEval (0-shot) and MBPP (3-shot).
- Commonsense Reasoning: We report the average 0-shot accuracies on PIQA, SIQA, HellaSwag, WinoGrande, and COPA.
- Reading Comprehension: We compute the average 0-shot accuracies on BoolQ, LAMBADA, and TyDi QA.
- Other Popular Benchmarks: We report the average accuracies on GSM8K (8-shot), MMLU (5-shot), Big Bench Hard (BBH) (3-shot), and AGI-Eval (0-shot).

Notes: For PIQA, SIQA, HellaSwag, WinoGrande, COPA, BoolQ, LAMBADA, TyDi QA, and AGI-Eval, we obtain the predicted answers based on maximized perplexity. For GSM8K, MMLU, and BBH, the predicted answers are directly generated.
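As a companion to steps 2-3 above, a hedged sketch of the progressive regularization factor and FATReLU; the sine-curve schedule is an illustrative reading of the description above, not the released ProSparse training code:

```python
# Hedged sketch: sine-curve lambda schedule (step 2) and FATReLU (step 3),
# as described above. Illustrative only, not the official implementation.
import math
import torch

def lambda_at(step: int, lam_prev: float, lam_target: float,
              stage_start: int, stage_end: int) -> float:
    """Increase lambda from lam_prev to lam_target along a smooth sine curve
    within one incremental stage."""
    t = (step - stage_start) / max(1, stage_end - stage_start)
    t = min(max(t, 0.0), 1.0)
    return lam_prev + (lam_target - lam_prev) * math.sin(0.5 * math.pi * t)

def fatrelu(x: torch.Tensor, threshold: float = 0.01) -> torch.Tensor:
    """FATReLU (Kurtz et al., 2020): ReLU with a positive threshold, which also
    zeroes weakly-contributed small positive activations."""
    return torch.where(x > threshold, x, torch.zeros_like(x))

def l1_penalty(intermediate: torch.Tensor, lam: float) -> torch.Tensor:
    """L1 regularization on the sparse intermediate FFN outputs (step 2)."""
    return lam * intermediate.abs().mean()
```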
| Setting | Average Sparsity | Average Performance | Code Generation | Commonsense Reasoning | Reading Comprehension | GSM8K | MMLU | BBH | AGI Eval |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| LLaMA2-7B | - | 37.96 | 16.37 | 69.59 | 61.87 | 12.96 | 44.45 | 32.96 | 27.53 |
| ReluLLaMA-7B | 66.98 | 37.62 | 15.85 | 69.64 | 70.54 | 5.84 | 38.64 | 35.07 | 27.73 |
| ProSparse-7B\* | 88.11 | 38.31 | 19.47 | 66.29 | 63.33 | 12.74 | 45.21 | 33.59 | 27.55 |
| ProSparse-7B | 89.32 | 38.46 | 19.42 | 66.27 | 63.50 | 12.13 | 45.48 | 34.99 | 27.46 |
| LLaMA2-13B | - | 44.06 | 20.19 | 72.58 | 71.55 | 22.21 | 54.69 | 37.89 | 29.33 |
| ReluLLaMA-13B | 71.56 | 42.74 | 20.19 | 70.44 | 73.29 | 18.50 | 50.58 | 37.97 | 28.22 |
| ProSparse-13B\* | 87.97 | 45.07 | 29.03 | 69.75 | 67.54 | 25.40 | 54.78 | 40.20 | 28.76 |
| ProSparse-13B | 88.80 | 44.90 | 28.42 | 69.76 | 66.91 | 26.31 | 54.35 | 39.90 | 28.67 |
| MiniCPM-1B | - | 44.44 | 36.85 | 63.67 | 60.90 | 35.48 | 50.44 | 35.03 | 28.71 |
| MiniCPM-S-1B\* | 86.25 | 44.72 | 41.38 | 64.55 | 60.69 | 34.72 | 49.36 | 34.04 | 28.27 |
| MiniCPM-S-1B | 87.89 | 44.72 | 42.04 | 64.37 | 60.73 | 34.57 | 49.51 | 34.08 | 27.77 |

Notes: "Original" refers to the original Swish-activated LLaMA2 versions. ReluLLaMA-7B and ReluLLaMA-13B are openly available, as is MiniCPM-1B. "ProSparse-7B\*", "ProSparse-13B\*", and "MiniCPM-S-1B\*" denote the ProSparse versions without activation threshold shifting.

The above results can be replicated with UltraEval. Some abnormal results obtained with other popular frameworks such as LM-Eval are probably attributable to the absence of the cls token `<s>`, which is not added by default in LM-Eval. A quick temporary fix is sketched at the end of this card. Other differences in evaluation results may be caused by other factors, including few-shot settings, data pre-processing, and extra prompts.

```bibtex
@article{song2024prosparse,
  title={{ProSparse}: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models},
  author={Song, Chenyang and Han, Xu and Zhang, Zhengyan and Hu, Shengding and Shi, Xiyu and Li, Kuai and Chen, Chen and Liu, Zhiyuan and Li, Guangli and Yang, Tao and Sun, Maosong},
  year={2024},
  journal={arXiv preprint arXiv:2402.13516},
  url={https://arxiv.org/pdf/2402.13516.pdf}
}
```

This repository is released under the Apache-2.0 License. The usage of MiniCPM model weights must strictly follow the General Model License (GML). The models and weights of MiniCPM are completely free for academic research. If you intend to utilize the model for commercial purposes, please reach out to [email protected] to obtain the certificate of authorization.

As a language model, MiniCPM generates content by learning from a vast amount of text. However, it does not possess the ability to comprehend or express personal opinions or value judgments. Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.

The model card is modified from ReluLLaMA-7B and MiniCPM-1B.
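The temporary fix referenced above, sketched under the assumption that the issue is LM-Eval not prepending `<s>` during encoding; the hook point (`HFLM.tok_encode`) follows recent lm-eval versions and may differ in yours:

```python
# Hedged sketch of the temporary LM-Eval fix referenced above: force the
# tokenizer to prepend the bos/cls token <s> when encoding inputs.
from lm_eval.models.huggingface import HFLM

_orig_tok_encode = HFLM.tok_encode

def tok_encode_with_bos(self, string, **kwargs):
    ids = _orig_tok_encode(self, string, **kwargs)
    bos = self.tokenizer.bos_token_id
    if bos is not None and (not ids or ids[0] != bos):
        ids = [bos] + ids  # ensure <s> leads every encoded input
    return ids

HFLM.tok_encode = tok_encode_with_bos
```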
minicpm-dpo-bf16-ggml-model-q4_0
VisCPM-Paint
MiniCPM3-RAG-LoRA
MiniCPM-S-1B-sft-gguf
AgentCPM-Explore-GGUF
DensingLaw-ScalingModels
InfLLM-V2-Short-Dense-Base
EVisRAG-7B
MiniCPM4.1-8B-Eagle3
What's New
- [2025.09.05] MiniCPM4.1 series are released! This series is a hybrid reasoning model, which can be used in both deep reasoning mode and non-reasoning mode (see the sketch at the end of this card). 🔥🔥🔥
- [2025.06.06] MiniCPM4 series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report here. 🔥🔥🔥

MiniCPM4 and MiniCPM4.1 Series
MiniCPM4 and MiniCPM4.1 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.
- MiniCPM4.1-8B: The latest version of MiniCPM4, with 8B parameters, supporting fusion thinking.
- MiniCPM4.1-8B-GPTQ: MiniCPM4.1-8B in GPTQ format.
- MiniCPM4.1-8B-AutoAWQ: MiniCPM4.1-8B in AutoAWQ format.
- MiniCPM4.1-8B-Marlin: MiniCPM4.1-8B in Marlin format.
- MiniCPM4.1-8B-GGUF: MiniCPM4.1-8B in GGUF format.
- MiniCPM4.1-8B-MLX: MiniCPM4.1-8B in MLX format.
- MiniCPM4.1-8B-Eagle3: Eagle3 model for MiniCPM4.1-8B.

Click to expand all MiniCPM4 series models
- MiniCPM4-8B: The flagship model with 8B parameters, trained on 8T tokens
- MiniCPM4-0.5B: Lightweight version with 0.5B parameters, trained on 1T tokens
- MiniCPM4-8B-Eagle-FRSpec: Eagle head for FRSpec, accelerating speculative inference
- MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu: Eagle head with QAT for FRSpec, integrating speculation and quantization for ultra acceleration
- MiniCPM4-8B-Eagle-vLLM: Eagle head in vLLM format for speculative inference
- MiniCPM4-8B-marlin-Eagle-vLLM: Quantized Eagle head in vLLM format
- BitCPM4-0.5B: Extreme ternary quantization of MiniCPM4-0.5B, achieving 90% bit-width reduction
- BitCPM4-1B: Extreme ternary quantization of MiniCPM3-1B, achieving 90% bit-width reduction
- MiniCPM4-Survey: Generates trustworthy, long-form survey papers from user queries
- MiniCPM4-MCP: Integrates MCP tools to autonomously satisfy user requirements

Introduction
MiniCPM4 and MiniCPM4.1 are extremely efficient edge-side large models that have undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ Efficient Model Architecture:
  - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism in which each token only needs to compute relevance with less than 5% of tokens when processing 128K long texts, significantly reducing the computational overhead of long texts
- 🧠 Efficient Learning Algorithms:
  - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for downstream task performance, enabling more precise search over model training configurations
  - BitCPM -- Ultimate Ternary Quantization: Compresses model parameter bit-width to 3 values, achieving a 90% extreme reduction in model bit-width
  - Efficient Training Engineering Optimization: Adopts FP8 low-precision computing combined with a Multi-token Prediction training strategy
- 📚 High-Quality Training Data:
  - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data-cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset UltraFinweb
  - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale, high-quality supervised fine-tuning datasets covering multiple dimensions, including knowledge-intensive, reasoning-intensive, instruction-following, long-text-understanding, and tool-calling data
- ⚡ Efficient Inference System:
  - CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding
  - ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities
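Since MiniCPM4.1 is a hybrid reasoning model, a hedged usage sketch follows; the `enable_thinking` switch is an assumption modeled on common hybrid-reasoning chat templates, not a confirmed toggle, so check the MiniCPM4.1 model card for the actual interface:

```python
# Hedged sketch: toggling deep reasoning vs. non-reasoning mode for MiniCPM4.1-8B.
# The enable_thinking kwarg is an assumed chat-template switch, not a confirmed API.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM4.1-8B"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, trust_remote_code=True, device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are below 100?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # assumed: True = deep reasoning, False = non-reasoning
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```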