internlm
Intern-S1-Pro
internlm2-1_8b-reward
Intern-S1
internlm2-chat-7b
internlm2_5-7b-chat
💻Github Repo • 🤔Reporting Issues • 📜Technical Report

InternLM2.5 has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:

- Outstanding reasoning capability: state-of-the-art performance on math reasoning, surpassing models such as Llama3 and Gemma2-9B.
- 1M context window: nearly perfect needle-in-a-haystack retrieval over a 1M-token context, with leading performance on long-context tasks such as LongBench. Try it with LMDeploy for 1M-context inference.
- Stronger tool use: InternLM2.5 supports gathering information from more than 100 web pages; the corresponding implementation has been released in MindSearch. InternLM2.5 has better tool-use capabilities in instruction following, tool selection, and reflection. See examples.

We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool OpenCompass. The evaluation covered five dimensions of capability: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results; you can visit the OpenCompass leaderboard for more.
| Benchmark | InternLM2.5-7B-Chat | Llama3-8B-Instruct | Gemma2-9B-IT | Yi-1.5-9B-Chat | GLM-4-9B-Chat | Qwen2-7B-Instruct |
| ------------------ | ------------------- | ------------------ | ------------ | -------------- | ------------- | ----------------- |
| MMLU (5-shot) | 72.8 | 68.4 | 70.9 | 71.0 | 71.4 | 70.8 |
| CMMLU (5-shot) | 78.0 | 53.3 | 60.3 | 74.5 | 74.5 | 80.9 |
| BBH (3-shot CoT) | 71.6 | 54.4 | 68.2* | 69.6 | 69.6 | 65.0 |
| MATH (0-shot CoT) | 60.1 | 27.9 | 46.9 | 51.1 | 51.1 | 48.6 |
| GSM8K (0-shot CoT) | 86.0 | 72.9 | 88.9 | 80.1 | 85.3 | 82.9 |
| GPQA (0-shot) | 38.4 | 26.1 | 33.8 | 37.9 | 36.9 | 38.4 |

- The evaluation results were obtained from OpenCompass (results marked with * are taken from the original papers); the evaluation configuration can be found in the configuration files provided by OpenCompass.
- The evaluation numbers may differ across versions of OpenCompass, so please refer to the latest OpenCompass results.

Limitations: Although we have made efforts to ensure the safety of the model during training and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

To load the InternLM2.5 7B Chat model using Transformers, use the following code:

internlm/internlm2_5-7b-chat-gguf offers `internlm2_5-7b-chat` models in GGUF format, in both half precision and various low-bit quantized versions, including `q5_0`, `q5_k_m`, `q6_k`, and `q8_0`.

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
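The Transformers route mentioned above can be sketched as follows. This is a minimal, hedged example: it assumes the 🤗 repo id `internlm/internlm2_5-7b-chat` and the `chat()` helper shipped in the repo's remote code; the heavy imports and model load are kept inside `main()` because they require a CUDA GPU.

```python
# Minimal sketch: load InternLM2.5-7B-Chat with Transformers.
# Assumptions: repo id internlm/internlm2_5-7b-chat; the repo's remote
# code provides a chat() helper; a CUDA GPU is available.
MODEL_ID = "internlm/internlm2_5-7b-chat"

def main() -> None:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # fp16 to fit a single GPU
        trust_remote_code=True,     # the repo ships custom modeling code
    ).cuda().eval()
    # The remote code exposes a chat() helper for multi-turn dialogue.
    response, history = model.chat(tokenizer, "Hello! Who are you?", history=[])
    print(response)

if __name__ == "__main__":
    main()
```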
You can run batch inference locally with the following Python code:

Or you can launch an OpenAI-compatible server with the following command:

Launch an OpenAI-compatible server with `vLLM>=0.3.2`:

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
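The batch-inference step above can be sketched with LMDeploy's `pipeline` API. This is a sketch only: it assumes the repo id `internlm/internlm2_5-7b-chat`, and the import and model load are deferred into `main()` because they require LMDeploy and a GPU.

```python
# Sketch of local batch inference with LMDeploy's pipeline API.
# Assumption: lmdeploy is installed and a GPU is available.
PROMPTS = ["Hi, please introduce yourself.", "Shanghai is"]

def main() -> None:
    from lmdeploy import pipeline  # deferred: requires lmdeploy + GPU

    pipe = pipeline("internlm/internlm2_5-7b-chat")
    # A list of prompts is processed as one batch.
    for resp in pipe(PROMPTS):
        print(resp.text)

if __name__ == "__main__":
    main()
```

For serving, the commands referenced above correspond to `lmdeploy serve api_server internlm/internlm2_5-7b-chat` and, for vLLM ≥ 0.3.2, `python -m vllm.entrypoints.openai.api_server --model internlm/internlm2_5-7b-chat --trust-remote-code` (flag names may differ across versions).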
internlm2-7b
💻Github Repo • 🤔Reporting Issues • 📜Technical Report

Introduction

The second generation of the InternLM model, InternLM2, includes models at two scales: 7B and 20B. For the convenience of users and researchers, we have open-sourced four versions at each scale:

- internlm2-base: a high-quality, highly adaptable model base, serving as an excellent starting point for deep domain adaptation.
- internlm2 (recommended): built upon internlm2-base, further pretrained on domain-specific corpora. It shows outstanding performance in evaluations while maintaining robust general language ability, making it our recommended choice for most applications.
- internlm2-chat-sft: based on the base model, with supervised human-alignment training.
- internlm2-chat (recommended): optimized for conversational interaction on top of internlm2-chat-sft through RLHF; it excels at instruction adherence, empathetic chatting, and tool invocation.

The base model of InternLM2 has the following technical features:

- Effective support for ultra-long contexts of up to 200,000 characters: the model achieves near-perfect "needle in a haystack" retrieval over 200,000-character inputs and leads open-source models on long-text tasks such as LongBench and L-Eval.
- Comprehensive performance enhancement: compared with the previous generation, it shows significant improvements in various capabilities, including reasoning, mathematics, and coding.

We have evaluated InternLM2 on several important benchmarks using the open-source evaluation tool OpenCompass. Some of the evaluation results are shown in the table below. You are welcome to visit the OpenCompass Leaderboard for more evaluation results.
| Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 |
| AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 |
| BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 |
| GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 |
| MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 |
| HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 |
| MBPP (Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |

- The evaluation results were obtained from OpenCompass; the evaluation configuration can be found in the configuration files provided by OpenCompass.
- The evaluation numbers may differ across versions of OpenCompass, so please refer to the latest OpenCompass results.

Limitations: Although we have made efforts to ensure the safety of the model during training and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

Import from Transformers

To load the InternLM2-7B model using Transformers, use the following code:

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
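The loading step above can be sketched as a minimal text-continuation example. The repo id `internlm/internlm2-7b` and the sampling parameters are assumptions; the heavy imports are deferred into `main()` because they require a GPU.

```python
# Sketch: text continuation with the InternLM2-7B base model.
# Assumptions: repo id internlm/internlm2-7b; a CUDA GPU; the sampling
# parameters are illustrative, not official defaults.
PROMPT = "A beautiful flower"

def main() -> None:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(
        "internlm/internlm2-7b", trust_remote_code=True
    )
    model = AutoModelForCausalLM.from_pretrained(
        "internlm/internlm2-7b",
        torch_dtype=torch.float16,
        trust_remote_code=True,  # the repo ships custom modeling code
    ).cuda().eval()
    inputs = tokenizer(PROMPT, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.8)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```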
internlm2-20b
internlm2-base-7b
internlm2-chat-20b
internlm-chat-7b
internlm2-base-20b
internlm3-8b-instruct
Intern-S1-FP8
internlm-xcomposer2d5-clip
internlm2-1_8b
💻Github Repo • 🤔Reporting Issues • 📜Technical Report

Introduction

InternLM2-1.8B is the 1.8 billion parameter version of the second-generation InternLM series. For the convenience of users and researchers, InternLM2-1.8B is open-sourced in three versions:

- InternLM2-1.8B: a foundation model with high quality and high adaptation flexibility, serving as a good starting point for downstream deep adaptation.
- InternLM2-Chat-1.8B-SFT: a chat model obtained by supervised fine-tuning (SFT) on InternLM2-1.8B.
- InternLM2-Chat-1.8B: further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B exhibits better instruction following, chat experience, and function calling, and is recommended for downstream applications.

InternLM2 has the following technical features:

- Effective support for ultra-long contexts of up to 200,000 characters: the model achieves near-perfect "needle in a haystack" retrieval over 200,000-character inputs and leads open-source models on long-text tasks such as LongBench and L-Eval.
- Comprehensive performance enhancement: compared with the previous generation, it shows significant improvements in various capabilities, including reasoning, mathematics, and coding.

We have evaluated InternLM2 on several important benchmarks using the open-source evaluation tool OpenCompass. Some of the evaluation results are shown in the table below. You are welcome to visit the OpenCompass Leaderboard for more evaluation results.
| Dataset\Models | InternLM2-1.8B | InternLM2-Chat-1.8B-SFT | InternLM2-7B | InternLM2-Chat-7B |
| :---: | :---: | :---: | :---: | :---: |
| MMLU | 46.9 | 47.1 | 65.8 | 63.7 |
| AGIEval | 33.4 | 38.8 | 49.9 | 47.2 |
| BBH | 37.5 | 35.2 | 65.0 | 61.2 |
| GSM8K | 31.2 | 39.7 | 70.8 | 70.7 |
| MATH | 5.6 | 11.8 | 20.2 | 23.0 |
| HumanEval | 25.0 | 32.9 | 43.3 | 59.8 |
| MBPP (Sanitized) | 22.2 | 23.2 | 51.8 | 51.4 |

- The evaluation results were obtained from OpenCompass; the evaluation configuration can be found in the configuration files provided by OpenCompass.
- The evaluation numbers may differ across versions of OpenCompass, so please refer to the latest OpenCompass results.

Limitations: Although we have made efforts to ensure the safety of the model during training and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

Import from Transformers

To load the InternLM2-1.8B model using Transformers, use the following code:

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
internlm-xcomposer2d5-7b
InternLM-XComposer2.5 excels in various text-image comprehension and composition applications, achieving GPT-4V-level capabilities with a mere 7B LLM backend. IXC-2.5 is trained with 24K interleaved image-text contexts and can seamlessly extend to 96K long contexts via RoPE extrapolation. This long-context capability allows IXC-2.5 to excel in tasks requiring extensive input and output contexts.

Import from Transformers

To load the InternLM-XComposer2.5 model using Transformers, use the following code:

We provide a simple example to show how to use InternLM-XComposer2.5 with 🤗 Transformers.

Open Source License

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
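The loading step above can be sketched as follows. This is a sketch only: the repo id `internlm/internlm-xcomposer2d5-7b`, the local image path, and the exact `chat()` signature (which is defined by the repo's remote code) are assumptions.

```python
# Sketch: load InternLM-XComposer2.5 and run one image-text query.
# Assumptions: repo id internlm/internlm-xcomposer2d5-7b; the remote
# code exposes a chat()-style helper; ./example.png is a hypothetical image.
IMAGE_PATHS = ["./example.png"]  # hypothetical local image

def main() -> None:
    import torch
    from transformers import AutoModel, AutoTokenizer

    model_id = "internlm/internlm-xcomposer2d5-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
    ).cuda().eval()
    model.tokenizer = tokenizer
    query = "Describe this image in detail."
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        response, _ = model.chat(tokenizer, query, IMAGE_PATHS, do_sample=False)
    print(response)

if __name__ == "__main__":
    main()
```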
internlm2-chat-1_8b
💻Github Repo • 🤔Reporting Issues • 📜Technical Report

Introduction

InternLM2-1.8B is the 1.8 billion parameter version of the second-generation InternLM series. For the convenience of users and researchers, InternLM2-1.8B is open-sourced in three versions:

- InternLM2-1.8B: a foundation model with high quality and high adaptation flexibility, serving as a good starting point for downstream deep adaptation.
- InternLM2-Chat-1.8B-SFT: a chat model obtained by supervised fine-tuning (SFT) on InternLM2-1.8B.
- InternLM2-Chat-1.8B: further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B exhibits better instruction following, chat experience, and function calling, and is recommended for downstream applications.

InternLM2 has the following technical features:

- Effective support for ultra-long contexts of up to 200,000 characters: the model achieves near-perfect "needle in a haystack" retrieval over 200,000-character inputs and leads open-source models on long-text tasks such as LongBench and L-Eval.
- Comprehensive performance enhancement: compared with the previous generation, it shows significant improvements in various capabilities, including reasoning, mathematics, and coding.

We have evaluated InternLM2 on several important benchmarks using the open-source evaluation tool OpenCompass. Some of the evaluation results are shown in the table below. You are welcome to visit the OpenCompass Leaderboard for more evaluation results.
| Dataset\Models | InternLM2-1.8B | InternLM2-Chat-1.8B-SFT | InternLM2-7B | InternLM2-Chat-7B |
| :---: | :---: | :---: | :---: | :---: |
| MMLU | 46.9 | 47.1 | 65.8 | 63.7 |
| AGIEval | 33.4 | 38.8 | 49.9 | 47.2 |
| BBH | 37.5 | 35.2 | 65.0 | 61.2 |
| GSM8K | 31.2 | 39.7 | 70.8 | 70.7 |
| MATH | 5.6 | 11.8 | 20.2 | 23.0 |
| HumanEval | 25.0 | 32.9 | 43.3 | 59.8 |
| MBPP (Sanitized) | 22.2 | 23.2 | 51.8 | 51.4 |

- The evaluation results were obtained from OpenCompass; the evaluation configuration can be found in the configuration files provided by OpenCompass.
- The evaluation numbers may differ across versions of OpenCompass, so please refer to the latest OpenCompass results.

Limitations: Although we have made efforts to ensure the safety of the model during training and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

To load the InternLM2 1.8B Chat model using Transformers, use the following code:

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.

You can run batch inference locally with the following Python code:

Or you can launch an OpenAI-compatible server with the following command:

Launch an OpenAI-compatible server with `vLLM>=0.3.2`:

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
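Once a server like the ones mentioned above is running, it can be queried with any OpenAI-compatible client. A sketch, in which the local port (23333) and the served model name are assumptions that must match however the server was launched:

```python
# Sketch: querying a locally served InternLM2-Chat-1.8B through an
# OpenAI-compatible API. Port and served model name are assumptions.
def build_messages(user_text: str) -> list:
    """Build a minimal chat request body."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_text},
    ]

def main() -> None:
    from openai import OpenAI  # deferred: requires the openai package

    client = OpenAI(base_url="http://0.0.0.0:23333/v1", api_key="none")
    completion = client.chat.completions.create(
        model="internlm2-chat-1_8b",  # served model name: an assumption
        messages=build_messages("Hello! Who are you?"),
    )
    print(completion.choices[0].message.content)

if __name__ == "__main__":
    main()
```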
internlm-xcomposer2d5-7b-reward
internlm2-chat-7b-sft
Intern-S1-mini
internlm2_5-7b
internlm-xcomposer2-7b
internlm-20b
internlm-xcomposer2-vl-7b
InternLM-XComposer2 is a vision-language large model (VLLM) based on InternLM2 for advanced text-image comprehension and composition. We release the InternLM-XComposer2 series in two versions:

- InternLM-XComposer2-VL: the pretrained VLLM with InternLM2 as the LLM initialization, achieving strong performance on various multimodal benchmarks.
- InternLM-XComposer2: the finetuned VLLM for free-form interleaved text-image composition.

Import from Transformers

To load the InternLM-XComposer2-VL-7B model using Transformers, use the following code:

Quickstart

We provide a simple example to show how to use InternLM-XComposer with 🤗 Transformers.

Open Source License

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
internlm2_5-20b-chat-4bit-awq
internlm-xcomposer2-vl-7b-4bit
internlm2-chat-20b-4bits
internlm-xcomposer2-4khd-7b
InternLM-XComposer2-4KHD is a general vision-language large model (VLLM) based on InternLM2, with the capability of 4K-resolution image understanding.

Import from Transformers

To load the InternLM-XComposer2-4KHD model using Transformers, use the following code:

Quickstart

We provide a simple example to show how to use InternLM-XComposer with 🤗 Transformers.

Open Source License

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
internlm-xcomposer-7b
Intern-S1-mini-FP8
CapRL-3B
📖 Paper | 🏠 Github | 🤗 CapRL-3B Model | 🤗 CapRL-InternVL3.5-8B Model | 🤗 CapRL-2M Dataset
🤗 CapRL Collection | 🤗 Daily Paper | 🤗 CapRL-3B-GGUF | 🤗 CapRL-3B-i1-GGUF

Now you can try out CapRL-3B with your own images🎨! ➡️ 🌈CapRL Space

When selecting between the available CapRL models, it is essential to consider the trade-off between performance and computational cost. This guide will help you choose the most suitable model for your needs:

| Model | Parameters | Strength |
| - | - | - |
| 🤗CapRL-3B | 3B | Speed, Efficiency |
| 🤗CapRL-InternVL3.5-8B | 8B | High Performance, Advanced Captioning Ability |

📢 News

We are working on even stronger base models and upgrading our training recipe — stay tuned!

- 🔥 [10/15/2025] The total downloads of CapRL-related models and the dataset reached 6,000 within just 20 days!
- 🚀 [10/15/2025] We are excited to announce the release of CapRL-InternVL3.5-8B, whose image captioning capability outperforms Qwen2.5-VL-72B!
- 🚀 [10/15/2025] Thanks to mradermacher for the valuable contribution! CapRL-3B-GGUF is the static-quants version, and CapRL-3B-i1-GGUF is the weighted/imatrix-quants version.
- 🚀 [10/15/2025] We release the QA curation code.
- 🚀 [09/25/2025] We release the CapRL repository, the CapRL-3B model, evaluation code, and the dataset.

Introduction

We are excited to introduce CapRL-3B, a lightweight 3B image captioner that achieves perception capabilities comparable to Qwen2.5-VL-72B. This is the first study to apply Reinforcement Learning with Verifiable Rewards to the open-ended and subjective image captioning task. Unlike traditional Supervised Fine-Tuning, which can lead to models memorizing a limited set of annotated captions, our method allows the model to explore and generate a broader range of creative and general descriptions.

CapRL is a new training paradigm featuring a decoupled two-stage pipeline. The initial stage uses LVLMs to generate rich and accurate captions.
Subsequently, the second stage evaluates caption quality by having a vision-free LLM perform the QA task from the caption alone. We also created a dedicated QA curation pipeline to ensure the quality of the questions and answers used in the second stage. By employing the CapRL training framework, initializing from the Qwen2.5-VL-3B model, and training on a carefully filtered 75K QA dataset, we obtained a highly capable captioner, CapRL-3B.

Key Features

- Remarkable visual understanding of charts, infographics, and documents: CapRL-3B achieves perception accuracy and visual-information coverage comparable to Qwen2.5-VL-72B.
- Well-organized output: the outputs of CapRL-3B are well-structured, making them clear and easy to understand.
- Detailed descriptions of natural images: the outputs of CapRL-3B cover the valid visual information while containing fewer hallucinations.

Usage

If you want to use CapRL-3B for captioning, you can follow exactly the same inference approach as the Qwen2.5-VL series. Run the command below to start an OpenAI-compatible API service:

Then you can use the chat API as below (see the OpenAI API protocol document for more details):
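The client side of that flow can be sketched as follows. Everything environment-specific here is an assumption: the port, the served model name, and the image URL are hypothetical placeholders, and the message layout follows the OpenAI vision-chat format used by the Qwen2.5-VL series.

```python
# Sketch: captioning one image through an OpenAI-compatible endpoint that
# serves CapRL-3B (e.g. launched with vLLM). Port, served model name, and
# the image URL are hypothetical placeholders.
def build_caption_request(image_url: str,
                          prompt: str = "Describe the image in detail.") -> list:
    # Multimodal message: one image part plus one text part.
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": prompt},
            ],
        }
    ]

def main() -> None:
    from openai import OpenAI  # deferred: requires the openai package

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
    resp = client.chat.completions.create(
        model="CapRL-3B",  # served model name: an assumption
        messages=build_caption_request("https://example.com/chart.png"),
    )
    print(resp.choices[0].message.content)

if __name__ == "__main__":
    main()
```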
JanusCoder-14B
💻Github Repo • 🤗Model Collections • 📜Technical Report

We introduce JanusCoder and JanusCoderV, a suite of open-source foundational models designed to establish a unified visual-programmatic interface for code intelligence. The suite is built upon open-source language models (such as Qwen3-8B and Qwen3-14B) and multimodal models (such as Qwen2.5-VL and InternVL3.5-8B). The JanusCoder series is trained on JANUSCODE-800K, the largest multimodal code corpus to date, generated by an innovative synthesis toolkit and covering everything from standard charts to complex interactive Web UIs and code-driven animations. This enables the models to uniformly handle diverse visual-programmatic tasks, such as generating code from textual instructions, visual inputs, or a combination of both, rather than relying on specialized models for isolated tasks. JanusCoder excels at flexible content generation (such as data visualizations and interactive front-ends) as well as precise, program-driven editing of visual effects and complex animation construction.

| Model Name | Description | Download |
| --- | --- | --- |
| JanusCoder-8B | 8B text model based on Qwen3-8B. | 🤗 Model |
| 👉 JanusCoder-14B | 14B text model based on Qwen3-14B. | 🤗 Model |
| JanusCoderV-7B | 7B multimodal model based on Qwen2.5-VL-7B. | 🤗 Model |
| JanusCoderV-8B | 8B multimodal model based on InternVL3.5-8B. | 🤗 Model |

We evaluate JanusCoder on benchmarks spanning code intelligence tasks across multiple programming languages:

| Model | JanusCoder-14B | Qwen3-14B | Qwen2.5-Coder-32B-Instruct | LLaMA3-8B-Instruct | GPT-4o |
| --- | --- | --- | --- | --- | --- |
| PandasPlotBench (Task) | 86 | 78 | 82 | 69 | 85 |
| ArtifactsBench | 41.1 | 36.5 | 35.5 | 36.5 | 37.9 |
| DTVBench (Manim) | 8.41 | 6.63 | 9.61 | 4.92 | 10.60 |
| DTVBench (Wolfram) | 5.97 | 5.08 | 4.98 | 3.15 | 5.97 |

The following demo code illustrates how to generate text with JanusCoder-14B.
> Please use transformers >= 4.55.0 to ensure the model works normally.

Citation

🫶 If you are interested in our work or find the repository / checkpoints / benchmark / data helpful, please consider using the following citation format when referencing our papers:
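A minimal generation sketch consistent with the note above. The 🤗 repo id `internlm/JanusCoder-14B` and the prompt are assumptions; since the model is based on Qwen3-14B, the standard Transformers chat-template flow is assumed to apply.

```python
# Sketch: text generation with JanusCoder-14B via Transformers (>= 4.55.0).
# The repo id and prompt are illustrative assumptions.
MESSAGES = [
    {"role": "user",
     "content": "Write matplotlib code that draws a bar chart of [3, 1, 4]."},
]

def main() -> None:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "internlm/JanusCoder-14B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    text = tokenizer.apply_chat_template(
        MESSAGES, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))

if __name__ == "__main__":
    main()
```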
internlm-xcomposer2d5-7b-4bit
CapRL-InternVL3.5-8B
internlm2-math-plus-7b
internlm2_5-1_8b-chat
💻Github Repo • 🤔Reporting Issues • 📜Technical Report

InternLM2.5 has open-sourced a 1.8 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:

- Outstanding reasoning capability: state-of-the-art performance on math reasoning, surpassing models such as MiniCPM-2 and Qwen2-1.5B.
- Stronger tool use: InternLM2.5 supports gathering information from more than 100 web pages; the corresponding implementation has been released in MindSearch. InternLM2.5 has better tool-use capabilities in instruction following, tool selection, and reflection. See examples.

We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool OpenCompass. The evaluation covered five dimensions of capability: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results; you can visit the OpenCompass leaderboard for more.

| Benchmark | InternLM2.5-1.8B-Chat | MiniCPM-2 | Qwen2-1.5B-Instruct |
| ------------------ | --------------------- | --------- | ------------------- |
| MMLU (5-shot) | 50.7 | 54.2 | 55.7 |
| CMMLU (5-shot) | 62.2 | 50.6 | 65.2 |
| BBH (3-shot CoT) | 41.9 | 41.5 | 36.5 |
| MATH (0-shot CoT) | 40.2 | 15.5 | 21.4 |
| GPQA (0-shot) | 27.8 | 23.7 | 27.3 |

- The evaluation results were obtained from OpenCompass; the evaluation configuration can be found in the configuration files provided by OpenCompass.
- The evaluation numbers may differ across versions of OpenCompass, so please refer to the latest OpenCompass results.

Limitations: Although we have made efforts to ensure the safety of the model during training and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm.
For example, generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

To load the InternLM2.5 1.8B Chat model using Transformers, use the following code:

internlm/internlm2_5-1_8b-chat-gguf offers `internlm2_5-1_8b-chat` models in GGUF format, in both half precision and various low-bit quantized versions, including `q5_0`, `q5_k_m`, `q6_k`, and `q8_0`.

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.

You can run batch inference locally with the following Python code:

Or you can launch an OpenAI-compatible server with the following command:

Launch an OpenAI-compatible server with `vLLM>=0.3.2`:

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
internlm-chat-20b
internlm2_5-7b-chat-gguf
internlm-7b
SWE-Fixer-Retriever-7B
SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution

SWE-Fixer is a simple yet effective solution for addressing real-world GitHub issues by training open-source LLMs. It features a streamlined retrieve-then-edit pipeline with two core components: a code file retriever and a code editor. This repo holds the SWE-Fixer-Retriever-7B model, which is fine-tuned from Qwen2.5-7B. For more information, please visit our project page.
POLAR-7B
POLAR represents a significant breakthrough in scalar-based reward models achieved through large-scale pre-training. It leverages the innovative POLicy DiscriminAtive LeaRning (POLAR) paradigm, a scalable, high-level optimization objective, to effectively discriminate between policies using large-scale synthetic corpora. Following pre-training, POLAR RMs are fine-tuned with minimal preference data, rapidly aligning with human preferences. Key features of POLAR include:

- Innovative Pre-training Paradigm: POLAR trains a reward model to discern identical policies and discriminate different ones. Unlike traditional reward modeling methods relying on absolute preferences, POLAR captures the relative difference between two policies, which is a scalable, high-level optimization objective suitable for modeling generic ranking relationships.
- Tailored for Reinforcement Fine-tuning: POLAR assigns rewards to LLM trajectories based on given references, aligning well with the Reinforcement Fine-tuning (RFT) framework. POLAR provides a promising solution for applying RFT in generic scenarios.
- Superior Performance and Generalization: POLAR achieves state-of-the-art results on downstream reinforcement learning tasks, consistently delivering accurate and reliable reward signals that generalize effectively to unseen scenarios and significantly reduce reward hacking.
- Easy to Customize: Pre-trained checkpoints of POLAR are available, enabling researchers to conveniently fine-tune the RM for various customized scenarios, facilitating straightforward adaptation to specific applications and experimental requirements.

POLAR-7B-Base refers to the pre-trained-only checkpoint, ideal for customized fine-tuning according to specific preferences.
The "ready-to-use" checkpoint POLAR-7B has already been fine-tuned on general preference data, making it suitable for immediate use in most scenarios. We conducted a comprehensive evaluation of POLAR-7B via the Proximal Policy Optimization (PPO) algorithm, evaluating the downstream RL performance of four different policy models using OpenCompass. More details are available in our Paper.

You can employ the latest xtuner to fine-tune and use POLAR. Xtuner is an efficient, flexible, and full-featured toolkit for fine-tuning LLMs.

- It is recommended to build a Python 3.10 virtual environment using conda.

We support reward inference through lmdeploy, sglang, and vllm. We recommend setting up a virtual environment with conda when using these inference engines to prevent potential dependency conflicts. Unlike traditional reward models, POLAR requires an additional reference trajectory as a demonstration and evaluates candidate trajectories by measuring their consistency with the provided reference.

Reward request

To load the POLAR model using transformers, use the following code to get rewards:

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.

Unlike traditional reward models, POLAR requires an additional reference trajectory as a demonstration during fine-tuning, along with a chosen trajectory and a rejected trajectory. You can construct your fine-tuning data in a `train.jsonl` file, formatted as follows:

- Step 0: Prepare the config. We provide exemplar ready-to-use configs here. If the provided configs cannot meet your requirements, please copy a provided config and modify it following the xtuner guideline. For more details on reward model training settings, please see the xtuner reward model guideline.

For example, you can start the fine-tuning of POLAR-7B-Base by

Here, `--deepspeed` means using DeepSpeed to optimize the training. Xtuner comes with several integrated strategies including ZeRO-1, ZeRO-2, and ZeRO-3.
If you wish to disable this feature, simply remove this argument.

- Step 2: Convert the saved PTH model (if using DeepSpeed, it will be a directory) to a Hugging Face model, by

Code and model weights are licensed under Apache-2.0.
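The `train.jsonl` format described in this card was elided. The sketch below shows one plausible shape; the field names (`prompt`, `reference`, `chosen`, `rejected`) are an assumption inferred from the card's description of reference, chosen, and rejected trajectories, so check the xtuner reward-model guideline for the authoritative schema.

```python
# Hypothetical sketch of POLAR-style fine-tuning data in JSON Lines form.
# Field names are assumptions, not the confirmed xtuner schema.
import json

records = [
    {
        "prompt": "What is 1 + 1?",
        "reference": "1 + 1 equals 2.",   # demonstration trajectory
        "chosen": "The answer is 2.",     # consistent with the reference
        "rejected": "The answer is 3.",   # inconsistent with the reference
    }
]

# One JSON object per line, as required by the .jsonl convention.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Read it back to confirm the round trip.
with open("train.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
```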
OREAL-7B-SFT
internlm2_5-7b-chat-1m-gguf
internlm-xcomposer-vl-7b
internlm3-8b-instruct-gguf
JanusCoderV 8B
💻Github Repo • 🤗Model Collections • 📜Technical Report

We introduce JanusCoder and JanusCoderV, a suite of open-source foundational models designed to establish a unified visual-programmatic interface for code intelligence. This model suite is built upon open-source language models (such as Qwen3-8B and 14B) and multimodal models (such as Qwen2.5-VL and InternVL3.5-8B). The JanusCoder series is trained on JANUSCODE-800K, the largest multimodal code corpus to date, generated by an innovative synthesis toolkit and covering everything from standard charts to complex interactive Web UIs and code-driven animations. This enables the models to uniformly handle diverse visual-programmatic tasks, such as generating code from textual instructions, visual inputs, or a combination of both, rather than building specialized models for isolated tasks. JanusCoder excels at flexible content generation (like data visualizations and interactive front-ends) as well as precise, program-driven editing of visual effects and complex animation construction.

| Model Name | Description | Download |
| --- | --- | --- |
| JanusCoder-8B | 8B text model based on Qwen3-8B. | 🤗 Model |
| JanusCoder-14B | 14B text model based on Qwen3-14B. | 🤗 Model |
| JanusCoderV-7B | 7B multimodal model based on Qwen2.5-VL-7B. | 🤗 Model |
| 👉 JanusCoderV-8B | 8B multimodal model based on InternVL3.5-8B. | 🤗 Model |

We evaluate the JanusCoderV model on various benchmarks that span multimodal code intelligence tasks on multiple PLs:

| Model | JanusCoderV-8B | Qwen2.5VL-7B-Instruct | InternVL3-8B | InternVL3.5-8B | MiniCPM-V-2-6 | Llama3.2-11B-Vision-Instruct | GPT-4o |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ChartMimic (Customized) | 74.20 | 58.69 | 60.04 | 59.55 | 48.18 | 39.63 | 67.42 |
| DesignBench (Gen) | 68.86 | 72.73 | 69.34 | 71.73 | 66.25 | 62.24 | 76.83 |
| DesignBench (Edit) | 8.63 | 6.85 | 7.76 | 8.63 | 4.56 | 6.61 | 9.23 |
| WebCode2M | 18.28 | 12.83 | 12.40 | 11.95 | 9.73 | 6.57 | 13.00 |
| InteractScience (Func.) | 17.60 | 8.40 | 8.93 | 11.47 | 0.13 | 6.67 | 27.20 |
| InteractScience (Visual) | 33.32 | 19.83 | 53.35 | 24.17 | 7.70 | 13.24 | 46.01 |

The following provides demo code illustrating how to generate text using JanusCoderV-8B.

> Please use transformers >= 4.55.0 to ensure the model works normally.

Citation

🫶 If you are interested in our work or find the repository / checkpoints / benchmark / data helpful, please consider using the following citation format when referencing our papers:
internlm2_5-20b-chat-gguf
internlm2_5-20b-chat
💻Github Repo • 🤔Reporting Issues • 📜Technical Report

InternLM2.5 has open-sourced a 20-billion-parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:

- Outstanding reasoning capability: State-of-the-art performance on math reasoning, surpassing models like Llama3 and Gemma2-27B.
- Stronger tool use: InternLM2.5 supports gathering information from more than 100 web pages; the corresponding implementation has been released in MindSearch. InternLM2.5 has better tool-utilization capabilities in instruction following, tool selection, and reflection. See examples.

We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool OpenCompass. The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results; you can visit the OpenCompass leaderboard for more evaluation results.

| Benchmark | InternLM2.5-20B-Chat | Gemma2-27B-IT |
| ------------------ | -------------------- | ------------- |
| MMLU (5-shot) | 73.5 | 75.0 |
| CMMLU (5-shot) | 79.7 | 63.3 |
| BBH (3-shot CoT) | 76.3 | 71.5 |
| MATH (0-shot CoT) | 64.7 | 50.1 |
| GPQA (0-shot) | 33.3 | 29.3 |

- The evaluation results were obtained from OpenCompass, and the evaluation configuration can be found in the configuration files provided by OpenCompass.
- The evaluation data may have numerical differences due to the version iteration of OpenCompass, so please refer to the latest evaluation results of OpenCompass.

Limitations: Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm.
For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

To load the InternLM2.5 20B Chat model using Transformers, use the following code:

internlm/internlm2_5-20b-chat-gguf offers `internlm2_5-20b-chat` models in GGUF format in both half precision and various low-bit quantized versions, including `q5_0`, `q5_k_m`, `q6_k`, and `q8_0`.

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams. You can run batch inference locally with the following Python code: Or you can launch an OpenAI-compatible server with the following command:

Launch an OpenAI-compatible server with `vLLM>=0.3.2`:

The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English / Chinese). For other questions or collaborations, please contact .
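The batch-inference snippet referred to above was elided from this card. A minimal sketch using LMDeploy's standard `pipeline` interface (kept inside a function because calling it downloads the 20B weights):

```python
# Sketch of local batch inference with LMDeploy's pipeline API.
# Model ID is the card's repo; the call is not executed at import time.
def batch_infer(prompts):
    from lmdeploy import pipeline

    pipe = pipeline("internlm/internlm2_5-20b-chat")
    # pipeline() accepts a batch of prompts and returns one Response per prompt.
    responses = pipe(prompts)
    return [r.text for r in responses]
```

An OpenAI-compatible server can likewise be launched from the shell with `lmdeploy serve api_server internlm/internlm2_5-20b-chat`.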
InternLM2.5, the 2.5th generation of the InternLM (书生·浦语) series, has open-sourced a 20-billion-parameter base model and a chat model (InternLM2.5-20B-Chat) tailored for practical scenarios. The model has the following characteristics:

- Outstanding reasoning capability: state-of-the-art accuracy in math reasoning among models of the same size, surpassing Llama3 and Gemma2-27B.
- Overall upgrade of tool use: InternLM2.5 supports gathering effective information from more than 100 web pages for analysis and reasoning; the corresponding implementation has been open-sourced in MindSearch. InternLM2.5 has stronger and more generalizable instruction understanding, tool selection, and result reflection, so the new model can more reliably support the construction of complex agents and effective multi-turn tool calling to complete relatively complex tasks. See more examples.

We conducted a comprehensive evaluation of InternLM with the open-source evaluation tool OpenCompass across five capability dimensions: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Part of the evaluation results are shown in the table below; visit the OpenCompass leaderboard for more results.

| Benchmark | InternLM2.5-20B-Chat | Gemma2-27B-IT |
| ------------------ | -------------------- | ------------- |
| MMLU (5-shot) | 73.5 | 75.0 |
| CMMLU (5-shot) | 79.7 | 63.3 |
| BBH (3-shot CoT) | 76.3 | 71.5 |
| MATH (0-shot CoT) | 64.7 | 50.1 |
| GPQA (0-shot) | 33.3 | 29.3 |

- The evaluation results were obtained from OpenCompass; test details can be found in the configuration files provided by OpenCompass.
- The evaluation data may have numerical differences due to version iteration of OpenCompass, so please refer to the latest evaluation results of OpenCompass.

Limitations: Although we have paid close attention to the safety of the model during training and have tried to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce various unexpected outputs due to its size and probabilistic generation paradigm; for example, responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. This project is not responsible for any consequences resulting from the dissemination of harmful information.

LMDeploy, jointly developed by the MMDeploy and MMRazor teams, is a full-featured toolkit for compressing, deploying, and serving LLMs. The code in this repository is open-sourced under the Apache-2.0 license. Model weights are fully open for academic research, and free commercial use is available upon application (application form). For other questions or collaborations, please contact .
internlm2_5-7b-chat-1m
internlm2_5-1_8b
JanusCoderV 7B
💻Github Repo • 🤗Model Collections • 📜Technical Report

We introduce JanusCoder and JanusCoderV, a suite of open-source foundational models designed to establish a unified visual-programmatic interface for code intelligence. This model suite is built upon open-source language models (such as Qwen3-8B and 14B) and multimodal models (such as Qwen2.5-VL and InternVL3.5-8B). The JanusCoder series is trained on JANUSCODE-800K, the largest multimodal code corpus to date, generated by an innovative synthesis toolkit and covering everything from standard charts to complex interactive Web UIs and code-driven animations. This enables the models to uniformly handle diverse visual-programmatic tasks, such as generating code from textual instructions, visual inputs, or a combination of both, rather than building specialized models for isolated tasks. JanusCoder excels at flexible content generation (like data visualizations and interactive front-ends) as well as precise, program-driven editing of visual effects and complex animation construction.

| Model Name | Description | Download |
| --- | --- | --- |
| JanusCoder-8B | 8B text model based on Qwen3-8B. | 🤗 Model |
| JanusCoder-14B | 14B text model based on Qwen3-14B. | 🤗 Model |
| 👉 JanusCoderV-7B | 7B multimodal model based on Qwen2.5-VL-7B. | 🤗 Model |
| JanusCoderV-8B | 8B multimodal model based on InternVL3.5-8B. | 🤗 Model |

We evaluate the JanusCoderV model on various benchmarks that span multimodal code intelligence tasks on multiple PLs:

| Model | JanusCoderV-7B | Qwen2.5VL-7B-Instruct | InternVL3-8B | InternVL3.5-8B | MiniCPM-V-2-6 | Llama3.2-11B-Vision-Instruct | GPT-4o |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ChartMimic (Customized) | 72.77 | 58.69 | 60.04 | 59.55 | 48.18 | 39.63 | 67.42 |
| DesignBench (Gen) | 73.31 | 72.73 | 69.34 | 71.73 | 66.25 | 62.24 | 76.83 |
| DesignBench (Edit) | 8.79 | 6.85 | 7.76 | 8.63 | 4.56 | 6.61 | 9.23 |
| WebCode2M | 26.21 | 12.83 | 12.40 | 11.95 | 9.73 | 6.57 | 13.00 |
| InteractScience (Func.) | 17.73 | 8.40 | 8.93 | 11.47 | 0.13 | 6.67 | 27.20 |
| InteractScience (Visual) | 27.67 | 19.83 | 53.35 | 24.17 | 7.70 | 13.24 | 46.01 |

The following provides demo code illustrating how to generate text using JanusCoderV-7B.

> Please use transformers >= 4.55.0 to ensure the model works normally.

Citation

🫶 If you are interested in our work or find the repository / checkpoints / benchmark / data helpful, please consider using the following citation format when referencing our papers:
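The JanusCoderV-7B demo code mentioned above was elided from this card. Since the model is based on Qwen2.5-VL-7B, a minimal sketch following the usual Qwen2.5-VL inference pattern (the repo name and model classes are assumptions; verify against the actual card):

```python
# Hypothetical sketch: image-conditioned code generation with JanusCoderV-7B,
# following the Qwen2.5-VL inference pattern it is built on. Not executed at
# import time because it downloads the weights.
def generate_from_image(image_path: str, instruction: str) -> str:
    from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

    model_id = "internlm/JanusCoderV-7B"  # assumed repo name
    processor = AutoProcessor.from_pretrained(model_id)
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": instruction},
        ],
    }]
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=1024)
    # Strip the prompt tokens before decoding.
    return processor.batch_decode(
        out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
```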
internlm2-math-7b
Intern-S1-mini-GGUF
internlm2-math-plus-20b
CapRL-Qwen3VL-2B-GGUF
internlm2-math-plus-1_8b
internlm2-math-base-7b
internlm2_5-step-prover
internlm2-step-prover
CapRL-Qwen3VL-4B-GGUF
JanusCoder 8B
💻Github Repo • 🤗Model Collections • 📜Technical Report

We introduce JanusCoder and JanusCoderV, a suite of open-source foundational models designed to establish a unified visual-programmatic interface for code intelligence. This model suite is built upon open-source language models (such as Qwen3-8B and 14B) and multimodal models (such as Qwen2.5-VL and InternVL3.5-8B). The JanusCoder series is trained on JANUSCODE-800K, the largest multimodal code corpus to date, generated by an innovative synthesis toolkit and covering everything from standard charts to complex interactive Web UIs and code-driven animations. This enables the models to uniformly handle diverse visual-programmatic tasks, such as generating code from textual instructions, visual inputs, or a combination of both, rather than building specialized models for isolated tasks. JanusCoder excels at flexible content generation (like data visualizations and interactive front-ends) as well as precise, program-driven editing of visual effects and complex animation construction.

| Model Name | Description | Download |
| --- | --- | --- |
| 👉 JanusCoder-8B | 8B text model based on Qwen3-8B. | 🤗 Model |
| JanusCoder-14B | 14B text model based on Qwen3-14B. | 🤗 Model |
| JanusCoderV-7B | 7B multimodal model based on Qwen2.5-VL-7B. | 🤗 Model |
| JanusCoderV-8B | 8B multimodal model based on InternVL3.5-8B. | 🤗 Model |

We evaluate the JanusCoder model on various benchmarks that span code intelligence tasks on multiple PLs:

| Model | JanusCoder-8B | Qwen3-8B | Qwen2.5-Coder-7B-Instruct | LLaMA3-8B-Instruct | GPT-4o |
| --- | --- | --- | --- | --- | --- |
| PandasPlotBench (Task) | 80 | 74 | 76 | 69 | 85 |
| ArtifactsBench | 39.6 | 36.5 | 26.0 | 36.5 | 37.9 |
| DTVBench (Manim) | 9.70 | 6.20 | 8.56 | 4.92 | 10.60 |
| DTVBench (Wolfram) | 6.07 | 5.18 | 4.04 | 3.15 | 5.97 |

The following provides demo code illustrating how to generate text using JanusCoder-8B.
> Please use transformers >= 4.55.0 to ensure the model works normally.

Citation

🫶 If you are interested in our work or find the repository / checkpoints / benchmark / data helpful, please consider using the following citation format when referencing our papers:
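The JanusCoder-8B demo code referenced in this card was elided. A minimal sketch of text-only generation with the standard chat-template flow of its Qwen3 base (the repo name is an assumption; the function is not executed here because it downloads the weights):

```python
# Hypothetical sketch: text generation with JanusCoder-8B via transformers.
def generate(prompt: str) -> str:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "internlm/JanusCoder-8B"  # assumed repo name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    # Render the chat template, then tokenize and generate.
    text = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False, add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```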
internlm2-chat-1_8b-sft
internlm2_5-1_8b-chat-gguf
Agent-FLAN-7b
internlm2-math-20b
internlm2-math-base-20b
internlm2_5-20b
internlm2-chat-20b-sft
internlm2-wqx-20b
CapRL Eval 3B
CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement Learning

📖 Paper | 🏠 Github | 🤗 CapRL-3B Model | 🤗 CapRL-InternVL3.5-8B Model | 🤗 CapRL-2M Dataset

🤗 CapRL Collection | 🤗 Daily Paper | 🤗 CapRL-3B-GGUF | 🤗 CapRL-3B-i1-GGUF

CapRL-Eval-3B is the model used for answering questions based on captions; it is a fine-tuned version of Qwen2.5-VL-3B. When dealing with tasks such as ChartQA (not multiple-choice questions), it provides more stable output formatting.

Introduction

We are excited to introduce CapRL-3B, a lightweight 3B image captioner that achieves perception capabilities comparable to Qwen2.5-VL-72B. This is the first study of applying Reinforcement Learning with Verifiable Rewards to the open-ended and subjective image-captioning task. Unlike traditional Supervised Fine-Tuning, which can lead to models memorizing a limited set of annotated captions, our method allows the model to explore and generate a broader range of creative and general descriptions. CapRL is a new training paradigm featuring a decoupled two-stage pipeline. The initial stage uses LVLMs to generate rich and accurate captions. The second stage then evaluates caption quality by having a language-only LLM perform the QA task from the caption alone, so QA accuracy directly measures how much visual information the caption carries. We also created a dedicated QA curation pipeline to ensure the quality of the questions and answers used in this stage. By employing the CapRL training framework, initializing with the Qwen2.5-VL-3B model, and using a carefully filtered 75K QA dataset as the training set, we obtained a highly capable captioner, CapRL-3B.

Key Features

- Remarkable visual understanding for charts, infographics, and documents: CapRL-3B achieves perception accuracy and visual-information coverage comparable to Qwen2.5-VL-72B.
- Well-organized output: The outputs of CapRL-3B are relatively well structured, making them clear and easy to understand.
- Detailed description for natural images: The outputs of CapRL-3B cover the valid visual information while containing fewer hallucinations.

Usage

If you want to use CapRL-3B for captioning, you can directly follow the same inference approach as the Qwen2.5-VL series. Run the command below to start an OpenAI-compatible API service: Then you can use the chat API as below (see the OpenAI API protocol document for more details):
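The chat-API call mentioned above follows the standard OpenAI chat-completions protocol. A minimal sketch of the request body for a captioning call; the served model name, prompt text, and image URL are placeholders:

```python
# Build an OpenAI-compatible chat-completions request for image captioning.
# Model name and image URL are placeholders, not values from the card.
def build_caption_request(image_url: str, model: str = "CapRL-3B") -> dict:
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": "Describe this image in detail."},
            ],
        }],
        "max_tokens": 1024,
    }

payload = build_caption_request("https://example.com/chart.png")
```

The resulting dict can be POSTed to the server's `/v1/chat/completions` endpoint with any HTTP client or passed through the official OpenAI Python client.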
Spark VL 7B
⭐ If you find our code or model helpful, please consider giving us a star; your support means a lot!

🏠 Github repository 📖 Daily Paper 🤗 models 📖 Paper

We propose SPARK, a unified framework that integrates policy and reward into a single model for joint and synchronous training. SPARK can automatically derive reward and reflection data from verifiable rewards, enabling self-learning and self-evolution. Furthermore, we instantiate this framework on multiple backbones, training SPARK-VL-7B, SPARK-7B, and SPARK-VL-32B. This repo holds SPARK-VL-7B.

📢 News

- 🚀 [09/29/2025] We release our 🤗 datasets.
- 🚀 [09/29/2025] We release Spark's 📖 Paper.
- 🚀 [09/29/2025] We upload our evaluation code and 🤗 models.
- 🚀 [09/29/2025] We release the Spark 🏠 Github repository.

💡 Highlights

- 🔥 Synergistic Policy–Reward Co-Evolving (SPARK): We introduce SPARK, a unified reinforcement fine-tuning framework that jointly optimizes policy and reward within a single model through on-policy co-evolution.
- 🔥 Recycling Rollouts: Unlike conventional RL pipelines that discard rollouts after policy updates, SPARK recycles RLVR rollouts into pointwise, pairwise, and reflection objectives, enabling the model itself to act as both a strong policy and a generative reward model.
- 🔥 Co-Evolving Mechanism: Improved reward accuracy provides better gradients for policy learning, while stronger reasoning further refines reward judgment, forming a positive feedback loop that enhances reasoning, judgment, and reflection in synergy.
- 🔥 Efficient and Practical: SPARK requires no human preference data, teacher models, or external reward models, making it significantly more data- and compute-efficient than traditional RM-based RL pipelines.

Our model is based on Qwen2.5-VL-7B-Instruct. You can use the same code as the Qwen2.5-VL-7B-Instruct model for inference, referring to 🤗 Huggingface. We recommend using vLLM for faster inference speed.
Using vLLM leads to significant speed improvements in dataset evaluation.

Spark Training

After downloading the dataset, you can start training using the following example bash script. Our bash scripts are in
You need to modify the dataset paths and model paths to your own locations.

Evaluation

The integrated multimodal mathematics dataset can be downloaded from 🤗 datasets and evaluated using the scripts provided in the `Evaluation` folder. The evaluation results will be stored, and accuracy can subsequently be computed with the `calculateacc.py` file.

📄 License

Usage and License Notices: The data and code are intended and licensed for research use only. License: Attribution-NonCommercial 4.0 International. It should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use

Acknowledgement

We sincerely thank the projects lmm-r1 and OpenRLHF for providing their open-source resources.
internlm2-7b-reward
OREAL-32B
OREAL-7B
internlm-xcomposer2-vl-1_8b
internlm2-wqx-vl-clip
internlm2-20b-reward
internlm2-chat-7b-4bits
SWE-Fixer-Editor-72B
SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution

SWE-Fixer is a simple yet effective solution for addressing real-world GitHub issues by training open-source LLMs. It features a streamlined retrieve-then-edit pipeline with two core components: a code file retriever and a code editor. This repo holds the SWE-Fixer-Editor-72B model, which is fine-tuned from Qwen2.5-72B. For more information, please visit our project page.
OREAL-DeepSeek-R1-Distill-Qwen-7B
OREAL-32B-SFT
AlchemistCoder-DS-6.7B
AlchemistCoder-L-7B
internlm2-math-plus-mixtral8x22b
Visual-ERM
AlchemistCoder-CL-7B
POLAR-1_8B
POLAR represents a significant breakthrough in scalar-based reward models achieved through large-scale pre-training. It leverages the innovative POLicy DiscriminAtive LeaRning (POLAR) paradigm, a scalable, high-level optimization objective, to effectively discriminate between policies using large-scale synthetic corpora. Following pre-training, POLAR RMs are fine-tuned with minimal preference data, rapidly aligning with human preferences. Key features of POLAR include:

- Innovative Pre-training Paradigm: POLAR trains a reward model to discern identical policies and discriminate different ones. Unlike traditional reward modeling methods relying on absolute preferences, POLAR captures the relative difference between two policies, which is a scalable, high-level optimization objective suitable for modeling generic ranking relationships.
- Tailored for Reinforcement Fine-tuning: POLAR assigns rewards to LLM trajectories based on given references, aligning well with the Reinforcement Fine-tuning (RFT) framework. POLAR provides a promising solution for applying RFT in generic scenarios.
- Superior Performance and Generalization: POLAR achieves state-of-the-art results on downstream reinforcement learning tasks, consistently delivering accurate and reliable reward signals that generalize effectively to unseen scenarios and significantly reduce reward hacking.
- Easy to Customize: Pre-trained checkpoints of POLAR are available, enabling researchers to conveniently fine-tune the RM for various customized scenarios, facilitating straightforward adaptation to specific applications and experimental requirements.

POLAR-1.8B-Base refers to the pre-trained-only checkpoint, ideal for customized fine-tuning according to specific preferences.
The "ready-to-use" checkpoint POLAR-1.8B has already been fine-tuned on general preference data, making it suitable for immediate use in most scenarios. We conducted a comprehensive evaluation of POLAR-1.8B via the Proximal Policy Optimization (PPO) algorithm, evaluating the downstream RL performance of four different policy models using OpenCompass. More details are available in our Paper.

You can employ the latest xtuner to fine-tune and use POLAR. Xtuner is an efficient, flexible, and full-featured toolkit for fine-tuning LLMs.

- It is recommended to build a Python 3.10 virtual environment using conda.

We support reward inference through lmdeploy, sglang, and vllm. We recommend setting up a virtual environment with conda when using these inference engines to prevent potential dependency conflicts. Unlike traditional reward models, POLAR requires an additional reference trajectory as a demonstration and evaluates candidate trajectories by measuring their consistency with the provided reference.

Reward request

To load the POLAR model using transformers, use the following code to get rewards:

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.

Unlike traditional reward models, POLAR requires an additional reference trajectory as a demonstration during fine-tuning, along with a chosen trajectory and a rejected trajectory. You can construct your fine-tuning data in a `train.jsonl` file, formatted as follows:

- Step 0: Prepare the config. We provide exemplar ready-to-use configs here. If the provided configs cannot meet your requirements, please copy a provided config and modify it following the xtuner guideline. For more details on reward model training settings, please see the xtuner reward model guideline.

For example, you can start the fine-tuning of POLAR-1.8B-Base by

Here, `--deepspeed` means using DeepSpeed to optimize the training. Xtuner comes with several integrated strategies including ZeRO-1, ZeRO-2, and ZeRO-3.
If you wish to disable this feature, simply remove this argument.

- Step 2: Convert the saved PTH model (if using DeepSpeed, it will be a directory) to a Hugging Face model, by

Code and model weights are licensed under Apache-2.0.
internlm3-8b-instruct-smoothquant-int8
internlm2_5-step-prover-critic
internlm-xcomposer2d5-7b-chat
InternLM-XComposer2.5-Chat is a chat model trained on internlm/internlm-xcomposer2d5-7b, offering improved multi-modal instruction following and open-ended dialogue capabilities.

Import from Transformers

To load the InternLM-XComposer2.5-Chat model using Transformers, use the following code: We provide a simple example to show how to use InternLM-XComposer2.5 with 🤗 Transformers.

Open Source License

The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English / Chinese). For other questions or collaborations, please contact [email protected].
Intern-S1-GGUF
internlm-chat-20b-4bit
internlm3-8b-instruct-gptq-int4
internlm-xcomposer2-7b-4bit
POLAR-7B-Base
POLAR represents a significant breakthrough in scalar-based reward models achieved through large-scale pre-training. It leverages the innovative POLicy DiscriminAtive LeaRning (POLAR) paradigm, a scalable, high-level optimization objective, to effectively discriminate between policies using large-scale synthetic corpora. Following pre-training, POLAR RMs are fine-tuned with minimal preference data, rapidly aligning with human preferences. Key features of POLAR include:

- Innovative Pre-training Paradigm: POLAR trains a reward model to discern identical policies and discriminate different ones. Unlike traditional reward modeling methods relying on absolute preferences, POLAR captures the relative difference between two policies, which is a scalable, high-level optimization objective suitable for modeling generic ranking relationships.
- Tailored for Reinforcement Fine-tuning: POLAR assigns rewards to LLM trajectories based on given references, aligning well with the Reinforcement Fine-tuning (RFT) framework. POLAR provides a promising solution for applying RFT in generic scenarios.
- Superior Performance and Generalization: POLAR achieves state-of-the-art results on downstream reinforcement learning tasks, consistently delivering accurate and reliable reward signals that generalize effectively to unseen scenarios and significantly reduce reward hacking.
- Easy to Customize: Pre-trained checkpoints of POLAR are available, enabling researchers to conveniently fine-tune the RM for various customized scenarios, facilitating straightforward adaptation to specific applications and experimental requirements.

POLAR-7B-Base refers to the pre-trained-only checkpoint, ideal for customized fine-tuning according to specific preferences.
The "ready-to-use" checkpoint POLAR-7B has already been fine-tuned on general preference data, making it suitable for immediate use in most scenarios. We conducted a comprehensive evaluation of POLAR-7B via the Proximal Policy Optimization (PPO) algorithm, evaluating the downstream RL performance of four different policy models using OpenCompass. More details are available in our Paper.

You can employ the latest xtuner to fine-tune and use POLAR. Xtuner is an efficient, flexible, and full-featured toolkit for fine-tuning LLMs.

- It is recommended to build a Python 3.10 virtual environment using conda.

We support reward inference through lmdeploy, sglang, and vllm. We recommend setting up a virtual environment with conda when using these inference engines to prevent potential dependency conflicts. Unlike traditional reward models, POLAR requires an additional reference trajectory as a demonstration and evaluates candidate trajectories by measuring their consistency with the provided reference.

Reward request

To load the POLAR model using transformers, use the following code to get rewards:

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.

Unlike traditional reward models, POLAR requires an additional reference trajectory as a demonstration during fine-tuning, along with a chosen trajectory and a rejected trajectory. You can construct your fine-tuning data in a `train.jsonl` file, formatted as follows:

- Step 0: Prepare the config. We provide exemplar ready-to-use configs here. If the provided configs cannot meet your requirements, please copy a provided config and modify it following the xtuner guideline. For more details on reward model training settings, please see the xtuner reward model guideline.

For example, you can start the fine-tuning of POLAR-7B-Base by

Here, `--deepspeed` means using DeepSpeed to optimize the training. Xtuner comes with several integrated strategies including ZeRO-1, ZeRO-2, and ZeRO-3.
If you wish to disable this feature, simply remove this argument.

- Step 2: Convert the saved PTH model (if using DeepSpeed, it will be a directory) to a Hugging Face model, by

Code and model weights are licensed under Apache-2.0.
internlm2_5-20b-chat-4bit-gptq
internlm3-8b-instruct-awq
internlm3-8b-instruct-smoothquant-fp8
internlm2-wqx-vl-20b
internlm2_5-7b-chat-4bit
POLAR-1_8B-Base
[](./LICENSE) [](https://github.com/InternLM/xtuner/) [](https://github.com/InternLM/lmdeploy/) [](https://github.com/sgl-project/sglang/) [](https://github.com/vllm-project/vllm/) We offer a novel perspective on reward modeling by formulating it as a policy discriminator, which quantifies the difference between two policies to generate a reward signal, guiding the training policy towards a target policy with desired behaviors. Based on this conceptual insight, we propose a scalable pre-training method named Policy Discriminative Learning (POLAR), which trains a reward model (RM) to discern identical policies and discriminate different ones. Unlike traditional reward modeling methods relying on absolute preferences, POLAR captures the relative difference between one policy and an arbitrary target policy, which is a scalable, high-level optimization objective suitable for modeling generic ranking relationships. Leveraging the POLAR pre-training paradigm, we present a series of RMs with parameter scales from 1.8B to 7B.

Empirical results show that POLAR substantially outperforms traditional non-pre-trained methods, significantly enhancing RM performance. For instance, POLAR-7B could improve preference accuracy from 54.8% to 81.0% on STEM tasks and from 57.9% to 85.5% on creative writing tasks compared to SOTA baselines. POLAR also shows robust generalization capabilities in RLHF using Reinforcement Fine-tuning (RFT), providing reliable reward signals and markedly enhancing policy performance: improving LLaMa3.1-8B from an average of 47.36% to 56.33% and Qwen2.5-32B from 64.49% to 70.47% on 20 benchmarks. Moreover, scaling experiments reveal a clear power-law relationship between computation and performance, supported by linear correlation coefficients approaching 0.99. The impressive performance, strong generalization, and scaling properties suggest that POLAR is a promising direction for developing general and strong reward models.
POLAR represents a significant breakthrough in scalar-based reward models achieved through large-scale pre-training. It leverages the innovative POLicy DiscriminAtive LeaRning (POLAR) paradigm, a scalable, high-level optimization objective, to effectively discriminate between policies using a large-scale synthetic corpus. Following pre-training, POLAR RMs are fine-tuned with minimal preference data, rapidly aligning with human preferences.

Key features of POLAR include:

- Innovative Pre-training Paradigm: POLAR trains a reward model to discern identical policies and discriminate different ones. Unlike traditional reward modeling methods relying on absolute preferences, POLAR captures the relative difference between two policies, a scalable, high-level optimization objective suitable for modeling generic ranking relationships.
- Tailored for Reinforcement Fine-tuning: POLAR assigns rewards to LLM trajectories based on given references, aligning naturally with the Reinforcement Fine-tuning (RFT) framework. POLAR provides a promising solution for applying RFT in generic scenarios.
- Superior Performance and Generalization: POLAR achieves state-of-the-art results on downstream reinforcement learning tasks, consistently delivering accurate and reliable reward signals that generalize effectively to unseen scenarios and significantly reduce reward hacking.
- Easy to Customize: Pre-trained checkpoints of POLAR are available, enabling researchers to conveniently fine-tune the RM for various customized scenarios, facilitating straightforward adaptation to specific applications and experimental requirements.

POLAR-1.8B-Base refers to the pre-trained-only checkpoint, ideal for customized fine-tuning according to specific preferences. The "ready-to-use" checkpoint POLAR-1.8B has already been fine-tuned on general preference data, making it suitable for immediate use in most scenarios.
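The core idea, reward as consistency with a reference trajectory rather than an absolute preference score, can be pictured with a deliberately simplified sketch. The Jaccard token overlap below is only a stand-in for POLAR's learned scorer; it exists purely to make the reference/candidate interface concrete.

```python
# Toy stand-in for a POLAR-style scorer: candidates are rewarded for
# consistency with a reference trajectory, not scored in isolation.
def toy_reward(reference: str, candidate: str) -> float:
    ref = set(reference.lower().split())
    cand = set(candidate.lower().split())
    return len(ref & cand) / len(ref | cand) if ref | cand else 0.0

reference = "the capital of france is paris"
candidates = [
    "paris is the capital of france",  # consistent with the reference
    "the capital of france is lyon",   # contradicts the reference
]
rewards = [toy_reward(reference, c) for c in candidates]
print(rewards)  # the consistent candidate earns the higher reward
```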
We conducted a comprehensive evaluation of POLAR-1.8B via the Proximal Policy Optimization (PPO) algorithm, measuring the downstream RL performance of four different policy models with OpenCompass. More details are available in our Paper.

You can employ the latest xtuner to fine-tune and use POLAR. Xtuner is an efficient, flexible, and full-featured toolkit for fine-tuning LLMs.

- It is recommended to build a Python-3.10 virtual environment using conda.

We support reward inference through lmdeploy, sglang, and vllm. We recommend setting up a virtual environment with conda when using these inference engines to prevent potential dependency conflicts.

Unlike traditional reward models, POLAR requires an additional reference trajectory as a demonstration and evaluates candidate trajectories by measuring their consistency with the provided reference.

Reward request

To load the POLAR model using transformers, use the following code to get rewards:

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.

Unlike traditional reward models, POLAR requires an additional reference trajectory as a demonstration during fine-tuning, along with a chosen trajectory and a rejected trajectory. You can construct your fine-tuning data in a `train.jsonl` file, formatted as follows:

- Step 0: Prepare the config. We provide exemplar ready-to-use configs here. If the provided configs cannot meet your requirements, please copy a provided config and modify it following the xtuner guideline. For more details on reward-model training settings, please see the xtuner reward model guideline. For example, you can start fine-tuning POLAR-1.8B-Base. Here, `--deepspeed` means using DeepSpeed to optimize the training. Xtuner comes with several integrated strategies, including ZeRO-1, ZeRO-2, and ZeRO-3. If you wish to disable this feature, simply remove this argument.
- Step 2: Convert the saved PTH model (if using DeepSpeed, it will be a directory) to a Hugging Face model.

Code and model weights are licensed under Apache-2.0.
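For the chosen/rejected fine-tuning stage described above, reward models are typically trained with a Bradley-Terry style ranking loss. The sketch below shows that objective; the exact loss in the provided xtuner configs may differ in details.

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    # -log(sigmoid(r_chosen - r_rejected)): small when the chosen
    # trajectory already outscores the rejected one, large otherwise.
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(preference_loss(2.0, 0.0))  # correct ranking -> small loss
print(preference_loss(0.0, 2.0))  # inverted ranking -> large loss
```

Minimizing this loss pushes the reward of the chosen trajectory above that of the rejected one by a growing margin.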
internlm-xcomposer-7b-4bit
Spatial-SSRL-7B
SIM_COT-LLaMA3-CODI-8B
SIM_COT-LLaMA3-CODI-3B
Internlm Xcomposer2d5 Ol 7b
InternLM-XComposer2.5-OL is a comprehensive multimodal system for long-term streaming video and audio interactions.

Import from Transformers

To load the base LLM model using Transformers, use the following code:

To load the base audio model using MS-Swift, use the following code:

We provide simple examples below to show how to use InternLM-XComposer-2.5-OL with 🤗 Transformers. For the complete guide, please refer to here.

If you find InternLM-XComposer2.5-OL useful for your research and applications, please cite using this BibTeX:

Open Source License

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
CapRL-Qwen3VL-4B
SIM_COT-LLaMA3-CODI-1B
SIM_COT-GPT2-CODI
[](https://huggingface.co/internlm/SIMCOT-LLaMA3-CODI-1B) [](https://github.com/InternLM/SIM-CoT) [](https://arxiv.org/pdf/2509.20317) Chain-of-Thought (CoT) prompting has become a widely adopted strategy for enhancing the reasoning capabilities of Large Language Models (LLMs). By decomposing problems into intermediate steps, explicit CoT improves accuracy across a variety of reasoning tasks. However, the token cost of explicit reasoning severely limits its scalability, especially when applied to long-horizon tasks or deployed under strict computational budgets. Implicit CoT methods attempt to address this issue by replacing explicit intermediate steps with continuous latent representations. These approaches achieve higher token efficiency while retaining some of the benefits of step-wise reasoning. Despite this promise, a persistent performance gap remains: implicit CoT methods often underperform compared to explicit reasoning, especially as the number of latent tokens is scaled. Our analysis identifies a fundamental latent instability problem: as more implicit reasoning tokens are introduced, training frequently becomes unstable, with latent representations collapsing into homogeneous states that lack semantic diversity. This failure is largely due to the absence of fine-grained, step-level supervision in existing approaches. To overcome this limitation, we introduce SIM-CoT, a plug-and-play training module designed to stabilize and enrich the latent reasoning space. SIM-CoT leverages an auxiliary decoder during training that aligns each implicit token with its corresponding explicit reasoning step. This step-level supervision ensures that latent states encode distinct and meaningful information. Importantly, the auxiliary decoder is removed during inference, meaning that SIM-CoT preserves the computational efficiency of implicit CoT without adding runtime overhead. 
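The step-level supervision can be pictured with a small numpy sketch. All sizes and the linear decoder here are illustrative assumptions; the actual SIM-CoT auxiliary decoder decodes each latent token back into its explicit reasoning-step text using the LM's own head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 6 latent reasoning tokens of width 16, and a tiny
# label space of 10 explicit-step targets (the real targets are text).
n_latent, d_model, n_labels = 6, 16, 10
latent = rng.normal(size=(n_latent, d_model))        # latent reasoning states
decoder_w = rng.normal(size=(d_model, n_labels))     # auxiliary decoder, training only
step_labels = rng.integers(0, n_labels, size=n_latent)  # gold explicit steps

# Step-level cross-entropy: each latent token must decode to its own
# explicit step, which stops the latent states from collapsing into
# homogeneous, semantically empty vectors.
logits = latent @ decoder_w
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
step_ce = -log_probs[np.arange(n_latent), step_labels].mean()

# At inference `decoder_w` is discarded; only `latent` feeds the LM,
# so the extra supervision adds no runtime overhead.
print(float(step_ce))
```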
Empirical results demonstrate that SIM-CoT substantially improves both in-domain accuracy and out-of-domain stability. On smaller models such as GPT-2, SIM-CoT not only boosts implicit baselines like Coconut by +8.2% but also surpasses explicit CoT by +2.1% while being 2.3× more token-efficient. On larger models, including LLaMA-3.1 8B, SIM-CoT delivers consistent gains, improving CODI by +3.0% and significantly narrowing the performance gap with explicit reasoning. These findings highlight SIM-CoT as an effective and scalable solution for advancing implicit reasoning in LLMs.

SIMCOT-GPT2-CODI is an implicit language model based on GPT2, fine-tuned with SIM-CoT (Supervised Implicit Chain-of-Thought) on top of the CODI latent reasoning framework. It is designed to improve ✨ implicit reasoning and 🧮 arithmetic multi-step problem solving across benchmarks such as GSM8K, GSM-Hard, MultiArith, and SVAMP.

We evaluate SIM-CoT across both in-domain (GSM8K-Aug) and out-of-domain (GSM-Hard, MultiArith, SVAMP) benchmarks, using GPT-2, LLaMA-3.2 1B, LLaMA-3.2 3B, and LLaMA-3.1 8B as backbones, applied to both the Coconut and CODI frameworks.

Main results on GPT-2. We report accuracy % on in-domain (GSM8k-Aug) and out-of-domain (GSM-Hard, MultiArith, SVAMP) benchmarks. SIM-CoT provides accuracy gains on top of existing methods such as Coconut and CODI.

Main results on LLaMA 3.2 1B. We report accuracy % on in-domain (GSM8k-Aug) and out-of-domain (GSM-Hard, MultiArith, SVAMP) benchmarks. SIM-CoT builds on CODI to achieve a new SOTA in implicit reasoning while delivering performance comparable to explicit CoT.

Main results on LLaMA 3.2 3B and 8B. We report accuracy % on in-domain (GSM8k-Aug) and out-of-domain (GSM-Hard, MultiArith, SVAMP) benchmarks.
- 🏗️ Base model: GPT2 - ⚡ Fine-tuning method: LoRA (r=128, alpha=32) - 🔑 Latent reasoning: 6 latent steps, projection dimension = 768 - 🎯 Dropout: 0.0 (projection layer) - 🖥️ Precision: bf16 - 📏 Context length: 512 tokens The model integrates implicit reasoning tokens during training and inference. Unlike standard explicit CoT models, SIM-CoT encourages the model to generate latent structured thoughts that are decoded only during training, while remaining implicit during inference. - 🔬 AI-related research (reasoning, representation learning, interpretability) - 📊 Benchmarking on arithmetic reasoning datasets (e.g., GSM8K, SVAMP, MultiArith, GSM-Hard) - 🧩 Studying latent representation learning and reasoning generalization ⚠️ Not intended for deployment in production without careful alignment and safety evaluation. 2. Run the evaluation script We provide shell scripts for different backbones and datasets. For example, to evaluate on GPT2 with the SVAMP dataset, run: 3. Expected output After running, the script will print the evaluation summary. An example output format is: - test accuracy: accuracy on the specified benchmark. - average length of COT: average number of latent reasoning tokens. - average accuracy: aggregated accuracy across sampled runs.
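The three summary fields the evaluation script prints can be sketched as below. The per-example record fields and the set of sampled runs are purely illustrative assumptions, not the script's actual internals.

```python
# Hypothetical per-example records from one evaluation run.
records = [
    {"correct": True,  "latent_tokens": 6},
    {"correct": False, "latent_tokens": 6},
    {"correct": True,  "latent_tokens": 5},
    {"correct": True,  "latent_tokens": 7},
]

test_accuracy = sum(r["correct"] for r in records) / len(records)
avg_cot_length = sum(r["latent_tokens"] for r in records) / len(records)

# "average accuracy" aggregates per-run accuracy across sampled runs.
run_accuracies = [0.75, 0.70, 0.80]
average_accuracy = sum(run_accuracies) / len(run_accuracies)

print(f"test accuracy: {test_accuracy:.2%}")
print(f"average length of COT: {avg_cot_length:.1f}")
print(f"average accuracy: {average_accuracy:.2%}")
```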