OpenBuddy
openbuddy-yi1.5-34b-v21.6-32k-fp16
openbuddy-yi1.5-9b-v21.1-32k
openbuddy-deepseek-67b-v18.1-4k
openbuddy-mistral-22b-v21.1-32k
openbuddy-openllama-7b-v12-bf16
openbuddy-mistral-7b-v13-base
openbuddy-openllama-13b-v7-fp16
openbuddy-llama2-13b-v8.1-fp16
openbuddy-llama-65b-v8-bf16
openbuddy-atom-13b-v9-bf16
openbuddy-falcon-180b-v13-preview0
openbuddy-llama2-70b-v10.1-bf16
openbuddy-falcon-40b-v16.1-4k
openbuddy-mixtral-8x7b-v16.2-32k
openbuddy-codellama2-34b-v11.1-bf16
openbuddy-deepseek-67b-v15.2
openbuddy-llemma-34b-v13.1
openbuddy-deepseek-67b-v15.1
openbuddy-mixtral-7bx8-v16.3-32k
openbuddy-openllama-3b-v10-bf16
openbuddy-mixtral-8x7b-v15.4
openbuddy-deepseekcoder-33b-v16.1-32k
openbuddy-mistral-7b-v13.1
openbuddy-llama2-13b-v11-bf16
openbuddy-llama2-13b-v11.1-bf16
openbuddy-mixtral-8x7b-v16.1-32k
openbuddy-deepseek-67b-v15-base
openbuddy-mistral-7b-v13
SimpleChat-14B-V1-Q4_K_M-GGUF
openbuddy-zephyr-7b-v14.1
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
Base model: https://huggingface.co/HuggingFaceH4/zephyr-7b-beta

All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.

OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.

By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
openbuddy-deepseek-10b-v17.1-4k
openbuddy-llama3.1-8b-v22.3-131k-Q4_K_M-GGUF
SimpleChat-72B-V3-QAT-GGUF
The SimpleChat series represents our new exploration into Non-Chain-of-Thought (Non-CoT) models. Its main features:

Distinct Chat Style: concise, rational, and empathetic; built specifically for casual, everyday conversations.
Enhanced Creativity: improved creative writing and emotional understanding, achieved by distilling knowledge from advanced models, including K2.
Efficient Reasoning within a Non-CoT Framework: delivers the faster response times of a Non-CoT model while preserving strong reasoning skills. It retains this ability because it was trained on CoT models before being transitioned to a Non-CoT framework, allowing it to think through complex problems.
Known Trade-off: compared to models that specialize in Chain-of-Thought, it may not perform as strongly on mathematical tasks.

GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy

This model supports a Qwen3-like prompt format, with the following system prompt recommended. You may want to use `vllm` to deploy an OpenAI-like API service. For more information, refer to the vllm documentation.
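A "Qwen3-like prompt format" typically means ChatML-style role markers. The sketch below is a hypothetical illustration: the `<|im_start|>`/`<|im_end|>` tokens are an assumption based on the Qwen family's convention, and the example system prompt is a placeholder, not the one recommended in OpenBuddy's usage guide. In practice, prefer the tokenizer's built-in chat template over hand-assembled strings.

```python
# Minimal sketch of a ChatML-style (Qwen3-like) prompt.
# Assumptions: <|im_start|>/<|im_end|> markers; placeholder system prompt.
def format_chatml(messages):
    """Assemble a prompt string from a list of {role, content} dicts."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are SimpleChat, a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

When serving through `vllm` or `transformers`, the same structure is produced automatically by the chat template shipped with the model, so this manual assembly is only needed for low-level experimentation.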
openbuddy-llama3.2-1b-v23.1-131k-Q4_K_M-GGUF
openbuddy-gguf
SimpleChat-14B-V1
OpenBuddy-R10528Distill-30BA3B-Preview2-Q4_K_M-GGUF
ff670/OpenBuddy-R10528Distill-30BA3B-Preview2-Q4KM-GGUF

This model was converted to GGUF format from `OpenBuddy/OpenBuddy-R10528Distill-30BA3B-Preview2` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp:

Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
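The steps above can be collected into a short shell session. This is a sketch of the standard GGUF-my-repo usage: the repo name follows this card, while `<model-file>.gguf` is a placeholder for the actual quantized file in the repo, and the prompt is illustrative.

```shell
# Option A: install a prebuilt llama.cpp via Homebrew (macOS/Linux)
brew install llama.cpp

# Option B: build from source with remote-model support
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# LLAMA_CURL=1 enables fetching models from Hugging Face;
# add hardware flags as needed (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
make LLAMA_CURL=1

# Run the quantized checkpoint directly from the Hugging Face repo.
llama-cli --hf-repo ff670/OpenBuddy-R10528Distill-30BA3B-Preview2-Q4KM-GGUF \
  --hf-file <model-file>.gguf \
  -p "Hello"
```

These commands download models and require a working toolchain, so they are meant as a template rather than a copy-paste recipe.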
SimpleChat-4B-V1
openbuddy-mixtral-7bx8-v18.1-32k-gptq
openbuddy-qwen2.5coder-32b-v24.1q-200k-gguf
openbuddy-llama3.3-70b-v24.2q-gguf
SimpleChat 72B V4 Apache2.0
SimpleChat-32B-V1
openbuddy-nemotron-70b-v23.2-131k-gguf
openbuddy-qwen2.5llamaify-7b-v23.1-200k-Q4_K_M-GGUF
openbuddy-7b-v1.0-bf16-enc
OpenBuddy-R10528DistillQwen-14B-v27.3-200K
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy

We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations, including `sentencepiece`, may not work as expected, especially for special tokens. This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, refer to the vllm documentation.
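Since the chat template ships in `tokenizer_config.json`, `vllm` can serve the model as an OpenAI-compatible API with no extra configuration. A minimal sketch follows; the model id is assumed from this card's title (under the OpenBuddy org on Hugging Face), and the context length and port are illustrative.

```shell
pip install vllm

# Launch an OpenAI-compatible server; the chat template is picked up
# automatically from the model's tokenizer_config.json.
vllm serve OpenBuddy/OpenBuddy-R10528DistillQwen-14B-v27.3-200K \
  --max-model-len 32768 --port 8000

# Query it via the OpenAI-style chat completions endpoint:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "OpenBuddy/OpenBuddy-R10528DistillQwen-14B-v27.3-200K",
       "messages": [{"role": "user", "content": "Hi"}]}'
```

Serving this model requires a GPU with enough memory for the chosen context length, so treat the flags above as starting points.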
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview2-QAT
openbuddy-mistral2-7b-v20.3-32k
OpenBuddy-R10528DistillQwen-72B-Preview3
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy

This model supports a Qwen3-like prompt format, with the following system prompt recommended. Please note that Qwen3's `/nothink` may not be supported due to the training setting; for use cases with CoT disabled, the Non-CoT models are recommended. You may want to use `vllm` to deploy an OpenAI-like API service. For more information, refer to the vllm documentation.
openbuddy-qwen1.5-32b-v21.1-32k
openbuddy-thinker-32b-v26-preview
This is a specialized "thinker" model that can operate in different reasoning modes: detailed step-by-step thinking, direct responses without reasoning, or a combination of both in the same conversation. This behavior is controlled by the system prompt. Learn more at: https://github.com/OpenBuddy/OpenBuddy/blob/main/thinker.md

GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview5-QAT-200K
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview7-QAT-200K
OpenBuddy-WorldPM-72B-Base
This model is the result of continued pre-training on WorldPM-72B, using a multilingual dataset of mixed code and text. We are providing it to the community as a "base" model for further SFT; it is not intended for direct inference.
openbuddy-qwen2.5llamaify-14b-v23.3-200k-Q4_K_M-GGUF
OpenBuddy-R10528Distill-30BA3B-Preview0
OpenBuddy-R10528Distill-30BA3B-Preview2
openbuddy-deepseek-67b-v18.1-4k-gptq
openbuddy-qwen1.5-32b-v21.2-32k
openbuddy-qwen2.5coder-32b-v24.1q-200k
⚛️ Q Model: Optimized for Enhanced Quantized Inference Capability

This model has been specially optimized to improve the performance of quantized inference and is recommended for 3- to 8-bit quantization scenarios. Quantized version: https://huggingface.co/OpenBuddy/openbuddy-qwen2.5coder-32b-v24.1q-200k-gguf

GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
OpenBuddy-R10528DistillQwen-72B-Preview4
SimpleChat-30BA3B-V1
OpenBuddy-MuonQwen3-4B-v27.1
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview3-QAT
OpenBuddy-R10528DistillLlama-70B-Preview1
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
Base Model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2
This model is the result of continued pre-training on Qwen3-Coder-30B-A3B-Instruct, using a multilingual dataset of mixed code and text. We are providing it to the community as a "base" model for further SFT; it is not intended for direct inference.
OpenBuddy-Qwen3-Base-v26
The missing "base model" of Qwen3-32B. This model serves as the foundation for our R1-0528 distillation work. It is the result of continued pre-training on Qwen3-32B, using a multilingual dataset of mixed code and text. The purpose of training this model is to provide a model close to a "pre-trained" state, reducing the influence of the original Qwen3's linguistic style on subsequent fine-tuning efforts. We are providing this model to the community to serve as a base model for further SFT; it is not intended for direct inference.
openbuddy-llama3.3-70b-v24.3-131k
OpenBuddy-R10528DistillQwen-72B-Preview1
openbuddy-llama3.2-3b-v23.2-131k-Q4_K_M-GGUF
SimpleChat-30BA3B-V3
SimpleChat-30BA3B-V2
The SimpleChat series represents our new exploration into Non-Chain-of-Thought (Non-CoT) models. Its main features are:
- Distinct Chat Style: Designed to be concise, rational, and empathetic. Built specifically for casual, everyday conversations.
- Enhanced Creativity: Boosts the creativity of generated content and the capacity for emotional understanding. This is achieved by distilling knowledge from advanced models, including K2.
- Efficient Reasoning within a Non-CoT Framework: Delivers the faster response times of a Non-CoT model while preserving strong reasoning skills. It retains this ability because it was trained on CoT models before being transitioned to a Non-CoT framework, allowing it to think through complex problems.
- Known Trade-off: Compared to models that specialize in Chain-of-Thought, it may not perform as strongly on mathematical tasks.
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
This model supports a Qwen3-like prompt format, with the following system prompt recommended:
You may want to use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the vllm documentation.
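These models follow a Qwen3-like prompt format, i.e. the ChatML-style layout. A sketch of that layout, assuming the standard `<|im_start|>`/`<|im_end|>` tokens; the exact tokens and the recommended system prompt are defined by the model's chat template, so in practice you should call `tokenizer.apply_chat_template()` rather than formatting by hand:

```python
def render_chatml(messages: list[dict]) -> str:
    """Render a message list into a ChatML-style prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open an assistant turn to cue the model to respond.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```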
openbuddy-gemma-7b-v19.1-4k
OpenBuddy-R10528DistillQwen-14B-v27.1
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview4-QAT-200K
OpenBuddy-Qwen3-14B-v27.3-NoCoT
OpenBuddy-R10528DistillQwen-14B-v27.4-200K
SimpleChat-72B-V1
OpenBuddy-R10528Distill-30BA3B-Preview1
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
This model supports a Qwen3-like prompt format, with the following system prompt recommended:
Please note that Qwen3's `/nothink` may not be supported due to the training setting. For use cases with CoT disabled, the NoCoT models are recommended.
You may want to use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the vllm documentation.
OpenBuddy-MuonQwen3-4B-v27-Base
OpenBuddy-R10528DistillQwen-4B-v27.2
SimpleChat-30BA3B-V1-Q4_K_M-GGUF
ff670/SimpleChat-30BA3B-V1-Q4_K_M-GGUF
This model was converted to GGUF format from `OpenBuddy/SimpleChat-30BA3B-V1` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
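Once llama.cpp is built, the GGUF can be run with its `llama-cli` binary. A small Python sketch that assembles such an invocation; the GGUF filename is a hypothetical local download path, and `llama-cli` is assumed to be on `PATH`:

```python
import shlex

# Hypothetical local path to the downloaded GGUF file.
GGUF_PATH = "SimpleChat-30BA3B-V1-Q4_K_M.gguf"

def llama_cli_command(prompt: str, n_predict: int = 128) -> list[str]:
    """Assemble a llama.cpp `llama-cli` invocation for this GGUF."""
    return [
        "llama-cli",
        "-m", GGUF_PATH,      # model file
        "-p", prompt,         # prompt text
        "-n", str(n_predict), # number of tokens to generate
    ]

# Run it once the binary and GGUF are in place, e.g.:
#   subprocess.run(llama_cli_command("Hello"), check=True)
print(shlex.join(llama_cli_command("Hello")))
```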
SimpleTranslate-30BA3B-V1
OpenBuddy-Qwen3-Coder-30B-A3B-Base
This model is the result of continued pre-training on Qwen3-Coder-30B-A3B-Instruct, using a multilingual dataset of mixed code and text. We are providing this model to the community to serve as a "base" model for further SFT; it is not intended for direct inference.
openbuddy-zen-3b-v21.1-32k
OpenBuddy-R1-0528-Distill-Qwen2.5-72B-Preview0
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
openbuddy-nemotron-70b-v23.2q-131k
openbuddy-r1-32b-v25.1-200k
openbuddy-thinker-70b-v26-preview
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview1-QAT-AWQ4
openbuddy-qwen1.5-14b-v21.1-32k
openbuddy-qwq-32b-v24.1q-gguf
⚛️ Q Model: Optimized for Enhanced Quantized Inference Capability
This model has been specially optimized to improve the performance of quantized inference and is recommended for use in 3- to 8-bit quantization scenarios.
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
OpenBuddy-R10528DistillLlama-70B-Preview0
openbuddy-falcon-7b-v5-fp16
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview0-QAT
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
openbuddy-llama-7b-v4-fp16
openbuddy-stablelm-3b-v13
openbuddy-mistral-7b-v17.1-32k
openbuddy-13b-v1.3-fp16
openbuddy-qwq-32b-v25.2q-200k
⚛️ Q Model: Optimized for Enhanced Quantized Inference Capability
This model has been specially optimized to improve the performance of quantized inference and is recommended for use in 3- to 8-bit quantization scenarios.
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
OpenBuddy-R10528DistillQwen-14B-v27.2-LongCoTRL
openbuddy-deepseekcoder-6b-v16.1-32k
openbuddy-mistral-7b-v18.1-128k
openbuddy-mistral2-7b-v20.2-32k
openbuddy-mixtral-8x7b-v15.3
openbuddy-mistral-7b-v19.1-4k
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview6-QAT-200K
openbuddy-coder-15b-v10-bf16
openbuddy-gemma-7b-v18.1-4k
CoTGen-72B-V1
CoTGen-32B-V1
SimpleChat-70B-V1
SimpleChat-72B-V2
openbuddy-mixtral-7bx8-v19.1-32k
openbuddy-llama3.1-70b-v22.2-131k
openbuddy-qwen2.5llamaify-14b-v23.2-200k
openbuddy-llama3.1-70b-v23.1-131k
openbuddy-llama3.1-70b-v23.2-131k-sftonly
openbuddy-falcon3-10b-v24.1-200k
openbuddy-qwq-32b-v24.3
openbuddy-r1-70b-v25.1
openbuddy-thinker-72b-v25.1-200k
openbuddy-qwq-32b-v25.2q-200k-Q4_K_M-GGUF
ff670/openbuddy-qwq-32b-v25.2q-200k-Q4_K_M-GGUF
This model was converted to GGUF format from `OpenBuddy/openbuddy-qwq-32b-v25.2q-200k` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
openbuddy-deepseekprover-7b-v26-preview
openbuddy-llama-ggml
openbuddy-ggml
openbuddy-llama-30b-v7.1-bf16
openbuddy-7b-v1.3-bf16
openbuddy-falcon-7b-v6-bf16
openbuddy-mistral-10b-v17.1-32k
openbuddy-llama3.1-8b-v22.1-131k
openbuddy-30b-ggml
openbuddy-llama2-13b-v15p1-64k
openbuddy-openllama-7b-v5-fp16
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview1-QAT
GitHub and Usage Guide: https://github.com/OpenBuddy/OpenBuddy
openbuddy-7b-v1.1-bf16-enc
openbuddy-llama2-13b64k-v15
OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview1-QAT-Q3_K_M-GGUF
ff670/OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview1-QAT-Q3_K_M-GGUF
This model was converted to GGUF format from `OpenBuddy/OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview1-QAT` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.