ozone-research
Chirp-01
Chirp-3b is a high-performing 3B-parameter language model crafted by the Ozone Research team. Fine-tuned from a robust base model (Qwen2.5 3B Instruct), it was trained on 50 million tokens of data distilled from GPT-4o. This compact yet powerful model delivers strong results, outperforming expectations on benchmarks like MMLU Pro and IFEval. Chirp-3b is an open-source effort to push the limits of what small-scale LLMs can achieve, making it a valuable tool for researchers and enthusiasts alike.

- Parameters: 3 billion
- Base model: Qwen2.5 3B Instruct
- Training data: 50M tokens distilled from GPT-4o

Chirp-3b excels on rigorous evaluation datasets, showcasing its strength for a 3B model.

MMLU Pro:

| Subject          | Average Accuracy |
|------------------|------------------|
| Biology          | 0.6234           |
| Business         | 0.5032           |
| Chemistry        | 0.3701           |
| Computer Science | 0.4268           |
| Economics        | 0.5284           |
| Engineering      | 0.3013           |
| Health           | 0.3900           |
| History          | 0.3885           |
| Law              | 0.2252           |
| Math             | 0.5736           |
| Other            | 0.4145           |
| Philosophy       | 0.3687           |
| Physics          | 0.3995           |
| Psychology       | 0.5589           |
| Overall Average  | 0.4320           |

- Score: 72%
- Improvement: 14% better than the base model

More benchmarks are in the works and will be shared soon!

Access Chirp-3b here: https://huggingface.co/ozone-research/Chirp-01

The Ozone AI team is exploring additional models, including 2B and larger variants. Keep an eye out for upcoming releases!

We're eager for your input! Try Chirp-3b and let us know your thoughts, use cases, or ideas for improvement. Open an issue here or contact us via [contact method—update as needed].

A big thanks to the open-source community for driving projects like this forward. Chirp-3b is our contribution to making AI research more accessible.
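Since Chirp-3b was fine-tuned from Qwen2.5 3B Instruct, it presumably expects Qwen's ChatML prompt format. The sketch below builds such a prompt by hand; this is an assumption based on the base-model lineage, not something the card states, and in practice `tokenizer.apply_chat_template` from `transformers` would handle this for you.

```python
# Minimal sketch, assuming Chirp-3b inherits the ChatML prompt format from
# its Qwen2.5 3B Instruct base model (an assumption; the card does not
# document the prompt format).

def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn ChatML prompt, leaving the assistant turn open."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize MMLU Pro in one sentence.",
)
print(prompt)
```

The trailing open `<|im_start|>assistant\n` turn is what cues the model to generate its reply rather than continue the user's text.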
0x-lite-Q4_K_M-GGUF
0x-lite
resonance-01-Q2_K-GGUF
asteroid-14b-v0.1-Q4_K_M-GGUF
llama-3.1-0x-mini-Q4_K_M-GGUF
llama-3.1-0x-mini-Q3_K_S-GGUF
ozone-ai/llama-3.1-0x-mini-Q3_K_S-GGUF

This model was converted to GGUF format from `ozone-ai/llama-3.1-0x-mini` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Alternatively, build from source:

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
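The steps above can be sketched as the following shell session. The `--hf-file` name is an assumption based on the usual GGUF-my-repo naming convention; check the model page for the exact `.gguf` filename, and note that newer llama.cpp builds may use different CMake flag names.

```shell
# Easiest path: prebuilt llama.cpp via brew (macOS/Linux).
brew install llama.cpp

# Or build from source with CURL support so checkpoints can be fetched
# from the Hub (add -DLLAMA_CUDA=1 for Nvidia GPUs on Linux):
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_CURL=1
cmake --build build --config Release

# Run the quantized checkpoint straight from Hugging Face
# (the .gguf filename below is assumed, not taken from the card):
llama-cli --hf-repo ozone-ai/llama-3.1-0x-mini-Q3_K_S-GGUF \
          --hf-file llama-3.1-0x-mini-q3_k_s.gguf \
          -p "Hello"
```

The same recipe applies to the other GGUF conversions on this page; only the `--hf-repo` and `--hf-file` values change.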
luminary-9b-Q4_K_M-GGUF
2x-lite-Q4_K_M-GGUF
2x-lite-Q2_K-GGUF
ozone-ai/2x-lite-Q2_K-GGUF

This model was converted to GGUF format from `ozone-ai/2x-lite` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Alternatively, build from source:

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
2x-lite
Reverb-14b
asteroid-14b-v0.1
luminary-9b
bfb-1
llama-3.1-0x-mini-Q2_K-GGUF
ozone-ai/llama-3.1-0x-mini-Q2_K-GGUF

This model was converted to GGUF format from `ozone-ai/llama-3.1-0x-mini` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Alternatively, build from source:

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).