ozone-research

18 models

Chirp-01

Chirp-3b is a high-performing 3B-parameter language model crafted by the Ozone Research team. Fine-tuned from a robust base model (Qwen2.5 3B Instruct), it was trained on 50 million tokens of data distilled from GPT-4o. This compact yet powerful model delivers exceptional results, outperforming expectations on benchmarks like MMLU Pro and IFEval. Chirp-3b is an open-source effort to push the limits of what small-scale LLMs can achieve, making it a valuable tool for researchers and enthusiasts alike.

- Parameters: 3 billion
- Training data: 50M tokens distilled from GPT-4o

Chirp-3b excels on rigorous evaluation datasets, showcasing its strength for a 3B model.

| Subject          | Average Accuracy |
|------------------|------------------|
| Biology          | 0.6234 |
| Business         | 0.5032 |
| Chemistry        | 0.3701 |
| Computer Science | 0.4268 |
| Economics        | 0.5284 |
| Engineering      | 0.3013 |
| Health           | 0.3900 |
| History          | 0.3885 |
| Law              | 0.2252 |
| Math             | 0.5736 |
| Other            | 0.4145 |
| Philosophy       | 0.3687 |
| Physics          | 0.3995 |
| Psychology       | 0.5589 |
| Overall Average  | 0.4320 |

- Score: 72%
- Improvement: 14% better than the base model.

More benchmarks are in the works and will be shared soon!

Access Chirp-3b here: https://huggingface.co/ozone-research/Chirp-01

The Ozone AI team is exploring additional models, including 2B and larger variants. Keep an eye out for upcoming releases!

We're eager for your input! Try Chirp-3b and let us know your thoughts, use cases, or ideas for improvement. Open an issue here or contact us via [contact method—update as needed]. A big thanks to the open-source community for driving projects like this forward. Chirp-3b is our contribution to making AI research more accessible.
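The per-subject accuracies above can be sanity-checked against the reported overall average. A minimal sketch, assuming a plain unweighted mean across subjects (the card does not state how the overall figure is weighted, so this only approximates the reported 0.4320):

```python
# Per-subject MMLU Pro average accuracies, copied from the table above.
scores = {
    "Biology": 0.6234, "Business": 0.5032, "Chemistry": 0.3701,
    "Computer Science": 0.4268, "Economics": 0.5284, "Engineering": 0.3013,
    "Health": 0.3900, "History": 0.3885, "Law": 0.2252, "Math": 0.5736,
    "Other": 0.4145, "Philosophy": 0.3687, "Physics": 0.3995,
    "Psychology": 0.5589,
}

# Unweighted mean across the 14 subjects. The published overall average
# (0.4320) is presumably weighted by per-subject question counts, so the
# unweighted mean lands close to, but not exactly on, that number.
mean = sum(scores.values()) / len(scores)
print(f"unweighted mean: {mean:.4f}")
```

The small gap between the unweighted mean (~0.434) and the reported 0.4320 is consistent with MMLU Pro subjects contributing different numbers of questions.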


0x-lite-Q4_K_M-GGUF

llama-cpp

0x-lite

license:apache-2.0

resonance-01-Q2_K-GGUF

llama-cpp

asteroid-14b-v0.1-Q4_K_M-GGUF

llama-cpp

llama-3.1-0x-mini-Q4_K_M-GGUF

llama-cpp

llama-3.1-0x-mini-Q3_K_S-GGUF

ozone-ai/llama-3.1-0x-mini-Q3_K_S-GGUF

This model was converted to GGUF format from `ozone-ai/llama-3.1-0x-mini` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
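The steps above can be sketched as shell commands. This is a hedged sketch: the build flags and binary names follow llama.cpp conventions at the time of writing and may change across versions, and the local `.gguf` path is illustrative, not a file name from this repo.

```shell
# Option A: prebuilt install via Homebrew (Mac and Linux).
brew install llama.cpp

# Option B: build from source.
# Step 1: clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Step 2: build with LLAMA_CURL=1; add LLAMA_CUDA=1 only for Nvidia GPUs on Linux.
make LLAMA_CURL=1

# Run the quantized checkpoint with the CLI (model path is illustrative):
./llama-cli -m ./llama-3.1-0x-mini-q3_k_s.gguf -p "Hello," -n 64
```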

llama-cpp

luminary-9b-Q4_K_M-GGUF

llama-cpp

2x-lite-Q4_K_M-GGUF

llama-cpp

2x-lite-Q2_K-GGUF

ozone-ai/2x-lite-Q2_K-GGUF

This model was converted to GGUF format from `ozone-ai/2x-lite` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp

2x-lite

license:apache-2.0

Reverb-14b

license:apache-2.0

asteroid-14b-v0.1

license:apache-2.0

luminary-9b


bfb-1

llama

llama-3.1-0x-mini-Q2_K-GGUF

ozone-ai/llama-3.1-0x-mini-Q2_K-GGUF

This model was converted to GGUF format from `ozone-ai/llama-3.1-0x-mini` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp

sonata-55m


llama-3.1-0x-mini

llama