BernTheCreator
Gemma-3-4b-it-abliterated-Q4_0-GGUF
"Combining the Abliterated Q40-GGUF with a better mmproj (vision) option (x-rayalpha), for a smoother experience." This model was converted to GGUF format from `mlabonne/gemma-3-4b-it-abliterated` using llama.cpp via the ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) Note: You can also use this checkpoint directly through the usage steps listed in the Llama.cpp repo as well. Step 2: Move into the llama.cpp folder and build it with `LLAMACURL=1` flag along with other hardware-specific flags (for ex: LLAMACUDA=1 for Nvidia GPUs on Linux).
Huihui-gemma-3n-E4B-it-abliterated-Q4_0-GGUF
BernTheCreator/Huihui-gemma-3n-E4B-it-abliterated-Q4_0-GGUF

This model was converted to GGUF format from `huihui-ai/Huihui-gemma-3n-E4B-it-abliterated` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
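As a sketch, the same checkpoint can be served over llama.cpp's OpenAI-compatible HTTP server; the `--hf-file` name is again an assumption based on GGUF-my-repo's default naming.

```bash
# Start an OpenAI-compatible server (default port 8080) with a 2048-token context
llama-server \
  --hf-repo BernTheCreator/Huihui-gemma-3n-E4B-it-abliterated-Q4_0-GGUF \
  --hf-file huihui-gemma-3n-e4b-it-abliterated-q4_0.gguf \
  -c 2048
```

Once it is up, any OpenAI-style client can point at `http://localhost:8080/v1`.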
Gemmasutra-9B-v1-Q4_0-GGUF
GodSlayer-12B-ABYSS-Q4_0-GGUF
Gemma-3n-E4B-it-Q4_0-GGUF
BernTheCreator/gemma-3n-E4B-it-Q4_0-GGUF

This model was converted to GGUF format from `google/gemma-3n-E4B-it` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
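The build steps above correspond roughly to the following commands. `LLAMA_CURL=1 make` matches the Makefile flow this template describes; newer llama.cpp trees have moved to CMake, where the equivalent flag is `-DLLAMA_CURL=ON`. The `--hf-file` name is an assumption from GGUF-my-repo's default naming.

```bash
# Step 1: clone llama.cpp from GitHub
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Step 2: build with HTTP download support (add LLAMA_CUDA=1 on Nvidia/Linux)
LLAMA_CURL=1 make

# Then run the checkpoint straight from the Hub
./llama-cli \
  --hf-repo BernTheCreator/gemma-3n-E4B-it-Q4_0-GGUF \
  --hf-file gemma-3n-e4b-it-q4_0.gguf \
  -p "Hello!"
```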
DeepSeek-R1-Distill-Qwen-7B-Q4_0-GGUF
BernTheCreator/DeepSeek-R1-Distill-Qwen-7B-Q4_0-GGUF

This model was converted to GGUF format from `deepseek-ai/DeepSeek-R1-Distill-Qwen-7B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
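A minimal CLI sketch for this distill; the file name is assumed from GGUF-my-repo's default naming, and `-cnv` starts llama.cpp's interactive chat mode.

```bash
# Interactive chat with the R1 distill; verify the file name against the repo
llama-cli \
  --hf-repo BernTheCreator/DeepSeek-R1-Distill-Qwen-7B-Q4_0-GGUF \
  --hf-file deepseek-r1-distill-qwen-7b-q4_0.gguf \
  -cnv -p "You are a careful step-by-step reasoner."
```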
DeepSeek-R1-Distill-Llama-8B-abliterated-Q4_0-GGUF
Gemma-2-9B-ArliAI-RPMax-v1.1-Q4_0-GGUF
Phi-4-mini-instruct-abliterated-Q4_0-GGUF
OpenCodeReasoning-Nemotron-7B-Q4_0-GGUF
Mistral-Nemo-12B-ArliAI-RPMax-v1.3-Q4_0-GGUF
Granite-3.1-8b-Instruct-Abliterated
Nemotron-Mini-4B-Instruct-Q4_0-GGUF
GLM-Z1-9B-0414-Q4_0-GGUF
DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1-Q4_0-GGUF
Granite-3.2-8b-Instruct-Q4_0-GGUF
EXAONE-3.5-7.8B-Instruct-Q4_0-GGUF
EXAONE-3.5-7.8B-Instruct-abliterated-Q4_0-GGUF
BernTheCreator/EXAONE-3.5-7.8B-Instruct-abliterated-Q4_0-GGUF

This model was converted to GGUF format from `huihui-ai/EXAONE-3.5-7.8B-Instruct-abliterated` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
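If you would rather download the file first and point llama.cpp at it locally, a sketch (the file name is once more an assumption from GGUF-my-repo's naming; list the repo's files if unsure):

```bash
# Fetch the quantized file from the Hub, then run it from disk with -m
huggingface-cli download \
  BernTheCreator/EXAONE-3.5-7.8B-Instruct-abliterated-Q4_0-GGUF \
  exaone-3.5-7.8b-instruct-abliterated-q4_0.gguf --local-dir .

llama-cli -m ./exaone-3.5-7.8b-instruct-abliterated-q4_0.gguf -p "Hello!"
```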