BernTheCreator

21 models

Gemma-3-4b-it-abliterated-Q4_0-GGUF

"Combining the abliterated Q4_0 GGUF with a better mmproj (vision) option (x-rayalpha), for a smoother experience." This model was converted to GGUF format from `mlabonne/gemma-3-4b-it-abliterated` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. To use it with llama.cpp, install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo, move into the folder, and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
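The install-and-run steps above can be sketched as shell commands. This is a minimal sketch, not the card's verbatim instructions: the GGUF filename passed to `--hf-file` is an assumption (check the repo's file list for the real name), and the build line mirrors the flags the card mentions.

```shell
# Option 1: install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run the model straight from the Hub; --hf-repo/--hf-file download the
# GGUF on first use. The filename below is illustrative, not confirmed.
llama-cli --hf-repo BernTheCreator/gemma-3-4b-it-abliterated-Q4_0-GGUF \
  --hf-file gemma-3-4b-it-abliterated-q4_0.gguf \
  -p "Hello, how are you?"

# Option 2: build from source with CURL support for Hub downloads
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make   # add LLAMA_CUDA=1 for Nvidia GPUs on Linux
```

The same pattern applies to every GGUF repo on this page; only the `--hf-repo` and `--hf-file` values change.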

llama-cpp • 327 downloads • 3 likes

Huihui-gemma-3n-E4B-it-abliterated-Q4_0-GGUF

BernTheCreator/Huihui-gemma-3n-E4B-it-abliterated-Q4_0-GGUF: converted to GGUF format from `huihui-ai/Huihui-gemma-3n-E4B-it-abliterated` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for details; llama.cpp usage is the same as for the first entry above.

llama-cpp • 122 downloads • 2 likes

Gemmasutra-9B-v1-Q4_0-GGUF

llama-cpp • 34 downloads • 1 like

GodSlayer-12B-ABYSS-Q4_0-GGUF

llama-cpp • 26 downloads • 2 likes

Gemma-3n-E4B-it-Q4_0-GGUF

BernTheCreator/gemma-3n-E4B-it-Q4_0-GGUF: converted to GGUF format from `google/gemma-3n-E4B-it` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for details; llama.cpp usage is the same as for the first entry above.

llama-cpp • 25 downloads • 1 like

DeepSeek-R1-Distill-Qwen-7B-Q4_0-GGUF

BernTheCreator/DeepSeek-R1-Distill-Qwen-7B-Q4_0-GGUF: converted to GGUF format from `deepseek-ai/DeepSeek-R1-Distill-Qwen-7B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for details; llama.cpp usage is the same as for the first entry above.

llama-cpp • 16 downloads • 1 like

DeepSeek-R1-Distill-Llama-8B-abliterated-Q4_0-GGUF

llama-cpp • 10 downloads • 1 like

Gemma-2-9B-ArliAI-RPMax-v1.1-Q4_0-GGUF

llama-cpp • 10 downloads • 0 likes

Phi-4-mini-instruct-abliterated-Q4_0-GGUF

llama-cpp • 10 downloads • 0 likes

OpenCodeReasoning-Nemotron-7B-Q4_0-GGUF

llama-cpp • 9 downloads • 0 likes

Mistral-Nemo-12B-ArliAI-RPMax-v1.3-Q4_0-GGUF

llama-cpp • 8 downloads • 2 likes

Granite-3.1-8b-Instruct-Abliterated

llama-cpp • 6 downloads • 0 likes

Nemotron-Mini-4B-Instruct-Q4_0-GGUF

llama-3 • 6 downloads • 0 likes

GLM-Z1-9B-0414-Q4_0-GGUF

llama-cpp • 6 downloads • 0 likes

DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1-Q4_0-GGUF

llama-cpp • 5 downloads • 1 like

Granite-3.2-8b-Instruct-Q4_0-GGUF

llama-cpp • 5 downloads • 0 likes

EXAONE-3.5-7.8B-Instruct-Q4_0-GGUF

llama-cpp • 2 downloads • 0 likes

EXAONE-3.5-7.8B-Instruct-abliterated-Q4_0-GGUF

BernTheCreator/EXAONE-3.5-7.8B-Instruct-abliterated-Q4_0-GGUF: converted to GGUF format from `huihui-ai/EXAONE-3.5-7.8B-Instruct-abliterated` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for details; llama.cpp usage is the same as for the first entry above.

llama-cpp • 1 download • 0 likes

EZO-Common-9B-gemma-2-it-Q4_0-GGUF

llama-cpp • 1 download • 0 likes

Ava-1.5-12B-Q4_0-GGUF

llama-cpp • 0 downloads • 1 like

DeepSeek-R1-Distill-Qwen-7B-abliterated-v2-Q4_0-GGUF

llama-cpp • 0 downloads • 1 like