SaisExperiments
Voxtral-3B-But-4B-Text-Only-GGUF
arsenic-nemo-unleashed-12B-GGUF
GPT-NeoX-20B-Erebus-GGUF
ms-idk-v13-gguf
ToastyPigeon_gemma-3-27b-experiment-storyteller-GGUF
granite-4.0-tiny-preview-Q8_0-GGUF
SaisExperiments/granite-4.0-tiny-preview-Q8_0-GGUF
This model was converted to GGUF format from `ibm-granite/granite-4.0-tiny-preview` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
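A minimal sketch of the steps above, assuming the older make-based build of llama.cpp (newer releases use CMake, where the equivalent option is `-DLLAMA_CURL=ON`):

```bash
# Option 1: prebuilt binaries via Homebrew (macOS and Linux)
brew install llama.cpp

# Option 2: build from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# LLAMA_CURL=1 lets the binaries fetch GGUF files straight from the Hugging Face Hub;
# add LLAMA_CUDA=1 on Linux machines with an Nvidia GPU
LLAMA_CURL=1 make
```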
Qwen3-0.6B-F16-GGUF
g3-27b-beepo-mmtest-Q4_K_M-GGUF
Qwen3-0.6B-Q8-GGUF
MN-Prismatic-12b-Q6_K-GGUF
Not-So-Small-Alpaca-24B
Language model with Apache 2.0 license.
kanana-1.5-15.7b-a3b-instruct-Q8_0-GGUF
Evil-Alpaca-3B-L3.2
This model is based on the SaisExperiments/Big-Alpaca-Uncensored dataset and utilizes the transformers library.
Voxtral-3B-But-4B-Text-Only
QwOwO-7B-V1
This model is licensed under Apache 2.0 and is based on the dataset SaisExperiments/OwO-Data-Alpaca-10K.
Nemo-Unslop-2-IQ4_XS
MN-Prismatic-12b-Q5_K_M-GGUF
OwOllama-V1-RP-Exp-Q6_K-GGUF
Mistral-Small-Sisyphus-24b-2503-Q6_K-GGUF
Gemma-2-2B-Opus-Instruct
This Gemma-2-2B model is designed for instruction-based tasks and was trained on Opus instruction data, including kalomaze/Opus_Instruct_25k.
RightSheep-Llama3.2-3B
License: llama3.2. Base model: TroyDoesAI/BlackSheep-Llama3.2-3B.
L3.1-8B-Pippa-OwO-RP-Q6_K-GGUF
Phi-4-Mini-OwOified
Gemma-2-2B-Stheno-Filtered
License: gemma. Datasets: anthracite-org/stheno-filtered-v1.1.
ProbSucks-4B-ARM-GGUF
Evil-Alpaca-Right-Lean-L3.2-3B-Q6_K-GGUF
SaisExperiments/Evil-Alpaca-Right-Lean-L3.2-3B-Q6_K-GGUF
This model was converted to GGUF format from `SaisExperiments/Evil-Alpaca-Right-Lean-L3.2-3B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
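Once llama.cpp is built (or installed via brew), the checkpoint can be pulled and run directly from the Hub with `llama-cli`. A minimal sketch; the `--hf-file` name is an assumption, so check the repo's file listing for the exact GGUF filename:

```bash
# Download and run the quantized checkpoint straight from the Hugging Face Hub
llama-cli --hf-repo SaisExperiments/Evil-Alpaca-Right-Lean-L3.2-3B-Q6_K-GGUF \
  --hf-file evil-alpaca-right-lean-l3.2-3b-q6_k.gguf \
  -p "Write a short story about a llama."
```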
Wock-n-wOwO-3B-A800M-Q4_K_M-GGUF
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-Q8_0-GGUF
ProbSucks-4B-Q8_0-GGUF
OwOllama-v0.1
GemmOwO-2B
ms-idk-v13
Evil-Alpaca-Right-Lean-L3.2-3B
Not-So-Small-Alpaca-24B-Q6_K-GGUF
SaisExperiments/Not-So-Small-Alpaca-24B-Q6_K-GGUF
This model was converted to GGUF format from `SaisExperiments/Not-So-Small-Alpaca-24B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
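As an alternative to `llama-cli`, the same checkpoint can be hosted behind an HTTP endpoint with `llama-server`. A minimal sketch; the `--hf-file` name is an assumption, so check the repo's file listing for the exact GGUF filename:

```bash
# Serve the model over llama.cpp's built-in HTTP server (OpenAI-compatible API)
llama-server --hf-repo SaisExperiments/Not-So-Small-Alpaca-24B-Q6_K-GGUF \
  --hf-file not-so-small-alpaca-24b-q6_k.gguf \
  -c 2048
```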
Wock-n-wOwO-3B-A800M
QwOwO-7B-V1-Evalution
Pawdistic-Fur-Mittens-V0.1-24B-Q6_K-GGUF
Mistral-Small-24b-Sertraline-0304-Q6_K-GGUF
QwOwO-1.5B
Q25-1.5B-VeoLu-OwO-fied
OwOllama-V1.0-3B
L3.1-8B-Pippa-OwO-RP
This is a completely experimental model to see if it is possible to get a model to roleplay any character while maintaining uwu/owo speak :3