SaisExperiments

42 models

Voxtral-3B-But-4B-Text-Only-GGUF
license:apache-2.0
298 downloads • 7 likes

arsenic-nemo-unleashed-12B-GGUF
license:cc-by-nc-4.0
47 downloads • 2 likes

GPT-NeoX-20B-Erebus-GGUF
license:apache-2.0
29 downloads • 0 likes

ms-idk-v13-gguf
24 downloads • 0 likes

ToastyPigeon_gemma-3-27b-experiment-storyteller-GGUF
17 downloads • 1 like

granite-4.0-tiny-preview-Q8_0-GGUF

This model was converted to GGUF format from `ibm-granite/granite-4.0-tiny-preview` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. To use it with llama.cpp, install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo, move into the folder, and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama-cpp
13 downloads • 1 like
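
The install-build-run steps described in the card above can be sketched as follows. The exact GGUF file name inside the repo is an assumption (GGUF-my-repo typically lowercases the quant suffix), so check the repo's file listing before running:

```shell
# Option 1: install a prebuilt llama.cpp through brew (Mac and Linux)
brew install llama.cpp

# Option 2: build from source with CURL support, plus hardware flags
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_CURL=ON   # add -DGGML_CUDA=ON for NVIDIA GPUs
cmake --build build --config Release

# Run a GGUF checkpoint straight from the Hub.
# NOTE: the --hf-file name below is a guess at the repo's file layout.
llama-cli --hf-repo SaisExperiments/granite-4.0-tiny-preview-Q8_0-GGUF \
  --hf-file granite-4.0-tiny-preview-q8_0.gguf \
  -p "The meaning of life is"
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if you prefer an OpenAI-compatible HTTP endpoint over an interactive CLI session.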

Qwen3-0.6B-F16-GGUF
9 downloads • 1 like

g3-27b-beepo-mmtest-Q4_K_M-GGUF
llama-cpp
7 downloads • 0 likes

Qwen3-0.6B-Q8-GGUF
7 downloads • 0 likes

MN-Prismatic-12b-Q6_K-GGUF
llama-cpp
6 downloads • 0 likes

Not-So-Small-Alpaca-24B

A language model released under the Apache 2.0 license.

license:apache-2.0
5 downloads • 0 likes

kanana-1.5-15.7b-a3b-instruct-Q8_0-GGUF
llama-cpp
5 downloads • 0 likes

Evil-Alpaca-3B-L3.2

This model is based on the SaisExperiments/Big-Alpaca-Uncensored dataset and uses the transformers library.

llama
4 downloads • 10 likes

Voxtral-3B-But-4B-Text-Only
license:apache-2.0
4 downloads • 2 likes

QwOwO-7B-V1

Licensed under Apache 2.0 and based on the SaisExperiments/OwO-Data-Alpaca-10K dataset.

license:apache-2.0
4 downloads • 1 like

Nemo-Unslop-2-IQ4_XS
4 downloads • 0 likes

MN-Prismatic-12b-Q5_K_M-GGUF
llama-cpp
4 downloads • 0 likes

OwOllama-V1-RP-Exp-Q6_K-GGUF
llama-cpp
4 downloads • 0 likes

Mistral-Small-Sisyphus-24b-2503-Q6_K-GGUF
llama-cpp
4 downloads • 0 likes

Gemma-2-2B-Opus-Instruct

Gemma 2 2B tuned for instruction-following on Opus data, including kalomaze/Opus_Instruct_25k.

3 downloads • 2 likes

RightSheep-Llama3.2-3B

License: llama3.2. Base model: TroyDoesAI/BlackSheep-Llama3.2-3B.

llama
3 downloads • 1 like

L3.1-8B-Pippa-OwO-RP-Q6_K-GGUF
llama-cpp
3 downloads • 0 likes

Phi-4-Mini-OwOified
license:mit
2 downloads • 3 likes

Gemma-2-2B-Stheno-Filtered

License: gemma. Dataset: anthracite-org/stheno-filtered-v1.1.

2 downloads • 1 like

ProbSucks-4B-ARM-GGUF
license:apache-2.0
2 downloads • 0 likes

Evil-Alpaca-Right-Lean-L3.2-3B-Q6_K-GGUF

This model was converted to GGUF format from `SaisExperiments/Evil-Alpaca-Right-Lean-L3.2-3B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; the llama.cpp setup steps are the same as for the other GGUF conversions in this list.

llama-cpp
2 downloads • 0 likes

Wock-n-wOwO-3B-A800M-Q4_K_M-GGUF
llama-cpp
2 downloads • 0 likes

Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-Q8_0-GGUF
llama-cpp
2 downloads • 0 likes

ProbSucks-4B-Q8_0-GGUF
llama-cpp
1 download • 1 like

OwOllama-v0.1
llama
1 download • 1 like

GemmOwO-2B
1 download • 1 like

ms-idk-v13
1 download • 0 likes

Evil-Alpaca-Right-Lean-L3.2-3B
llama
1 download • 0 likes

Not-So-Small-Alpaca-24B-Q6_K-GGUF

This model was converted to GGUF format from `SaisExperiments/Not-So-Small-Alpaca-24B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; the llama.cpp setup steps are the same as for the other GGUF conversions in this list.

llama-cpp
1 download • 0 likes

Wock-n-wOwO-3B-A800M
license:apache-2.0
1 download • 0 likes

QwOwO-7B-V1-Evalution
license:apache-2.0
1 download • 0 likes

Pawdistic-Fur-Mittens-V0.1-24B-Q6_K-GGUF
llama-cpp
1 download • 0 likes

Mistral-Small-24b-Sertraline-0304-Q6_K-GGUF
llama-cpp
1 download • 0 likes

QwOwO-1.5B
license:apache-2.0
0 downloads • 1 like

Q25-1.5B-VeoLu-OwO-fied
license:apache-2.0
0 downloads • 1 like

OwOllama-V1.0-3B
llama
0 downloads • 1 like

L3.1-8B-Pippa-OwO-RP

A completely experimental model testing whether it is possible to get a model to roleplay any character while maintaining uwu/owo speak :3

llama
0 downloads • 1 like