Ransss

101 models

L3-8B-Stheno-v3.2-Q8_0-GGUF · llama-cpp · 73 · 1
L3-Super-Nova-RP-8B-Q8_0-GGUF · llama-cpp · 71 · 1
WeirdCompound-v1.7-24b-Q8_0-GGUF · llama-cpp · 53 · 0
L3-Umbral-Mind-RP-v0.3-8B-Q8_0-GGUF · llama-cpp · 48 · 8
Qwen3-30B-A3B-ArliAI-RpR-v4-Fast-Q6_K-GGUF · llama-cpp · 40 · 0
Foredoomed-9B-Q8_0-GGUF · llama-cpp · 38 · 0
llama-3-Nephilim-v2-8B-Q8_0-GGUF · llama-cpp · 28 · 0
Magnolia-v3b-12B-Q8_0-GGUF · llama-cpp · 23 · 0
DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q8_0-GGUF · llama3 · 22 · 2
MN-12B-Lyra-v4-Q8_0-GGUF · llama-cpp · 22 · 1
writing-roleplay-20k-context-nemo-12b-v1.0-Q8_0-GGUF · llama-cpp · 22 · 0
NS-12b-DarkSlushCap-Q8_0-GGUF · llama-cpp · 19 · 0
Hathor_Stable-v0.2-L3-8B-Q8_0-GGUF · llama-cpp · 18 · 0
Shadow-Crystal-12B-Q8_0-GGUF · llama-cpp · 18 · 0

Ransss/Shadow-Crystal-12B-Q8_0-GGUF was converted to GGUF format from `Vortex5/Shadow-Crystal-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repository, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
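The install-and-run steps described in the card above can be sketched as shell commands. This is a minimal sketch, not the card's verbatim instructions; in particular the `--hf-file` name below is an assumed lowercase filename, so check the repo's file list for the actual GGUF file:

```shell
# Route 1: install llama.cpp via Homebrew (works on Mac and Linux)
brew install llama.cpp

# Route 2: build from source with CURL support, plus hardware flags
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_CURL=1 LLAMA_CUDA=1   # drop LLAMA_CUDA=1 without an Nvidia GPU

# Run the quantized checkpoint; --hf-file is an assumed filename --
# verify it against the repo's "Files" tab before running.
llama-cli --hf-repo Ransss/Shadow-Crystal-12B-Q8_0-GGUF \
          --hf-file shadow-crystal-12b-q8_0.gguf \
          -p "Write a short scene-setting paragraph."
```

The same pattern applies to every Q8_0/Q6_K repo in this listing; only the `--hf-repo` and `--hf-file` values change.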

llama-3-Stheno-Mahou-8B-Q8_0-GGUF · llama-cpp · 17 · 2

Pantheon-RP-1.0-8b-Llama-3-Q8_0-GGUF · Llama-3 · 17 · 2

Converted to GGUF format from `Gryphe/Pantheon-RP-1.0-8b-Llama-3` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for details.

L3-8B-Tamamo-v1-Q8_0-GGUF · llama-cpp · 17 · 1
Deepseek-R1-Distill-NSFW-RP-vRedux-Q8_0-GGUF · llama-cpp · 16 · 0
DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small-Q8_0-GGUF · llama-cpp · 15 · 0
L3-SthenoMaidBlackroot-15B-Q6_K-GGUF · llama-cpp · 14 · 1

WeirdCompound-v1.5-24b-Q6_K-GGUF · llama-cpp · 14 · 0

Converted to GGUF format from `FlareRebellion/WeirdCompound-v1.5-24b` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for details.

SOVLish-Devil-8B-L3-Q8_0-GGUF · llama-cpp · 13 · 0
llama-3-Nephilim-v1-8B-Q8_0-GGUF · llama-cpp · 13 · 0
llama3-8B-DarkIdol-2.0-Uncensored-Q8_0-GGUF · llama3 · 12 · 3

NS-12b-DarkSluchCapV3-Q8_0-GGUF · llama-cpp · 11 · 0

Converted to GGUF format from `pot99rta/NS-12b-DarkSluchCapV3` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for details.

LLama-3-8b-Lexi-Iranian-Stories-PersianTherapist-Lumimaid-DARE_TIES-Q8_0-GGUF · llama-cpp · 10 · 0
L3-TheSpice-8b-v0.8.3-Q8_0-GGUF · llama-cpp · 9 · 0
Pantheon-RP-1.5-12b-Nemo-Q8_0-GGUF · llama-cpp · 9 · 0
L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-Q8_0-GGUF · llama-cpp · 8 · 1
llama-3-Daredevil-Mahou-8B-Q8_0-GGUF · llama-cpp · 8 · 0
Halu-8B-Llama3-Blackroot-Q8_0-GGUF · llama-cpp · 8 · 0
Neural-SOVLish-Devil-8B-L3-Q8_0-GGUF · llama-cpp · 8 · 0
Xwin-MLewd-13B-V0.2-Q8_0-GGUF · llama-cpp · 8 · 0
Domain-Fusion-L3-8B-Q8_0-GGUF · llama-cpp · 8 · 0
L3-15B-MythicalMaid-t0.0001-Q8_0-GGUF · llama · 8 · 0
L3-15B-MythicalMaid-t0.0001-Q6_K-GGUF · llama · 8 · 0
L3-15B-EtherealMaid-t0.0001-Q6_K-GGUF · llama · 8 · 0
mini-magnum-12b-v1.1-Q8_0-GGUF · llama-cpp · 8 · 0
L3-TheGreenLion-8b-SFT-v0.1.2-Q8_0-GGUF · llama-cpp · 8 · 0
MN-12B-Lyra-v1-Q8_0-GGUF · llama-cpp · 8 · 0
YetAnotherMerge-v0.45-Q8_0-GGUF · llama-cpp · 8 · 0
Lyralin-12B-v1-Q8_0-GGUF · llama-cpp · 8 · 0
Lyra_Gutenbergs-Twilight_Magnum-12B-Q8_0-GGUF · llama-cpp · 8 · 0

Ultracore-Instruct-12BV2-Q8_0-GGUF · llama-cpp · 8 · 0

Converted to GGUF format from `pot99rta/Ultracore-Instruct-12BV2` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for details.

SOVL-Mega-Mash-V2-L3-8B-Q8_0-GGUF · llama-cpp · 7 · 0
TheSalt-RP-L3-8b-DPO-v0.3.2-e0.4.2-Q8_0-GGUF · llama-cpp · 7 · 0
L3-8B-Helium3-Q8_0-GGUF · llama-cpp · 7 · 0

Squelching-Fantasies-qw3-14B-Q8_0-GGUF · llama-cpp · 7 · 0

Converted to GGUF format from `Mawdistical/Squelching-Fantasies-qw3-14B` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for details.

kukuspice-7B-Q8_0-GGUF · llama-cpp · 6 · 0
gemma2-9B-sunfall-v0.5.2-Q8_0-GGUF · llama-cpp · 6 · 0

Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.3-Quick-Q8_0-GGUF · llama-cpp · 6 · 0

Converted to GGUF format from `cgato/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.3-Quick` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for details.

Captain-Eris_Twilight-Mistralified-12B-Q8_0-GGUF · llama-cpp · 6 · 0

Converted to GGUF format from `Nitral-AI/Captain-Eris_Twilight-Mistralified-12B` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for details.

Mistral-7B-Erebus-v3-Q8_0-GGUF · llama-cpp · 5 · 1
Ninja-v1-RP-Q8_0-GGUF · llama-cpp · 5 · 0
Xwin-MLewd-13B-V0.2-Q6_K-GGUF · llama-cpp · 5 · 0
DarkSapling-7B-v2.0-Q8_0-GGUF · llama-cpp · 5 · 0
MN-Ephemeros-12B-Q8_0-GGUF · llama-cpp · 5 · 0

Fimbulvetr-11B-v2-Q8_0-GGUF · llama-cpp · 4 · 1
OpenCrystal-L3-15B-v2.1-Q6_K-GGUF · llama-cpp · 4 · 1
L3-TheSpice-8b-v0.1.3-Q8_0-GGUF · llama-cpp · 4 · 0
Kuro-Lotus-10.7B-Q8_0-GGUF · llama-cpp · 4 · 0
L3.1-Ablaze-Vulca-v0.1-8B-Q8_0-GGUF · llama-cpp · 4 · 0
OpenCrystal-15B-L3-v2-Q6_K-GGUF · llama-cpp · 4 · 0
Stellar-Odyssey-12b-v0.0-Q8_0-GGUF · llama-cpp · 4 · 0
Mistral-Nemo-12B-ArliAI-RPMax-v1.3-Q8_0-GGUF · llama-cpp · 4 · 0
Forgotten-Safeword-12B-v4.0-Q8_0-GGUF · llama-cpp · 4 · 0
Fimbulvetr-10.7B-v1-Q8_0-GGUF · llama-cpp · 3 · 1
L3.1-8b-RP-Ink-Q8_0-GGUF · llama-cpp · 3 · 1
Captain_Eris_Noctis-12B-v0.420-Q8_0-GGUF · llama-cpp · 3 · 0
MN-Mystic-Rune-12B-Q8_0-GGUF · llama-cpp · 3 · 0

Moonlit-Shadow-12B-Q8_0-GGUF · llama-cpp · 3 · 0

Converted to GGUF format from `Vortex5/Moonlit-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for details.

Mystic-Rune-v2-12B-Q8_0-GGUF · llama-cpp · 3 · 0
Noromaid-13B-0.4-DPO-Q8_0-GGUF · llama-cpp · 2 · 1
Unholy-v1-12L-13B-Q6_K-GGUF · llama-cpp · 2 · 1
Emerald-13B-Q6_K-GGUF · llama-cpp · 2 · 1
PsyfighterTwo-ErebusThree-SlerpThree-Q8_0-GGUF · llama-cpp · 2 · 1
MysticGem-v1.3-L2-13B-Q6_K-GGUF · llama-cpp · 2 · 1
Augmental-ReMM-13b-Merged-Q6_K-GGUF · llama-cpp · 2 · 1
Nemo-12b-Humanize-KTO-Experimental-Latest-Q8_0-GGUF · llama-cpp · 2 · 1
flammen24-mistral-7B-Q8_0-GGUF · llama-cpp · 2 · 0
flammen24X-mistral-7B-Q8_0-GGUF · llama-cpp · 2 · 0
Quantum-Citrus-9B-Q8_0-GGUF · llama-cpp · 2 · 0
Silver-Sun-11B-Q8_0-GGUF · llama · 2 · 0
Silver-Sun-v2-11B-Q8_0-GGUF · llama · 2 · 0
MLewdBoros-LRPSGPT-2Char-13B-Q6_K-GGUF · llama-cpp · 2 · 0
Amethyst-13B-Q6_K-GGUF · llama-cpp · 2 · 0
Eclectic-Maid-10B-v3-Q8_0-GGUF · llama-cpp · 2 · 0
Buttocks-7B-v1.1-Q8_0-GGUF · llama-cpp · 2 · 0
Echidna-13b-v0.3-Q6_K-GGUF · llama-cpp · 2 · 0
EstopianMaid-13B-Q6_K-GGUF · llama-cpp · 2 · 0
YiffyEstopianMaid-13B-Q6_K-GGUF · llama-cpp · 2 · 0
Wayfarer_Eris_Noctis-Mistralified-12B-Q8_0-GGUF · llama-cpp · 2 · 0
Forgotten-Safeword-24B-v4.0-Q6_K-GGUF · llama-cpp · 2 · 0
Nemo-12b-Humanize-KTO-Experimental-Latest-B-Q8_0-GGUF · llama-cpp · 1 · 1
MLewdBoros-L2-13B-Q6_K-GGUF · llama-cpp · 1 · 0
Utopia-13B-Q6_K-GGUF · llama-cpp · 1 · 0
Ice0.17-03.10-RP-Q8_0-GGUF · llama-cpp · 1 · 0
Quiller-AntiSlop-12B-v2-Q8_0-GGUF · llama-cpp · 1 · 0
Captain-Eris-BMO_Violent-GRPO-v0.420-Q8_0-GGUF · llama-cpp · 1 · 0
Forgotten-Safeword-12B-3.6-Q8_0-GGUF · llama-cpp · 1 · 0
Forgotten-Safeword-12B-3.6-Q6_K-GGUF · llama-cpp · 1 · 0