Ransss
L3-8B-Stheno-v3.2-Q8_0-GGUF
L3-Super-Nova-RP-8B-Q8_0-GGUF
WeirdCompound-v1.7-24b-Q8_0-GGUF
L3-Umbral-Mind-RP-v0.3-8B-Q8_0-GGUF
Qwen3-30B-A3B-ArliAI-RpR-v4-Fast-Q6_K-GGUF
Foredoomed-9B-Q8_0-GGUF
llama-3-Nephilim-v2-8B-Q8_0-GGUF
Magnolia-v3b-12B-Q8_0-GGUF
DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q8_0-GGUF
MN-12B-Lyra-v4-Q8_0-GGUF
writing-roleplay-20k-context-nemo-12b-v1.0-Q8_0-GGUF
NS-12b-DarkSlushCap-Q8_0-GGUF
Hathor_Stable-v0.2-L3-8B-Q8_0-GGUF
Shadow-Crystal-12B-Q8_0-GGUF
Ransss/Shadow-Crystal-12B-Q8_0-GGUF
This model was converted to GGUF format from `Vortex5/Shadow-Crystal-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
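A minimal invocation sketch for this quant using the brew-installed binaries; the `--hf-file` name below is assumed from the usual GGUF-my-repo lower-case naming and should be checked against the repo's file list.

```bash
# Install llama.cpp through Homebrew (macOS and Linux)
brew install llama.cpp

# Run the CLI straight against the Hugging Face repo
# (the .gguf filename is an assumption; verify it on the repo page)
llama-cli --hf-repo Ransss/Shadow-Crystal-12B-Q8_0-GGUF \
  --hf-file shadow-crystal-12b-q8_0.gguf \
  -p "The meaning to life and the universe is"
```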
llama-3-Stheno-Mahou-8B-Q8_0-GGUF
Pantheon-RP-1.0-8b-Llama-3-Q8_0-GGUF
Ransss/Pantheon-RP-1.0-8b-Llama-3-Q8_0-GGUF
This model was converted to GGUF format from `Gryphe/Pantheon-RP-1.0-8b-Llama-3` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp. Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
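This repo can also be served over HTTP with llama-server; a sketch, again assuming the default lower-cased `.gguf` filename.

```bash
# Start an OpenAI-compatible server (default port 8080)
# The --hf-file name is an assumption; confirm it in the repo's file listing
llama-server --hf-repo Ransss/Pantheon-RP-1.0-8b-Llama-3-Q8_0-GGUF \
  --hf-file pantheon-rp-1.0-8b-llama-3-q8_0.gguf \
  -c 2048
```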
L3-8B-Tamamo-v1-Q8_0-GGUF
Deepseek-R1-Distill-NSFW-RP-vRedux-Q8_0-GGUF
DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small-Q8_0-GGUF
L3-SthenoMaidBlackroot-15B-Q6_K-GGUF
WeirdCompound-v1.5-24b-Q6_K-GGUF
Ransss/WeirdCompound-v1.5-24b-Q6_K-GGUF
This model was converted to GGUF format from `FlareRebellion/WeirdCompound-v1.5-24b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
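A build-from-source sketch matching the steps above; `LLAMA_CURL=1 make` follows the wording on this card, though newer llama.cpp versions build with CMake instead of the Makefile, and the `--hf-file` name is assumed.

```bash
# Step 1: clone llama.cpp from GitHub
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Step 2: build with CURL support so the binaries can fetch models from Hugging Face
# (add hardware flags as needed, e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux)
LLAMA_CURL=1 make

# Step 3: run inference through the main binary
# (the .gguf filename is an assumption; verify it on the repo page)
./llama-cli --hf-repo Ransss/WeirdCompound-v1.5-24b-Q6_K-GGUF \
  --hf-file weirdcompound-v1.5-24b-q6_k.gguf \
  -p "The meaning to life and the universe is"
```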
SOVLish-Devil-8B-L3-Q8_0-GGUF
llama-3-Nephilim-v1-8B-Q8_0-GGUF
llama3-8B-DarkIdol-2.0-Uncensored-Q8_0-GGUF
NS-12b-DarkSluchCapV3-Q8_0-GGUF
Ransss/NS-12b-DarkSluchCapV3-Q8_0-GGUF
This model was converted to GGUF format from `pot99rta/NS-12b-DarkSluchCapV3` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
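A short CLI sketch for this 12B quant; the filename is assumed from the usual naming scheme.

```bash
brew install llama.cpp

# The --hf-file name is an assumption; confirm it in the repo's file listing
llama-cli --hf-repo Ransss/NS-12b-DarkSluchCapV3-Q8_0-GGUF \
  --hf-file ns-12b-darksluchcapv3-q8_0.gguf \
  -p "Write the opening scene of a story."
```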
LLama-3-8b-Lexi-Iranian-Stories-PersianTherapist-Lumimaid-DARE_TIES-Q8_0-GGUF
L3-TheSpice-8b-v0.8.3-Q8_0-GGUF
Pantheon-RP-1.5-12b-Nemo-Q8_0-GGUF
L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-Q8_0-GGUF
llama-3-Daredevil-Mahou-8B-Q8_0-GGUF
Halu-8B-Llama3-Blackroot-Q8_0-GGUF
Neural-SOVLish-Devil-8B-L3-Q8_0-GGUF
Xwin-MLewd-13B-V0.2-Q8_0-GGUF
Domain-Fusion-L3-8B-Q8_0-GGUF
L3-15B-MythicalMaid-t0.0001-Q8_0-GGUF
L3-15B-MythicalMaid-t0.0001-Q6_K-GGUF
L3-15B-EtherealMaid-t0.0001-Q6_K-GGUF
mini-magnum-12b-v1.1-Q8_0-GGUF
L3-TheGreenLion-8b-SFT-v0.1.2-Q8_0-GGUF
MN-12B-Lyra-v1-Q8_0-GGUF
YetAnotherMerge-v0.45-Q8_0-GGUF
Lyralin-12B-v1-Q8_0-GGUF
Lyra_Gutenbergs-Twilight_Magnum-12B-Q8_0-GGUF
Ultracore-Instruct-12BV2-Q8_0-GGUF
Ransss/Ultracore-Instruct-12BV2-Q8_0-GGUF
This model was converted to GGUF format from `pot99rta/Ultracore-Instruct-12BV2` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
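A server-mode sketch for this quant; the `.gguf` filename is assumed, and the request below targets llama-server's OpenAI-compatible endpoint on its default port.

```bash
# Serve the model (filename assumed; check the repo's file list)
llama-server --hf-repo Ransss/Ultracore-Instruct-12BV2-Q8_0-GGUF \
  --hf-file ultracore-instruct-12bv2-q8_0.gguf \
  -c 2048

# Query the OpenAI-compatible chat endpoint from another shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```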
SOVL-Mega-Mash-V2-L3-8B-Q8_0-GGUF
TheSalt-RP-L3-8b-DPO-v0.3.2-e0.4.2-Q8_0-GGUF
L3-8B-Helium3-Q8_0-GGUF
Squelching-Fantasies-qw3-14B-Q8_0-GGUF
Ransss/Squelching-Fantasies-qw3-14B-Q8_0-GGUF
This model was converted to GGUF format from `Mawdistical/Squelching-Fantasies-qw3-14B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
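For larger quants like this 14B, a GPU build sketch following the flags mentioned above; `LLAMA_CUDA=1` matches this card's wording, while newer llama.cpp builds use `GGML_CUDA=1` with CMake, and the `--hf-file` name is assumed.

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# CURL for Hugging Face downloads, CUDA for Nvidia GPU offload
LLAMA_CURL=1 LLAMA_CUDA=1 make

# Offload all layers to the GPU; the .gguf filename is an assumption
./llama-cli --hf-repo Ransss/Squelching-Fantasies-qw3-14B-Q8_0-GGUF \
  --hf-file squelching-fantasies-qw3-14b-q8_0.gguf \
  -ngl 99 -p "The meaning to life and the universe is"
```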
kukuspice-7B-Q8_0-GGUF
gemma2-9B-sunfall-v0.5.2-Q8_0-GGUF
Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.3-Quick-Q8_0-GGUF
Ransss/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.3-Quick-Q8_0-GGUF
This model was converted to GGUF format from `cgato/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.3-Quick` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
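A CLI sketch for this quant; the `.gguf` filename is assumed from the repo name and should be verified on the repo page.

```bash
brew install llama.cpp

# The --hf-file name is an assumption; confirm it in the repo's file listing
llama-cli --hf-repo Ransss/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.3-Quick-Q8_0-GGUF \
  --hf-file nemo-12b-thespice-v0.9-all-v2-kto-v0.3-quick-q8_0.gguf \
  -p "Describe a rainy evening in a small town."
```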
Captain-Eris_Twilight-Mistralified-12B-Q8_0-GGUF
Ransss/Captain-Eris_Twilight-Mistralified-12B-Q8_0-GGUF
This model was converted to GGUF format from `Nitral-AI/Captain-Eris_Twilight-Mistralified-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
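An alternative sketch that downloads the quant first and runs it from a local path; the filename passed to `huggingface-cli download` is assumed and should be checked against the repo.

```bash
# Download the GGUF file into the current directory
huggingface-cli download Ransss/Captain-Eris_Twilight-Mistralified-12B-Q8_0-GGUF \
  captain-eris_twilight-mistralified-12b-q8_0.gguf --local-dir .

# Run from the local path instead of streaming from the Hub
llama-cli -m ./captain-eris_twilight-mistralified-12b-q8_0.gguf \
  -p "The meaning to life and the universe is"
```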
Mistral-7B-Erebus-v3-Q8_0-GGUF
Ninja-v1-RP-Q8_0-GGUF
Xwin-MLewd-13B-V0.2-Q6_K-GGUF
DarkSapling-7B-v2.0-Q8_0-GGUF
MN-Ephemeros-12B-Q8_0-GGUF
Fimbulvetr-11B-v2-Q8_0-GGUF
OpenCrystal-L3-15B-v2.1-Q6_K-GGUF
L3-TheSpice-8b-v0.1.3-Q8_0-GGUF
Kuro-Lotus-10.7B-Q8_0-GGUF
L3.1-Ablaze-Vulca-v0.1-8B-Q8_0-GGUF
OpenCrystal-15B-L3-v2-Q6_K-GGUF
Stellar-Odyssey-12b-v0.0-Q8_0-GGUF
Mistral-Nemo-12B-ArliAI-RPMax-v1.3-Q8_0-GGUF
Forgotten-Safeword-12B-v4.0-Q8_0-GGUF
Fimbulvetr-10.7B-v1-Q8_0-GGUF
L3.1-8b-RP-Ink-Q8_0-GGUF
Captain_Eris_Noctis-12B-v0.420-Q8_0-GGUF
MN-Mystic-Rune-12B-Q8_0-GGUF
Moonlit-Shadow-12B-Q8_0-GGUF
Ransss/Moonlit-Shadow-12B-Q8_0-GGUF
This model was converted to GGUF format from `Vortex5/Moonlit-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
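One more server-mode sketch for this quant, with the usual filename assumption.

```bash
# Start an OpenAI-compatible server; the --hf-file name is assumed
llama-server --hf-repo Ransss/Moonlit-Shadow-12B-Q8_0-GGUF \
  --hf-file moonlit-shadow-12b-q8_0.gguf \
  -c 2048
```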