DoppelReflEx
Qwen3-14B-Dawnwhisper
CirtusMandarin-14B-test2
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1 nbeerbower/Vitus-Qwen3-14B The following YAML configuration was used to produce this model:
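The actual configuration is not reproduced in this card. Purely as an illustration, a minimal mergekit SLERP config for these two models might look like the sketch below; the `t` value and `dtype` are assumptions, not the values actually used:

```yaml
# Illustrative sketch only - NOT the configuration used for this model.
models:
  - model: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1
  - model: nbeerbower/Vitus-Qwen3-14B
merge_method: slerp
base_model: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1
parameters:
  t: 0.5  # assumed uniform interpolation factor
dtype: bfloat16
```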
MiniusLight-24B-v1.01
CirtusMandarin-14B-test2-Q4_K_M-GGUF
DoppelReflEx/CirtusMandarin-14B-test2-Q4KM-GGUF This model was converted to GGUF format from `DoppelReflEx/CirtusMandarin-14B-test2` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
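The steps above can be sketched as shell commands. The repo name follows this card; the GGUF file name and the prompt are illustrative assumptions:

```shell
# Step 1: install llama.cpp via brew (macOS and Linux)
brew install llama.cpp

# Or build from source with the flags mentioned above;
# LLAMA_CURL=1 enables fetching models over HTTP,
# add LLAMA_CUDA=1 for NVIDIA GPUs on Linux.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make

# Run inference straight from the Hugging Face repo
# (the --hf-file name below is an assumed example)
llama-cli --hf-repo DoppelReflEx/CirtusMandarin-14B-test2-Q4_K_M-GGUF \
  --hf-file cirtusmandarin-14b-test2-q4_k_m.gguf \
  -p "Hello"
```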
MiniusLight-24B-v3
MiniusLight-24B-v3 12B - 24B-v1 - 24B-v1.01 - 24B-v2 - 24B-v2.1 - 24B-v3 Maybe this is the last 24B Mistral model of this series. I'm tired (laugh). Thanks to the two base models, this model achieves very good style and consistency in long context. This was the 30th test, by the way, which means 29 models failed before I found and created this one. Best model of the series (for me). :) Chat Template? Mistral V7 - Tekken.
ChatML also works well, but Mistral V7 - Tekken is recommended.

Merge Method

```yaml
models:
  - model: TheDrummer/Cydonia-24B-v4.1
  - model: Delta-Vector/Rei-24B-KTO
merge_method: slerp
base_model: TheDrummer/Cydonia-24B-v4.1
parameters:
  t: [0.1, 0.2, 0.3, 0.5, 0.8, 0.5, 0.3, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base
```
LilithCore-v1-12B-Q4_K_S-GGUF
MN-12B-WolFrame-Q6_K-GGUF
DoppelReflEx/MN-12B-Mimicore-WhiteSnake-v2-Experiment-4-Q6K-GGUF This model was converted to GGUF format from `DoppelReflEx/MN-12B-Mimicore-WhiteSnake-v2-Experiment-4` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp as described above.
DansPreConfig-Q4_K_S-GGUF
lilithcore-v0.1-test-Q6_K-GGUF
lilithcore-v0.1-test-unrescale-Q6_K-GGUF
moe-test-Q4_K_M-GGUF
DoppelReflEx/moe-test-Q4KM-GGUF This model was converted to GGUF format from `DoppelReflEx/moe-test` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp as described above.
LilithCore-v1-12B
Next-gen version of Mimicore: a balance of roleplay performance, intelligence, and model size. I like this model; it reaches nearly 80% of my 24B MiniusLight v2.1. Template: Although the Mistral Tekken template is smarter, I recommend using the ChatML format for roleplay. If you don't care much about the model's intelligence, ChatML is better in some cases, with more creativity. Mistral Tekken gives a smarter model, so giving it a try sometimes is not a bad idea.
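For reference, the ChatML format recommended here wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch (the message contents are illustrative, and in practice a frontend or `apply_chat_template` would build this for you):

```python
# Build a ChatML-formatted prompt from (role, content) pairs.
def to_chatml(messages):
    parts = []
    for role, content in messages:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    # Open an assistant turn so the model knows to respond next.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    ("system", "You are a roleplay assistant."),
    ("user", "Describe the tavern."),
])
print(prompt)
```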
moe-test-Q8_0-GGUF
DoppelReflEx/moe-test-Q80-GGUF This model was converted to GGUF format from `DoppelReflEx/moe-test` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp as described above.
CirtusMandarin-14B-test-Q4_K_M-GGUF
DoppelReflEx/CirtusMandarin-14B-test-Q4KM-GGUF This model was converted to GGUF format from `DoppelReflEx/CirtusMandarin-14B-test` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp as described above.
QWQ-32B-Dawnwhisper-QWQTokenizer
MiniusLight-24B-v3-test-Q4_K_S-GGUF
DoppelReflEx/MiniusLight-24B-v3-test-Q4KS-GGUF This model was converted to GGUF format from `DoppelReflEx/MiniusLight-24B-v3-test` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp as described above.
MN-12B-WolFrame
License: CC BY-NC-4.0. Base model: crestf411/MN-Slush.
MiniusLight-24B-v2b-test-Q4_K_S-GGUF
L3-8B-WolfCore
Base model: NeverSleep Lumimaid v0.2 8B, cgato L3 TheSpice 8B v0.8.3.
MN-12B-Mimicore-Orochi-Q6_K-GGUF
DoppelReflEx/MN-12B-Mimicore-Orochi-Q6K-GGUF This model was converted to GGUF format from `DoppelReflEx/MN-12B-Mimicore-Orochi` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp as described above.
MN-12B-FoxFrame-Yukina
MN-12B-Unleashed-Twilight
Base model: Marinara Spaghetti Nemo Mix Unleashed 12B, Epiculous Violet Twilight v0.2.
MN-12B-WolFrame-Ver.B
A defective version of WolFrame: it sometimes confuses {{user}} and {{char}}, which caused me a lot of trouble. Why does this model have far better eval scores than the original WolFrame??? GGUF? https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-4-GGUF (MN-12B-LilithFrame-Experiment-4 was the previous name of this model). The following models were included in the merge: crestf411/MN-Slush, DoppelReflEx/MN-12B-Mimicore-WhiteSnake. The following YAML configuration was used to produce this model:
MiniusLight-24B-v2
LilithCore-v0.9-12B
L3-3B-BlackSheep-Gutenberg-Experiment-test
MiniusLight-24B-v2.1-Q4_K_S-GGUF
DansPreConfig-24B
MiniusLight-24B-v3-test
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: /content/BlackSheep-24B /content/MS3.2-Austral-Winton The following YAML configuration was used to produce this model:
MoETest-3E2A-3x3B
QWQ-32B-Dawnwhisper
MN-12B-Mimicore-Nocturne
Base model: DoppelReflEx MN-12B Mimicore WhiteSnake, LatitudeGames Wayfarer 12B.
MiniusLight-24B-v2.1
MiniusLight-24B-v2.1 12B - 24B-v1 - 24B-v1.01 - 24B-v2 - 24B-v2.1 A merge of the most uncensored model, TroyDoesAI/BlackSheep-24B, and the recipe of MiniusLight-24B: TheDrummer/Cydonia-24B-v2 and PocketDoc/Dans-PersonalityEngine-V1.2.0-24b. Another version of v2, but far better than it. Vivid writing style, and it talks back to me; sometimes it's hard to control. (Maybe just because of my character card.) Best model of the series (for me). :) PS: Highest NatInt for a 24B model on the UGI leaderboard (1st May 2025). GGUF (Thanks so much to mradermacher and his team (nicoboss too)): Static - iMatrix. Chat Template? ChatML, of course! Mistral V7 if you want the model to be smarter.
Merge Method

```yaml
models:
  - model: TroyDoesAI/BlackSheep-24B
    parameters:
      density: 0.9
      weight: 1
  - model: TheDrummer/Cydonia-24B-v2
    parameters:
      density: 0.6
      weight: 0.8
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.8
      weight: 0.6
merge_method: dare_ties
base_model: TroyDoesAI/BlackSheep-24B
tokenizer_source: base
parameters:
  rescale: true
dtype: bfloat16
```
MN-12B-Mimicore-GreenSnake
This model is based on PocketDoc's Dans Personality Engine V1.1.0 and is licensed under CC BY-NC 4.0.
MiniusLight-24B
Base model includes The Drummer Cydonia 24B v2 and PocketDoc Dans Personality Engine V1.2.0 24B.
MN-12B-Kakigori
License: cc-by-nc-4.0. Base model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest.
MN-12B-Mimicore-WhiteSnake-Q6_K-GGUF
MN-12B-FoxFrame-Miyuri
L3-8B-R1-WolfCore
Base model: TheDrummer Llama 3SOME 8B v2, cgato L3 TheSpice 8B v0.8.3.
MN-12B-Mimicore-Orochi
License: CC BY-NC 4.0. Base model: DoppelReflEx/MN-12B-Mimicore-GreenSnake.
MN-12B-Mimicore-WhiteSnake-Q4_K_M-GGUF
Mimicore-GreenSnake-22B
Mimicore-WhiteSnake-22B
L3-8B-R1-WolfCore-V1.5-test
Base model: Sao10K L3 8B Lunaris v1, SicariusSicariiStuff Wingless Imp 8B.
L3-3B-BlackSheep-Gutenberg-Q4_K_M-GGUF
QWQ-32B-ForeignFlow-TokenizerTest-Experiment-Q3_K_S-GGUF
MN-12B-Mimicore-WhiteSnake
License: cc-by-nc-4.0. Base model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest.
MN-12B-FoxFrame-Shinori
MN-12B-Kakigori-Q6_K-GGUF
DoppelReflEx/MN-12B-Kakigori-Q6K-GGUF This model was converted to GGUF format from `DoppelReflEx/MN-12B-Kakigori` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp as described above.