DoppelReflEx

49 models

Qwen3-14B-Dawnwhisper

license:cc-by-nc-4.0
17
4

CirtusMandarin-14B-test2

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1, nbeerbower/Vitus-Qwen3-14B. The following YAML configuration was used to produce this model:

15
0

MiniusLight-24B-v1.01

license:cc-by-nc-4.0
14
4

CirtusMandarin-14B-test2-Q4_K_M-GGUF

DoppelReflEx/CirtusMandarin-14B-test2-Q4_K_M-GGUF
This model was converted to GGUF format from `DoppelReflEx/CirtusMandarin-14B-test2` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp:
Step 1: Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
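The steps above can be sketched as shell commands. This is a sketch under assumptions: the `--hf-file` name follows the usual GGUF-my-repo lowercase naming scheme and is not confirmed by this card.

```shell
# Step 1: install llama.cpp through brew (macOS and Linux)
brew install llama.cpp

# The CLI can pull the GGUF straight from the Hub.
# NOTE: the --hf-file name below is an assumption based on the usual
# GGUF-my-repo naming scheme; check the repo's file list first.
llama-cli --hf-repo DoppelReflEx/CirtusMandarin-14B-test2-Q4_K_M-GGUF \
  --hf-file cirtusmandarin-14b-test2-q4_k_m.gguf \
  -p "Write a short greeting."

# Step 2 (building from source instead): clone, then build with CURL
# support plus hardware-specific flags (e.g. LLAMA_CUDA=1 on Linux/Nvidia).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make
```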

llama-cpp
14
0

MiniusLight-24B-v3

MiniusLight-24B-v3
12B - 24B-v1 - 24B-v1.01 - 24B-v2 - 24B-v2.1 - 24B-v3
This may be the last 24B Mistral model of this series. I'm tired (laugh). Thanks to the two base models, this model achieves very good style and consistency in long context. This was the 30th test, by the way, meaning 29 failed attempts went into finding and creating this model. Best model of the series (for me). :)
Chat Template? Mistral V7 - Tekken.
ChatML also works, but Mistral V7 - Tekken is recommended.
Merge Method:
models:
  - model: TheDrummer/Cydonia-24B-v4.1
  - model: Delta-Vector/Rei-24B-KTO
merge_method: slerp
base_model: TheDrummer/Cydonia-24B-v4.1
parameters:
  t: [0.1, 0.2, 0.3, 0.5, 0.8, 0.5, 0.3, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base
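A SLERP merge interpolates each pair of weight tensors along the arc between them, with `t` varying per layer group as in the config above. A minimal plain-Python sketch of the underlying operation — illustrative only, not the mergekit implementation:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))          # guard against rounding
    omega = math.acos(dot)                  # angle between the vectors
    if omega < eps:                         # nearly parallel: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# per-layer-group t schedule, as in the config above
t_schedule = [0.1, 0.2, 0.3, 0.5, 0.8, 0.5, 0.3, 0.2, 0.1]
```

At t = 0 the result is the first model's tensor, at t = 1 the second's; mid-stack layers (t = 0.8 here) lean most heavily toward the second model.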

11
4

LilithCore-v1-12B-Q4_K_S-GGUF

llama-cpp
10
0

MN-12B-WolFrame-Q6_K-GGUF

DoppelReflEx/MN-12B-Mimicore-WhiteSnake-v2-Experiment-4-Q6_K-GGUF
This model was converted to GGUF format from `DoppelReflEx/MN-12B-Mimicore-WhiteSnake-v2-Experiment-4` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp follows the same steps as the other GGUF conversions listed here.

llama-cpp
9
0

DansPreConfig-Q4_K_S-GGUF

llama-cpp
9
0

lilithcore-v0.1-test-Q6_K-GGUF

llama-cpp
9
0

lilithcore-v0.1-test-unrescale-Q6_K-GGUF

llama-cpp
9
0

moe-test-Q4_K_M-GGUF

DoppelReflEx/moe-test-Q4_K_M-GGUF
This model was converted to GGUF format from `DoppelReflEx/moe-test` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp follows the same steps as the other GGUF conversions listed here.

llama-cpp
9
0

LilithCore-v1-12B

Next-gen version of Mimicore. A balance of roleplay performance, intelligence, and model size. I like this model; it reaches nearly 80% of my 24B MiniusLight v2.1. Template: although the Mistral Tekken template is smarter, I recommend the ChatML format for roleplay. If you don't care much about the model's intelligence, ChatML is better in some cases and more creative. Mistral Tekken makes the model smarter, so giving it a try sometimes is not a bad idea.
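For reference, a minimal ChatML-style prompt builder. The tag layout is the standard ChatML convention; whether your frontend inserts these tags for you depends on its template settings, and the helper below is purely illustrative.

```python
def chatml_prompt(system: str, user: str) -> str:
    """Wrap system and user turns in ChatML tags and open the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful roleplay partner.", "Hello!")
```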

license:cc-by-nc-4.0
8
5

moe-test-Q8_0-GGUF

DoppelReflEx/moe-test-Q8_0-GGUF
This model was converted to GGUF format from `DoppelReflEx/moe-test` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp follows the same steps as the other GGUF conversions listed here.

llama-cpp
8
0

CirtusMandarin-14B-test-Q4_K_M-GGUF

DoppelReflEx/CirtusMandarin-14B-test-Q4_K_M-GGUF
This model was converted to GGUF format from `DoppelReflEx/CirtusMandarin-14B-test` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp follows the same steps as the other GGUF conversions listed here.

llama-cpp
8
0

QWQ-32B-Dawnwhisper-QWQTokenizer

7
10

MiniusLight-24B-v3-test-Q4_K_S-GGUF

DoppelReflEx/MiniusLight-24B-v3-test-Q4_K_S-GGUF
This model was converted to GGUF format from `DoppelReflEx/MiniusLight-24B-v3-test` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp follows the same steps as the other GGUF conversions listed here.

llama-cpp
7
0

MN-12B-WolFrame

License: CC BY-NC-4.0. Base model: crestf411/MN-Slush.

license:cc-by-nc-4.0
6
6

MiniusLight-24B-v2b-test-Q4_K_S-GGUF

llama-cpp
5
0

L3-8B-WolfCore

Base model: NeverSleep Lumimaid v0.2 8B, cgato L3 TheSpice 8B v0.8.3.

llama
4
1

MN-12B-Mimicore-Orochi-Q6_K-GGUF

DoppelReflEx/MN-12B-Mimicore-Orochi-Q6_K-GGUF
This model was converted to GGUF format from `DoppelReflEx/MN-12B-Mimicore-Orochi` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp follows the same steps as the other GGUF conversions listed here.

llama-cpp
4
0

MN-12B-FoxFrame-Yukina

license:cc-by-nc-4.0
3
2

MN-12B-Unleashed-Twilight

Base model: Marinara Spaghetti Nemo Mix Unleashed 12B, Epiculous Violet Twilight v0.2.

3
2

MN-12B-WolFrame-Ver.B

A defective version of WolFrame: it sometimes confuses {{user}} and {{char}}, which caused me a lot of trouble. Why does this model have far better eval scores than the original WolFrame? GGUF: https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-4-GGUF (MN-12B-LilithFrame-Experiment-4 was the previous name of this model). The following models were included in the merge: crestf411/MN-Slush, DoppelReflEx/MN-12B-Mimicore-WhiteSnake. The following YAML configuration was used to produce this model:

license:cc-by-nc-4.0
3
1

MiniusLight-24B-v2

license:cc-by-nc-4.0
3
1

LilithCore-v0.9-12B

3
1

L3-3B-BlackSheep-Gutenberg-Experiment-test

llama
3
0

MiniusLight-24B-v2.1-Q4_K_S-GGUF

llama-cpp
3
0

DansPreConfig-24B

license:apache-2.0
3
0

MiniusLight-24B-v3-test

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: /content/BlackSheep-24B, /content/MS3.2-Austral-Winton. The following YAML configuration was used to produce this model:

3
0

MoETest-3E2A-3x3B

base_model:NousResearch/Hermes-3-Llama-3.2-3B
3
0

QWQ-32B-Dawnwhisper

2
5

MN-12B-Mimicore-Nocturne

Base model: DoppelReflEx MN-12B Mimicore WhiteSnake, LatitudeGames Wayfarer 12B.

license:cc-by-nc-4.0
2
4

MiniusLight-24B-v2.1

MiniusLight-24B-v2.1
12B - 24B-v1 - 24B-v1.01 - 24B-v2 - 24B-v2.1
A merge of the most uncensored model, TroyDoesAI/BlackSheep-24B, with the recipe of MiniusLight-24B: TheDrummer/Cydonia-24B-v2 and PocketDoc/Dans-PersonalityEngine-V1.2.0-24b. Another version of v2, but far better than it. Vivid writing style, and it talks back to me; sometimes it is hard to control (maybe just because of my character card). Best model of the series (for me). :)
PS: Highest NatInt for a 24B model on the UGI leaderboard (1st May 2025).
GGUF (thanks so much to mradermacher and his team, and nicoboss too): Static - iMatrix
Chat Template? ChatML, of course! Mistral V7 if you want the model to be smarter.
Merge Method:
models:
  - model: TroyDoesAI/BlackSheep-24B
    parameters:
      density: 0.9
      weight: 1
  - model: TheDrummer/Cydonia-24B-v2
    parameters:
      density: 0.6
      weight: 0.8
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.8
      weight: 0.6
merge_method: dare_ties
base_model: TroyDoesAI/BlackSheep-24B
tokenizer_source: base
parameters:
  rescale: true
dtype: bfloat16
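The dare_ties method sparsifies each model's delta from the base before merging: every delta weight is randomly dropped with probability 1 - density, and the survivors are rescaled by 1/density (what `rescale: true` enables) so the expected contribution is preserved. A toy illustration of that dropping step — not the mergekit implementation:

```python
import random

def dare_prune(delta, density, seed=0):
    """DARE pruning sketch: drop each delta weight with prob (1 - density);
    rescale survivors by 1/density so the expected value is unchanged."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

# e.g. density 0.9 as for BlackSheep-24B in the config above
pruned = dare_prune([0.5] * 1000, density=0.9)
```

Roughly 10% of the entries come back as 0.0 and the rest as 0.5/0.9, so the sum of the pruned deltas stays close to the sum of the originals on average.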

2
4

MN-12B-Mimicore-GreenSnake

This model is based on PocketDoc's Dans Personality Engine V1.1.0 and is licensed under CC BY-NC 4.0.

license:cc-by-nc-4.0
2
3

MiniusLight-24B

Base model includes The Drummer Cydonia 24B v2 and PocketDoc Dans Personality Engine V1.2.0 24B.

license:cc-by-nc-4.0
2
3

MN-12B-Kakigori

License: cc-by-nc-4.0. Base model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest.

license:cc-by-nc-4.0
2
2

MN-12B-Mimicore-WhiteSnake-Q6_K-GGUF

llama-cpp
2
0

MN-12B-FoxFrame-Miyuri

license:cc-by-nc-4.0
1
4

L3-8B-R1-WolfCore

Base model: TheDrummer Llama 3SOME 8B v2, cgato L3 TheSpice 8B v0.8.3.

llama
1
3

MN-12B-Mimicore-Orochi

License: CC BY-NC 4.0. Base model: DoppelReflEx/MN-12B-Mimicore-GreenSnake.

license:cc-by-nc-4.0
1
2

MN-12B-Mimicore-WhiteSnake-Q4_K_M-GGUF

llama-cpp
1
0

Mimicore-GreenSnake-22B

license:cc-by-nc-4.0
1
0

Mimicore-WhiteSnake-22B

license:cc-by-nc-4.0
1
0

L3-8B-R1-WolfCore-V1.5-test

Base model: Sao10K L3 8B Lunaris v1, SicariusSicariiStuff Wingless Imp 8B.

llama
1
0

L3-3B-BlackSheep-Gutenberg-Q4_K_M-GGUF

llama-cpp
1
0

QWQ-32B-ForeignFlow-TokenizerTest-Experiment-Q3_K_S-GGUF

llama-cpp
1
0

MN-12B-Mimicore-WhiteSnake

License: cc-by-nc-4.0. Base model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest.

license:cc-by-nc-4.0
0
4

MN-12B-FoxFrame-Shinori

license:cc-by-nc-4.0
0
2

MN-12B-Kakigori-Q6_K-GGUF

DoppelReflEx/MN-12B-Kakigori-Q6_K-GGUF
This model was converted to GGUF format from `DoppelReflEx/MN-12B-Kakigori` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp follows the same steps as the other GGUF conversions listed here.

llama-cpp
0
1