Vortex5

113 models

LunaMaid 12B

LunaMaid-12B is a multi-stage merge of pre-trained language models produced through a two-stage multi-model merge using MergeKit. Each stage fuses models with complementary linguistic and stylistic traits to create a cohesive, emotionally nuanced personality. 🩡 Stage 1 β€” SLERP merge (intermediate model `First`)
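The SLERP stage can be illustrated with a minimal NumPy sketch of spherical linear interpolation between two flattened weight tensors (an illustration of the idea, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_n = a_flat / (np.linalg.norm(a_flat) + eps)
    b_n = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

At `t=0.5` this walks halfway along the arc between the two tensors rather than averaging them linearly, which preserves their overall magnitude better.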

NaNK
β€”
283
5

Abyssal Seraph 12B

> Where the light of the divine meets the poetry of the abyss. Abyssal-Seraph-12B is a multi-stage creative merge designed for expressive storytelling, emotional depth, and lyrical dialogue. It was crafted through a layered fusion using MergeKit: 1. πŸŒ™ LunaMaid Γ— Vermilion-Sage β€” merged via NearSwap (`t=0.0008`) to unify LunaMaid’s balanced composure with Vermilion-Sage’s radiant prose. 2. πŸ•―οΈ Dark-Quill Γ— Mag-Mell-R1 β€” merged via NearSwap (`t=0.0008`) to draw forth mysticism, poetic darkness...
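NearSwap with a tiny `t` like `0.0008` keeps the base model almost intact, swapping toward the donor only where the two models already nearly agree. A rough NumPy sketch (the per-parameter weight `t / |delta|`, capped at 1, is an assumption based on public descriptions of the method, not its exact implementation):

```python
import numpy as np

def nearswap(t, base, donor):
    """NearSwap sketch: interpolation strength is inversely proportional to
    the parameter difference, so similar weights swap almost fully while
    divergent weights barely move."""
    delta = donor - base
    mag = np.abs(delta)
    # per-parameter interpolation weight, capped at a full swap (1.0)
    w = np.where(mag > 0, np.minimum(t / np.where(mag == 0, 1, mag), 1.0), 0.0)
    return base + w * delta
```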

NaNK
β€”
242
7

MN-14B-Crimson-Veil

This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge:
- SicariusSicariiStuff/Impish_Nemo_12B
- anthracite-org/magnum-v4-12b
- Vortex5/Moonlit-Shadow-12B
- crestf411/MN-Slush
The following YAML configuration was used to produce this model:

NaNK
β€”
209
3

Luminous Shadow 12B

β€œWithin the deepest shadow, the brightest light awaits.” Luminous-Shadow-12B was merged using the DELLA merge method via MergeKit, balancing ethereal creativity and reasoned coherence. It draws from the expressive nature of Shadow-Crystal, the refined structure of KansenSakura-Radiance-RP, and the stylistic artistry of Ollpheist.

🧘 Reflective dialogue β€’ πŸ–‹οΈ Creative writing β€’ πŸ’ž Character roleplay β€” blending emotion, intellect, and style into a single expressive voice.

βš™οΈ mradermacher β€” static / imatrix quantization
πŸœ› DeathGodlike β€” EXL3 quants
🌟 All original authors and contributors whose models formed the foundation for this merge
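DELLA-style merging (drop low-magnitude deltas with higher probability, rescale the survivors, then sign-elect and fuse) can be sketched roughly as follows; this is an illustrative approximation under those assumptions, not mergekit's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def della_merge(base, models, density=0.5, epsilon=0.1):
    """Illustrative DELLA-style merge: rank delta magnitudes, drop small
    deltas with higher probability (MAGPRUNE), rescale survivors by their
    keep probability, then keep only deltas agreeing with the majority sign."""
    deltas = [m - base for m in models]
    pruned = []
    for d in deltas:
        ranks = np.argsort(np.argsort(np.abs(d)))          # 0 = smallest |delta|
        keep_p = density + epsilon * (ranks / max(d.size - 1, 1) - 0.5)
        mask = rng.random(d.shape) < keep_p
        pruned.append(np.where(mask, d / np.clip(keep_p, 1e-8, None), 0.0))
    stacked = np.stack(pruned)
    elected = np.sign(stacked.sum(axis=0))                 # majority sign election
    agree = np.where(np.sign(stacked) == elected, stacked, 0.0)
    count = np.maximum((agree != 0).sum(axis=0), 1)
    return base + agree.sum(axis=0) / count
```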

NaNK
β€”
191
4

Harmonic-Moon-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge:
- Vortex5/MoonMega-12B
- Vortex5/Shadow-Crystal-12B
- Vortex5/Crystal-Moon-12B
- Vortex5/Lunar-Nexus-12B
- Vortex5/Moondark-12B
- Vortex5/Moonviolet-12B
- Vortex5/Moonlit-Shadow-12B
- Vortex5/Crystal-Ocean-12B
The following YAML configuration was used to produce this model:
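The Karcher mean treats each model's weights as a point on a hypersphere and iterates toward their Riemannian average via log/exp maps. A small NumPy sketch under that interpretation (normalization to unit vectors and the stopping rule are illustrative assumptions):

```python
import numpy as np

def karcher_mean(tensors, iters=50, tol=1e-9):
    """Karcher (Riemannian) mean of flattened weight tensors on the unit
    hypersphere, iterating until the mean tangent update falls below tol."""
    units = [t.ravel() / np.linalg.norm(t) for t in tensors]
    mean = units[0].copy()
    for _ in range(iters):
        tangents = []
        for u in units:
            # log map: project each point into the tangent space at the mean
            dot = np.clip(np.dot(mean, u), -1.0, 1.0)
            theta = np.arccos(dot)
            if theta < 1e-12:
                tangents.append(np.zeros_like(u))
                continue
            tangents.append(theta * (u - dot * mean) / np.sin(theta))
        step = np.mean(tangents, axis=0)
        norm = np.linalg.norm(step)
        if norm < tol:
            break
        # exp map: move the mean along the averaged tangent direction
        mean = np.cos(norm) * mean + np.sin(norm) * step / norm
    return mean
```

Unlike a plain average, this stays on the sphere, so the result keeps unit norm even when the inputs point in quite different directions.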

NaNK
β€”
191
2

Scarlet Ink 12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear DELLA merge method using Vortex5/MegaMoon-Karcher-12B as a base. The following models were included in the merge: Vortex5/Vermilion-Sage-12B and Vortex5/Dark-Quill-12B.

NaNK
β€”
165
4

Noir-Blossom-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method using mistralai/Mistral-Nemo-Instruct-2407 as a base. The following models were included in the merge:
- Retreatcost/KansenSakura-Erosion-RP-12b
- Retreatcost/KansenSakura-Eclipse-RP-12b
- Retreatcost/KansenSakura-Radiance-RP-12b
The following YAML configuration was used to produce this model:

NaNK
β€”
161
2

MN-12B-Azure-Veil

This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge:
- crestf411/MN-Slush
- SicariusSicariiStuff/Impish_Nemo_12B
- Vortex5/Moonlit-Shadow-12B
- anthracite-org/magnum-v4-12b
The following YAML configuration was used to produce this model:

NaNK
β€”
154
3

Lunar Abyss 12B

> Born where moonlight touches the deep β€” thought meets desire, and reason dreams.

Lunar-Abyss-12B combines the coherence and stability of LunaMaid-12B with the evocative prose and edgy flair of Abyssal-Seraph-12B.

🧩 Base: Vortex5/MegaMoon-Karcher-12B
πŸ’Ž Inputs: Vortex5/LunaMaid-12B + Vortex5/Abyssal-Seraph-12B

Like moonlight reflecting on dark water, Lunar-Abyss carries both clarity and depth. It thinks with the calm focus of LunaMaid yet speaks with the emotional pulse of Abyssal-Seraph. Every response flows with a quiet duality β€” logic beneath, creativity above β€” neither overpowering the other. For fans of expressive writing and immersive roleplay, it offers a tone that’s reflective and mysterious. Designed for narrative storytelling, introspective dialogue, and emotion-driven writing.

- βš™οΈ mradermacher β€” static / imatrix quantization
- πŸœ› DeathGodlike β€” EXL3 quants
- 🩢 All original model authors and contributors whose work made this model possible.

Models merged in this creation:
- Vortex5/LunaMaid-12B
- Vortex5/Abyssal-Seraph-12B
- Vortex5/MegaMoon-Karcher-12B

NaNK
β€”
153
5

MS3.2 24B Fiery Lynx

This model was merged using the Linear DELLA merge method using ConicCat/Mistral-Small-3.2-AntiRep-24B as a base. The following models were included in the merge:
- CrucibleLab/M3.2-24B-Loki-V1.3
- zerofata/MS3.2-PaintedFantasy-v2-24B
- Gryphe/Codex-24B-Small-3.2
The following YAML configuration was used to produce this model:

NaNK
β€”
149
4

Velvet Orchid 12B

NaNK
β€”
147
2

Vermilion-Sage-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Multi-SLERP merge method using Vortex5/Poetic-Nexus-12B as a base. The following models were included in the merge:
- inflatebot/MN-12B-Mag-Mell-R1
- crestf411/MN-Slush
- Retreatcost/Ollpheist-12B
The following YAML configuration was used to produce this model:

NaNK
β€”
139
3

MS3.2-24B-Solar-Skies-Q4_K_M-GGUF

NaNK
llama-cpp
134
0

Violet-Mist-12B-Q6_K-GGUF

Vortex5/Violet-Mist-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Violet-Mist-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
134
0

Poetic-Nexus-12B-Q6_K-GGUF

Vortex5/Poetic-Nexus-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Poetic-Nexus-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
121
0

Harmony Bird 12B

Harmony-Bird-12B is a merged model intended for roleplay and storytelling. Vermilion-Sage-12B merges with Impish_Nemo_12B using the harmonyforge method (focus=8.0, blend=0.7).

Show YAML
merge_method: harmonyforge
models:
  - model: Vortex5/Vermilion-Sage-12B
  - model: SicariusSicariiStuff/Impish_Nemo_12B
parameters:
  focus: 8.0
  blend: 0.7
tokenizer:
  source: union
dtype: bfloat16

harmonyforge is a custom adaptive merge method that blends models through consensus weighting across both spatial (parameter alignment) and frequency (spectral structure) domains. It is designed for stable, noise-resistant merges that preserve the shared strengths of multiple models while reducing conflicts and outlier effects. It supports standard and task-vector merging (when a `base_model` is provided).

How it works: the method first centers all model weights (relative to either a `base_model` or their median) and normalizes their scales to ensure balance. It then analyzes correlations in parameter space (spatial features) and in the frequency domain (via FFT) to measure similarity and coherence between models. Each model receives a goodness score based on how well it aligns with the others, adjusted by stability and outlier-suppression terms. These scores are converted into normalized merge weights using a softmax function, which smoothly scales scores so that higher values receive more weight while all weights sum to 1. The `focus` parameter controls how sharply these weights are distributed β€” low focus blends models evenly, while high focus concentrates more weight on the most consistent ones. The `blend` parameter mixes how much spatial versus frequency information influences the final weighting. The merged parameters are then computed as a weighted sum across models.

focus: Controls decisiveness of weighting (higher = more selective). Default: `1.0`
blend: Uses a 0–1 scale to control merge emphasis β€” 0 represents full reliance on spatial similarity, meaning model weights are compared directly in parameter space; 1 represents full reliance on frequency-domain similarity, where weights are compared by their spectral (FFT) patterns. Default: `0.5` blends both equally for balanced structural and behavioral alignment.

By weighting models through adaptive consensus across spatial and frequency domains, harmonyforge emphasizes aligned, stable patterns β€” encouraging coherent, balanced merges that often inherit the strongest traits of each source model.

Team Mradermacher β€” Static & imatrix quantizations
DeathGodlike β€” EXL3 quants
Original model authors and contributors whose work made this model possible.
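The softmax-with-focus weighting described above can be sketched in a few lines; `focus` acts as an inverse temperature (the goodness scores here are illustrative inputs, not the method's actual alignment computation):

```python
import numpy as np

def consensus_weights(goodness, focus=1.0):
    """Softmax over per-model goodness scores; higher focus concentrates
    weight on the best-aligned models, while all weights still sum to 1."""
    z = focus * (goodness - np.max(goodness))  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

scores = np.array([0.9, 0.7])
even = consensus_weights(scores, focus=1.0)    # gentle preference
sharp = consensus_weights(scores, focus=8.0)   # strongly favors the first model
```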

NaNK
β€”
117
3

Violet Mist 12B

β€œIn the great violet mist, your desire may take form β€” if your soul dares to seek it.” Violet-Mist-12B was forged through the SCE merge method using MergeKit. Within its core, whispers of many models intertwine β€” threads of shadow, light, and oceanic calm β€” each one leaving an imprint upon the mist:

- redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3
- nothingiisreal/MN-12B-Celeste-V1.9
- Vortex5/Midnight-Ocean-12B

Base: Vortex5/Lunar-Abyss-12B

models:
  - model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3
    parameters:
      weight:
        - filter: mlp
          value: [0.25, 0.35, 0.45, 0.55, 0.65, 0.55, 0.40, 0.30]
        - filter: norm
          value: 0.5
        - value: 0.4
  - model: nothingiisreal/MN-12B-Celeste-V1.9
    parameters:
      weight:
        - filter: self_attn
          value: [0.1, 0.25, 0.45, 0.60, 0.70, 0.65, 0.45, 0.25]
        - filter: mlp
          value: 0.4
        - value: 0.5
  - model: Vortex5/Midnight-Ocean-12B
    parameters:
      weight: 0.4
parameters:
  select_topk: 0.55
  normalize: true
dtype: bfloat16
merge_method: sce
base_model: Vortex5/Lunar-Abyss-12B
tokenizer:
  source: Vortex5/Lunar-Abyss-12B

πŸ–‹οΈ Roleplay creation β€’ 🌌 Mystical dialogue β€’ πŸ’« Dreamlike storytelling β€” where darkness whispers, and desire breathes.

mradermacher β€” Static & imatrix quants
DeathGodlike β€” EXL3 quants
Original model authors and contributors whose work made this model possible.
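The `select_topk: 0.55` setting corresponds to SCE's selection step, which keeps only the fraction of parameter positions whose task-vector deltas vary most across the source models. A hedged NumPy sketch of that selection (not mergekit's actual code):

```python
import numpy as np

def sce_select(deltas, select_topk=0.55):
    """Return a boolean mask keeping the top select_topk fraction of
    parameter positions, ranked by variance of the deltas across models."""
    stacked = np.stack(deltas)              # shape: (n_models, n_params)
    variance = stacked.var(axis=0)
    k = max(int(select_topk * variance.size), 1)
    keep = np.argsort(variance)[-k:]        # indices of highest-variance positions
    mask = np.zeros(variance.size, dtype=bool)
    mask[keep] = True
    return mask
```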

NaNK
β€”
117
3

Scarlet Ink 12B Q6 K GGUF

Vortex5/Scarlet-Ink-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Scarlet-Ink-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
102
3

Moonlit-Shadow-12B

NaNK
β€”
97
5

Harmonic Lumina 12B

01 // Overview
Harmonic-Lumina-12B is a model merged using a custom harmonyprism method β€” it merges Harmony-Bird-12B, Violet-Mist-12B, and Luminous-Shadow-12B.

02 // Custom Merge Method
A merge algorithm that aligns models across structural (spatial) and energetic (style/variance) domains. It performs stochastic coherence sampling β€” random block analysis of parameter deltas β€” to measure local structure and energy similarity. Using these signals, it adaptively adjusts per-model weights through an entropy-stabilized softmax and a decaying EMA center, achieving smooth, artifact-free convergence that preserves each model’s β€œresonance” while unifying tone and logic.

focus β€” Controls decisiveness of weighting; higher = stronger emphasis on the most coherent contributors. (Global or per-model.)
blend β€” Balances between spatial structure (0) and energy signature (1), determining whether the merge favors logic/shape or expressive style. (Global or per-model.)
max_goodness β€” Convergence threshold for coherence optimization; the process stops when this target is reached.
refinement_steps β€” Number of refinement passes; higher values yield smoother and more unified results at the cost of time.

Show YAML
models:
  - model: Vortex5/Harmony-Bird-12B
  - model: Vortex5/Violet-Mist-12B
  - model: Vortex5/Luminous-Shadow-12B
merge_method: harmonyprism
dtype: bfloat16
parameters:
  focus: 1.3
  blend: 0.55
  max_goodness: 0.98
  refinement_steps: 300
tokenizer:
  source: Vortex5/Harmony-Bird-12B

πŸ’  Team Mradermacher β€” Static & imatrix quants
🌌 DeathGodlike β€” EXL3 quants
🌠 Original creators and model authors

NaNK
β€”
87
3

Poetic-Rune-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear DELLA merge method using mistralai/Mistral-Nemo-Instruct-2407 as a base. The following models were included in the merge:
- LatitudeGames/Wayfarer-2-12B
- Epiculous/Violet_Twilight-v0.2
- inflatebot/MN-12B-Mag-Mell-R1
- cgato/Nemo-12b-Humanize-SFT-v0.2.5-KTO
The following YAML configuration was used to produce this model:

NaNK
β€”
87
2

Vortex 5 Mega Moon Karcher 12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge:
- yamatazen/NeonMaid-12B-v2
- Epiculous/Violet_Twilight-v0.2
- Vortex5/Moonlit-Shadow-12B
- LatitudeGames/Wayfarer-12B
- anthracite-org/magnum-v4-12b
- Vortex5/Harmonic-Moon-12B
- SicariusSicariiStuff/Impish_Nemo_12B
- LatitudeGames/Muse-12B
- inflatebot/MN-12B-Mag-Mell-R1
- Vortex5/Crystal-Moon-12B
- Vortex5/Lunar-Nexus-12B
- Nitral-AI/Captai...

NaNK
β€”
81
3

Prototype X 12b Q6 K GGUF

NaNK
llama-cpp
80
1

Crimson Twilight 12B

Crimson-Twilight-12B is a multistage merge designed for narrative roleplay. Abyssal-Seraph-12B merges with Lunar-Abyss-12B using nearswap (t=0.0008).

Show YAML
name: First
models:
  - model: Vortex5/Abyssal-Seraph-12B
merge_method: nearswap
base_model: Vortex5/Lunar-Abyss-12B
parameters:
  t: 0.0008
dtype: bfloat16

Moonlit-Shadow-12B merges with Luminous-Shadow-12B via slerp (t=0.5).

Show YAML
name: Second
models:
  - model: Vortex5/Moonlit-Shadow-12B
merge_method: slerp
base_model: Vortex5/Luminous-Shadow-12B
parameters:
  t: 0.5
dtype: bfloat16

The two intermediates are then merged via the Karcher mean method.

models:
  - model: First
  - model: Second
merge_method: karcher
dtype: bfloat16
parameters:
  tol: 1e-9
  max_iter: 35000
tokenizer:
  source: union

Team Mradermacher β€” Static & imatrix quantizations
DeathGodlike β€” EXL3 quants

NaNK
β€”
78
5

Lunar-Abyss-12B-Q6_K-GGUF

Vortex5/Lunar-Abyss-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Lunar-Abyss-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
74
0

Abyssal-Seraph-12B-Q6_K-GGUF

Vortex5/Abyssal-Seraph-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Abyssal-Seraph-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
67
0

Nova-Mythra-12B

NaNK
β€”
66
4

Harmonic-Lumina-12B-Q6_K-GGUF

NaNK
llama-cpp
65
0

Velvet Orchid 12B Q6 K GGUF

Vortex5/Velvet-Orchid-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Velvet-Orchid-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
64
1

LunaMaid 12B Q6 K GGUF

Vortex5/LunaMaid-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/LunaMaid-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
63
1

Midnight-Ocean-12B-Q6_K-GGUF

Vortex5/Midnight-Ocean-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Midnight-Ocean-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
63
0

Midnight-Ocean-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: Vortex5/Noir-Blossom-12B and Vortex5/Crystal-Ocean-12B. The following YAML configuration was used to produce this model:

NaNK
β€”
56
1

Noir-Blossom-12B-Q6_K-GGUF

Vortex5/Noir-Blossom-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Noir-Blossom-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
52
0

Prototype X 12b

01 // Overview
Prototype-X-12b is a model merged using a custom flowforge method β€” it merges KansenSakura-Eclipse-RP-12B and KansenSakura-Radiance-RP-12B, with KansenSakura-Erosion-RP-12B as the base model.

02 // Custom Merge Method
flowforge is a directional, coherence-aware merge algorithm that moves a base model along the weighted consensus direction defined by its donors rather than averaging them directly. Each donor’s influence is determined by its relative energy (the magnitude of its weight differences from the base), and the method normalizes and scales these offsets to preserve numerical stability. A small orthogonal adjustment prevents collapse when donors are highly similar, while the strength, trust, and topk parameters control how far and how selectively the merge travels through parameter space. The result is a controlled shift in model behavior that reflects donor characteristics without discarding the base model’s underlying structure.

Show YAML
merge_method: flowforge
models:
  - model: Retreatcost/KansenSakura-Eclipse-RP-12b
  - model: Retreatcost/KansenSakura-Radiance-RP-12b
base_model: Retreatcost/KansenSakura-Erosion-RP-12b
parameters:
  strength: 0.8
  trust: 1.0
dtype: bfloat16
tokenizer:
  source: Retreatcost/KansenSakura-Erosion-RP-12b

Team Mradermacher β€” Static & imatrix quants
DeathGodlike β€” EXL3 quants
Original creators and model authors
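The core flowforge idea (move the base along an energy-weighted consensus direction, with `strength` and `trust` bounding the step) can be sketched as follows. This is a simplified illustration under the description above; it omits the orthogonal adjustment and per-tensor `topk` selection:

```python
import numpy as np

def flowforge(base, donors, strength=0.8, trust=1.0, eps=1e-8):
    """Sketch of a flowforge-style merge: weight each donor offset by its
    relative energy, form a consensus direction, and step the base along it."""
    offsets = [d - base for d in donors]
    energies = np.array([np.linalg.norm(o) for o in offsets])
    weights = energies / (energies.sum() + eps)     # relative-energy weighting
    consensus = sum(w * o for w, o in zip(weights, offsets))
    norm = np.linalg.norm(consensus)
    if norm < eps:
        return base.copy()
    # step length scaled by strength, capped by trust times the mean donor energy
    step = min(strength * norm, trust * energies.mean()) * consensus / norm
    return base + step
```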

NaNK
β€”
48
5

Moonlit-Shadow-12B-Q4_K_M-GGUF

NaNK
llama-cpp
48
0

Dark Quill 12B Q6 K GGUF

Vortex5/Dark-Quill-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Dark-Quill-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
46
1

Crimson-Twilight-12B-Q6_K-GGUF

Vortex5/Crimson-Twilight-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Crimson-Twilight-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
46
0

Sunlit-Shadow-12B-Q6_K-GGUF

NaNK
llama-cpp
44
0

Harmony-Bird-12B-Q6_K-GGUF

Vortex5/Harmony-Bird-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Harmony-Bird-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
44
0

Lunar-Nexus-12B

NaNK
β€”
42
7

Poetic-Rune-12B-Q6_K-GGUF

Vortex5/Poetic-Rune-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Poetic-Rune-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
41
1

Luminous-Shadow-12B-Q6_K-GGUF

Vortex5/Luminous-Shadow-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Luminous-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
37
0

MS3.2 24B Solar Skies

> Bright minds under boundless skies β€” where every conversation becomes a sunrise of imagination

MS3.2-24B-Solar-Skies is a merge of pre-trained language models created using MergeKit. It draws upon the intellectual density of The Omega Directive, the expressive prose of Fiery Lynx, and the measured balance of Chaos Skies.

🧩 Models:
- 🧠 ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0
- πŸ”₯ Vortex5/MS3.2-24B-Fiery-Lynx
- 🌌 Vortex5/MS3.2-24B-Chaos-Skies

🎭 Intended Use

| Category | Description |
|-----------|--------------|
| 🧘 Reflective Dialogue | Ideal for introspective or philosophical discussions, exploring abstract and emotional topics. |
| πŸ–‹οΈ Creative Writing | Excels at expressive prose, narrative storytelling, and immersive worldbuilding. |
| 🧠 Analytical Reasoning | Balances logic and creativity for insightful, stylistically nuanced explanations. |
| πŸ’ž Character Roleplay | Adapts fluidly to emotional, character-driven interactions and narrative depth. |

- πŸ’« All original model authors and contributors whose work formed the foundation for this merge.

NaNK
β€”
36
4

Dark Quill 12B

NaNK
β€”
35
2

MS3.2-24B-Fiery-Lynx-Q4_K_M-GGUF

NaNK
llama-cpp
35
0

Poetic-Nexus-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: Vortex5/Poetic-Rune-12B and Vortex5/Lunar-Nexus-12B. The following YAML configuration was used to produce this model:

NaNK
β€”
33
1

Darkest-Grimoire-12B

NaNK
β€”
30
4

Crystal-Moon-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge:
- Vortex5/Lunar-Nexus-12B
- Vortex5/Moondark-12B
- Vortex5/Crystal-Ocean-12B
The following YAML configuration was used to produce this model:

NaNK
β€”
28
2

MN-14B-Crimson-Veil-Q5_K_S-GGUF

Vortex5/MN-14B-Crimson-Veil-Q5_K_S-GGUF. This model was converted to GGUF format from `Vortex5/MN-14B-Crimson-Veil` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
26
0

Vermilion-Sage-12B-Q6_K-GGUF

Vortex5/Vermilion-Sage-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/Vermilion-Sage-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
25
0

MegaMoon-Karcher-12B-Q6_K-GGUF

Vortex5/MegaMoon-Karcher-12B-Q6_K-GGUF. This model was converted to GGUF format from `Vortex5/MegaMoon-Karcher-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

NaNK
llama-cpp
24
0

Shadow-Crystal-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Vortex5/Moonlit-Shadow-12B Vortex5/Crystal-Ocean-12B The following YAML configuration was used to produce this model:
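The YAML was stripped from this listing. A representative mergekit SLERP config for this pair could look like the following (the interpolation factor `t`, choice of base, and `dtype` are illustrative assumptions, not the actual values used):

```yaml
# Hypothetical reconstruction - t, base_model choice, and dtype are assumptions.
models:
  - model: Vortex5/Moonlit-Shadow-12B
  - model: Vortex5/Crystal-Ocean-12B
merge_method: slerp
base_model: Vortex5/Moonlit-Shadow-12B
parameters:
  t: 0.5   # spherical interpolation midpoint between the two models
dtype: bfloat16
```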

NaNK
β€”
23
1

Drifting-Shadow-12B-Q6_K-GGUF

Vortex5/Drifting-Shadow-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Drifting-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
21
0

Crystal-Ocean-12B-Q4_K_M-GGUF

Vortex5/Crystal-Ocean-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Crystal-Ocean-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
20
1

MS3.2-24B-Chaos-Skies-Q4_K_M-GGUF

NaNK
llama-cpp
19
1

Crystal-Ocean-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Epiculous/VioletTwilight-v0.2 as a base. The following models were included in the merge: crestf411/MN-Slush anthracite-org/magnum-v2-12b LatitudeGames/Wayfarer-12B The following YAML configuration was used to produce this model:
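The configuration block was lost in this listing. A minimal mergekit Model Stock sketch consistent with the description above (the `dtype` is an assumption; Model Stock itself takes no per-model weights):

```yaml
# Hypothetical reconstruction - the original YAML was not preserved.
models:
  - model: crestf411/MN-Slush
  - model: anthracite-org/magnum-v2-12b
  - model: LatitudeGames/Wayfarer-12B
merge_method: model_stock
base_model: Epiculous/VioletTwilight-v0.2
dtype: bfloat16
```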

NaNK
β€”
16
1

Moondark-12B-Q4_K_M-GGUF

Vortex5/Moondark-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Moondark-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
16
0

NovaSage-24B-Q4_K_M-GGUF

NaNK
llama-cpp
14
0

Amber-Starlight-12B

NaNK
license:apache-2.0
12
0

MS3.2-24B-Chaos-Skies

NaNK
license:apache-2.0
11
4

Moonviolet-12B-Q4_K_M-GGUF

Vortex5/Moonviolet-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Moonviolet-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
11
1

Moonlit-Shadow-12B-Q6_K-GGUF

NaNK
llama-cpp
11
0

Moonbright-12B-Q4_K_M-GGUF

Vortex5/Moonbright-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Moonbright-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
9
1

MoonMega-12B-Q4_K_M-GGUF

NaNK
llama-cpp
9
1

VoidRose-24B-Q4_K_M-GGUF

NaNK
llama-cpp
9
0

Lunar-Nexus-12B-Q6_K-GGUF

Vortex5/Lunar-Nexus-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Lunar-Nexus-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
8
0

Radiant-Shadow-12B

This is a merge of pre-trained language models created using mergekit. πŸ“’ Notes: I had some issues with the ChatML instruction template; try Mistral V7 instead, which works well. This model was merged using the Passthrough merge method. The following models were included in the merge: Retreatcost/KansenSakura-Radiance-RP-12b Vortex5/Lunar-Nexus-12B Vortex5/Shadow-Crystal-12B The following YAML configuration was used to produce this model:
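The YAML was not preserved here. Passthrough merges stack layer slices from the source models; a sketch of what such a config could look like for these three models (the `layer_range` values are purely illustrative assumptions):

```yaml
# Hypothetical reconstruction - layer ranges are illustrative assumptions.
slices:
  - sources:
      - model: Retreatcost/KansenSakura-Radiance-RP-12b
        layer_range: [0, 16]
  - sources:
      - model: Vortex5/Lunar-Nexus-12B
        layer_range: [8, 32]
  - sources:
      - model: Vortex5/Shadow-Crystal-12B
        layer_range: [24, 40]
merge_method: passthrough
dtype: bfloat16
```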

NaNK
β€”
7
4

MS3.2-24B-Penumbra-Aether

NaNK
β€”
7
1

Sunlit-Shadow-12B

NaNK
β€”
7
1

ChaosRose-24B

NaNK
license:apache-2.0
7
1

WittyAthena-24b-Q4_K_M-GGUF

NaNK
llama-cpp
7
0

MS3.2-24B-Stellar-Skies-Q4_K_M-GGUF

NaNK
llama-cpp
7
0

MS3.2-24B-Astral-Mirage

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using zerofata/MS3.2-PaintedFantasy-24B as a base. The following models were included in the merge: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond Gryphe/Codex-24B-Small-3.2 The following YAML configuration was used to produce this model:

NaNK
β€”
6
2

MS3.2-24B-Astral-Revenant

NaNK
license:apache-2.0
6
2

MN-12B-Azure-Veil-Q6_K-GGUF

Vortex5/MN-12B-Azure-Veil-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/MN-12B-Azure-Veil` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
6
1

Drifting-Shadow-12B

Drifting-Shadow-12B is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge: Vortex5/Noir-Blossom-12B Vortex5/Moonlit-Shadow-12B The following YAML configuration was used to produce this model:

NaNK
β€”
6
1

WittyAthena-24b

WittyAthena-24b is a merge of pre-trained language models created using mergekit. This model was merged using the Linear merge method using arcee-ai/Arcee-Blitz as a base. The following models were included in the merge: Vortex5/Clockwork-Flower-24B TheDrummer/Cydonia-24B-v3 The following YAML configuration was used to produce this model:
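The YAML was dropped from this listing. A mergekit Linear merge sketch consistent with the description (all weights, and whether the base itself contributes to the average, are illustrative assumptions):

```yaml
# Hypothetical reconstruction - weights are illustrative assumptions.
models:
  - model: arcee-ai/Arcee-Blitz
    parameters:
      weight: 0.4
  - model: Vortex5/Clockwork-Flower-24B
    parameters:
      weight: 0.3
  - model: TheDrummer/Cydonia-24B-v3
    parameters:
      weight: 0.3
merge_method: linear
base_model: arcee-ai/Arcee-Blitz
dtype: bfloat16
```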

NaNK
β€”
6
0

ChaosFlowerRP-24B

ChaosFlowerRP-24B is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using trashpanda-org/MS-24B-Instruct-Mullein-v0 as a base. The following models were included in the merge: h34v7/DansXPantheon-RP-Engine-V1.2-24b-Small-Instruct-Ties-Merge OddTheGreat/Apparatus24B The following YAML configuration was used to produce this model:
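The YAML block did not survive the scrape. A representative mergekit TIES config for this merge (the `density`/`weight` values and `normalize` setting are illustrative assumptions):

```yaml
# Hypothetical reconstruction - density/weight values are illustrative assumptions.
models:
  - model: h34v7/DansXPantheon-RP-Engine-V1.2-24b-Small-Instruct-Ties-Merge
    parameters:
      density: 0.5
      weight: 0.5
  - model: OddTheGreat/Apparatus24B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: trashpanda-org/MS-24B-Instruct-Mullein-v0
parameters:
  normalize: true
dtype: bfloat16
```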

NaNK
license:apache-2.0
5
2

Violet-Starlight-12B

NaNK
license:apache-2.0
5
0

LuckyRP-24B

NaNK
license:apache-2.0
5
0

Qwen2.5-14B-Styx

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated as a base. The following models were included in the merge: SicariusSicariiStuff/ImpishQWEN14B-1M ReadyArt/Omega-DarkerThe-Final-Directive-14B Sao10K/14B-Qwen2.5-Kunou-v1 v000000/Qwen2.5-Lumen-14B The following YAML configuration was used to produce this model:

NaNK
β€”
4
1

Mystic-Rune-v2-12B-Q6_K-GGUF

Vortex5/Mystic-Rune-v2-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Mystic-Rune-v2-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
4
1

VoidRose-24B

NaNK
β€”
4
0

MN-Mystic-Rune-12B-Q6_K-GGUF

NaNK
llama-cpp
3
1

Lunar-Nexus-12B-Q4_K_M-GGUF

Vortex5/Lunar-Nexus-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Lunar-Nexus-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
3
1

Clockwork-Flower-24B

Clockwork-Flower-24B is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: OddTheGreat/Cogwheel24bV.2 Vortex5/ChaosFlowerRP-24B The following YAML configuration was used to produce this model:

NaNK
license:apache-2.0
3
0

NovaSage-24B

NovaSage-24B is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Vortex5/WittyAthena-24b as a base. The following models were included in the merge: Vortex5/VoidRose-24B Gryphe/Pantheon-RP-1.8-24b-Small-3.1 aixonlab/Eurydice-24b-v3.5 trashpanda-org/MS-24B-Instruct-Mullein-v0 LatitudeGames/Harbinger-24B TheDrummer/Cydonia-24B-v3 The following YAML configuration was used to produce this model:

NaNK
β€”
3
0

MS3.2-24B-Omega-Diamond

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1 The following YAML configuration was used to produce this model:

NaNK
β€”
3
0

Shadow-Crystal-12B-Q6_K-GGUF

Vortex5/Shadow-Crystal-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Shadow-Crystal-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
3
0

Moonviolet-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Nitral-AI/Captain-ErisViolet-V0.420-12B Vortex5/Moondark-12B The following YAML configuration was used to produce this model:

NaNK
β€”
2
3

MN-Mystic-Rune-12B

NaNK
β€”
2
3

Moondark-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using natong19/Mistral-Nemo-Instruct-2407-abliterated as a base. The following models were included in the merge: flammenai/Mahou-1.5-mistral-nemo-12B Delta-Vector/Ohashi-NeMo-12B HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407 The following YAML configuration was used to produce this model:

NaNK
β€”
2
2

MoonMega-12B

NaNK
β€”
2
2

Mystic-Rune-v2-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using natong19/Mistral-Nemo-Instruct-2407-abliterated as a base. The following models were included in the merge: Vortex5/MN-Mystic-Rune-12B The following YAML configuration was used to produce this model:

NaNK
β€”
2
2

MS3.2-24B-Chaos-Mirage-nearswap

This is a merge of pre-trained language models created using mergekit. This model was merged using the NearSwap merge method using Vortex5/MS3.2-24B-Astral-Mirage as a base. The following models were included in the merge: Vortex5/MS3.2-24B-Chaos-Skies The following YAML configuration was used to produce this model:
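The YAML was not captured. A mergekit NearSwap sketch matching the description (the `t` value here simply mirrors the `t=0.0008` quoted for other NearSwap merges on this page and is not confirmed for this one):

```yaml
# Hypothetical reconstruction - t mirrors other NearSwap merges on this page.
models:
  - model: Vortex5/MS3.2-24B-Chaos-Skies
merge_method: nearswap
base_model: Vortex5/MS3.2-24B-Astral-Mirage
parameters:
  t: 0.0008   # small t keeps the result close to the base model
dtype: bfloat16
```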

NaNK
β€”
2
1

Moonbright-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using natong19/Mistral-Nemo-Instruct-2407-abliterated as a base. The following models were included in the merge: HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407 Delta-Vector/Ohashi-NeMo-12B The following YAML configuration was used to produce this model:

NaNK
β€”
2
1

Astral-Noctra-12B

NaNK
β€”
2
0

Mystic-Rune-v2-12B-Q4_K_M-GGUF

Vortex5/Mystic-Rune-v2-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Mystic-Rune-v2-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
2
0

Shadow-Crystal-12B-Q4_K_M-GGUF

Vortex5/Shadow-Crystal-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Shadow-Crystal-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
2
0

Radiant-Shadow-12B-Q4_K_M-GGUF

Vortex5/Radiant-Shadow-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Radiant-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
2
0

Harmonic-Moon-12B-Q4_K_M-GGUF

Vortex5/Harmonic-Moon-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Harmonic-Moon-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
2
0

SpicyFlyRP-22B

NaNK
β€”
1
2

Gilded-Tempest-12B

Gilded-Tempest-12B is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using elinas/Chronos-Gold-12B-1.0 as a base. The following models were included in the merge: Nitral-AI/Captain-ErisViolet-V0.420-12B FallenMerick/MN-Violet-Lotus-12B The following YAML configuration was used to produce this model:
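The YAML did not survive in this listing. A representative mergekit DARE TIES config for this merge (the `density`/`weight` values are illustrative assumptions):

```yaml
# Hypothetical reconstruction - density/weight values are illustrative assumptions.
models:
  - model: Nitral-AI/Captain-ErisViolet-V0.420-12B
    parameters:
      density: 0.5
      weight: 0.4
  - model: FallenMerick/MN-Violet-Lotus-12B
    parameters:
      density: 0.5
      weight: 0.4
merge_method: dare_ties
base_model: elinas/Chronos-Gold-12B-1.0
dtype: bfloat16
```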

NaNK
license:apache-2.0
1
2

Chaos-Cydonia-24B

Chaos-Cydonia-24B is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using unsloth/Mistral-Small-24B-Instruct-2501 as a base. The following models were included in the merge: Vortex5/ChaosRose-24B TheDrummer/Cydonia-24B-v3 The following YAML configuration was used to produce this model:

NaNK
license:apache-2.0
1
2

Radiant-Shadow-12B-Q6_K-GGUF

Vortex5/Radiant-Shadow-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Radiant-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
1
1

ChaosFlowerRP-24B-Q4_K_M-GGUF

Vortex5/ChaosFlowerRP-24B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/ChaosFlowerRP-24B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
1
0

MN-Mystic-Rune-12B-Q4_K_M-GGUF

NaNK
llama-cpp
1
0

Crystal-Moon-12B-Q6_K-GGUF

Vortex5/Crystal-Moon-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Crystal-Moon-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
1
0

Crystal-Moon-12B-Q4_K_M-GGUF

Vortex5/Crystal-Moon-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Crystal-Moon-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
1
0

Harmonic-Moon-12B-Q6_K-GGUF

Vortex5/Harmonic-Moon-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Harmonic-Moon-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
1
0

Stellar-Umbra-12B

NaNK
β€”
0
1

MS3.2-24B-Stellar-Skies

NaNK
β€”
0
1

MS3.2-24B-Chaos-Mirage-nearswap-Q4_K_M-GGUF

Vortex5/MS3.2-24B-Chaos-Mirage-nearswap-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/MS3.2-24B-Chaos-Mirage-nearswap` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

NaNK
llama-cpp
0
1