Vortex5
LunaMaid 12B
This is a multi-stage merge of pre-trained language models created using MergeKit. LunaMaid-12B was produced through a two-stage multi-model merge; each stage fuses models with complementary linguistic and stylistic traits to create a cohesive, emotionally nuanced personality. Stage 1: SLERP merge (intermediate model `First`).
Abyssal Seraph 12B
> Where the light of the divine meets the poetry of the abyss.

Abyssal-Seraph-12B is a multi-stage creative merge designed for expressive storytelling, emotional depth, and lyrical dialogue. It was crafted through a layered fusion using MergeKit:

1. LunaMaid × Vermilion-Sage, merged via NearSwap (`t=0.0008`) to unify LunaMaid's balanced composure with Vermilion-Sage's radiant prose.
2. Dark-Quill × Mag-Mell-R1, merged via NearSwap (`t=0.0008`) to draw forth mysticism, poetic darkness...
MN-14B-Crimson-Veil
This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge: SicariusSicariiStuff/ImpishNemo12B anthracite-org/magnum-v4-12b Vortex5/Moonlit-Shadow-12B crestf411/MN-Slush The following YAML configuration was used to produce this model:
Luminous Shadow 12B
"Within the deepest shadow, the brightest light awaits."

Luminous-Shadow-12B was merged using the DELLA merge method via MergeKit, balancing ethereal creativity and reasoned coherence. It draws from the expressive nature of Shadow-Crystal, the refined structure of KansenSakura-Radiance-RP, and the stylistic artistry of Ollpheist. Reflective dialogue • creative writing • character roleplay: blending emotion, intellect, and style into a single expressive voice.

Credits: mradermacher (static / imatrix quantization), DeathGodlike (EXL3 quants), and all original authors and contributors whose models formed the foundation for this merge.
Harmonic-Moon-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: Vortex5/MoonMega-12B Vortex5/Shadow-Crystal-12B Vortex5/Crystal-Moon-12B Vortex5/Lunar-Nexus-12B Vortex5/Moondark-12B Vortex5/Moonviolet-12B Vortex5/Moonlit-Shadow-12B Vortex5/Crystal-Ocean-12B The following YAML configuration was used to produce this model:
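The Karcher mean named above is the Riemannian (Fréchet) mean: instead of averaging weight tensors linearly, it finds the point on the hypersphere that minimizes geodesic distance to all inputs, which mergekit applies per tensor. As a rough illustration of the idea only (not mergekit's implementation), the hypothetical helper `karcher_mean_sphere` below iterates log/exp maps on plain Python unit vectors:

```python
import math

def karcher_mean_sphere(vs, tol=1e-9, max_iter=1000):
    """Iterative Karcher (Frechet) mean of unit vectors on the hypersphere."""
    # start from the normalized Euclidean mean
    m = [sum(c) / len(vs) for c in zip(*vs)]
    norm = math.sqrt(sum(x * x for x in m))
    m = [x / norm for x in m]
    for _ in range(max_iter):
        # log map: project each vector into the tangent space at m
        tangent = [0.0] * len(m)
        for v in vs:
            dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(m, v))))
            theta = math.acos(dot)
            if theta < 1e-12:
                continue  # v coincides with m; zero tangent contribution
            for i in range(len(m)):
                tangent[i] += theta * (v[i] - dot * m[i]) / math.sin(theta)
        tangent = [t / len(vs) for t in tangent]
        step = math.sqrt(sum(t * t for t in tangent))
        if step < tol:
            break  # converged: averaged tangent direction is ~zero
        # exp map: walk along the averaged tangent direction back onto the sphere
        m = [math.cos(step) * mi + math.sin(step) * ti / step
             for mi, ti in zip(m, tangent)]
    return m
```

For two orthogonal unit vectors the result lands midway along the connecting great circle, which is what distinguishes this from a plain (chord-shortening) linear average.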
Scarlet Ink 12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear DELLA merge method using Vortex5/MegaMoon-Karcher-12B as a base. The following models were included in the merge: Vortex5/Vermilion-Sage-12B Vortex5/Dark-Quill-12B
Noir-Blossom-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method using mistralai/Mistral-Nemo-Instruct-2407 as a base. The following models were included in the merge: Retreatcost/KansenSakura-Erosion-RP-12b Retreatcost/KansenSakura-Eclipse-RP-12b Retreatcost/KansenSakura-Radiance-RP-12b The following YAML configuration was used to produce this model:
MN-12B-Azure-Veil
This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge: crestf411/MN-Slush SicariusSicariiStuff/ImpishNemo12B Vortex5/Moonlit-Shadow-12B anthracite-org/magnum-v4-12b The following YAML configuration was used to produce this model:
Lunar Abyss 12B
> Born where moonlight touches the deep: thought meets desire, and reason dreams.

Lunar-Abyss-12B was made to combine the coherency and stability of LunaMaid-12B with the evocative prose and edgy flair of Abyssal-Seraph-12B.

Base: Vortex5/MegaMoon-Karcher-12B
Inputs: Vortex5/LunaMaid-12B + Vortex5/Abyssal-Seraph-12B

Like moonlight reflecting on dark water, Lunar-Abyss carries both clarity and depth. It thinks with the calm focus of LunaMaid yet speaks with the emotional pulse of Abyssal-Seraph. Every response flows with a quiet duality: logic beneath, creativity above, neither overpowering the other. For fans of expressive writing and immersive roleplay, it offers a tone that is reflective and mysterious. Designed for narrative storytelling, introspective dialogue, and emotion-driven writing.

- mradermacher: static / imatrix quantization
- DeathGodlike: EXL3 quants
- All original model authors and contributors whose work made this model possible.

Models merged in this creation:
- Vortex5/LunaMaid-12B
- Vortex5/Abyssal-Seraph-12B
- Vortex5/MegaMoon-Karcher-12B
MS3.2 24B Fiery Lynx
This model was merged using the Linear DELLA merge method using ConicCat/Mistral-Small-3.2-AntiRep-24B as a base. The following models were included in the merge: CrucibleLab/M3.2-24B-Loki-V1.3 zerofata/MS3.2-PaintedFantasy-v2-24B Gryphe/Codex-24B-Small-3.2 The following YAML configuration was used to produce this model:
Velvet Orchid 12B
Vermilion-Sage-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Multi-SLERP merge method using Vortex5/Poetic-Nexus-12B as a base. The following models were included in the merge: inflatebot/MN-12B-Mag-Mell-R1 crestf411/MN-Slush Retreatcost/Ollpheist-12B The following YAML configuration was used to produce this model:
MS3.2-24B-Solar-Skies-Q4_K_M-GGUF
Violet-Mist-12B-Q6_K-GGUF
Vortex5/Violet-Mist-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Violet-Mist-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone the llama.cpp repository, then move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
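The install, build, and run steps above can be sketched as shell commands. The quantized file name below is illustrative (GGUF-my-repo typically lowercases the model name); check the repo's file list before running:

```shell
# Quick install via Homebrew (macOS and Linux)
brew install llama.cpp

# Or build from source:
git clone https://github.com/ggerganov/llama.cpp
# Build with CURL support so --hf-repo can download; add hardware flags
# (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux) as needed
cd llama.cpp && LLAMA_CURL=1 make

# Run inference directly from the Hugging Face repo
llama-cli --hf-repo Vortex5/Violet-Mist-12B-Q6_K-GGUF \
  --hf-file violet-mist-12b-q6_k.gguf \
  -p "Once upon a midnight mist"
```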
Poetic-Nexus-12B-Q6_K-GGUF
Vortex5/Poetic-Nexus-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Poetic-Nexus-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Harmony Bird 12B
Harmony-Bird-12B is a merged model intended for roleplay and storytelling. Vermilion-Sage-12B merges with Impish_Nemo_12B using the custom harmonyforge method (`focus=8.0`, `blend=0.7`).

```yaml
merge_method: harmonyforge
models:
  - model: Vortex5/Vermilion-Sage-12B
  - model: SicariusSicariiStuff/Impish_Nemo_12B
parameters:
  focus: 8.0
  blend: 0.7
tokenizer:
  source: union
dtype: bfloat16
```

harmonyforge is a custom adaptive merge method that blends models through consensus weighting across both spatial (parameter alignment) and frequency (spectral structure) domains. It is designed for stable, noise-resistant merges that preserve the shared strengths of multiple models while reducing conflicts and outlier effects. It supports standard and task-vector merging (when a `base_model` is provided).

How it works: the method first centers all model weights (relative to either a `base_model` or their median) and normalizes their scales to ensure balance. It then analyzes correlations in parameter space (spatial features) and in the frequency domain (via FFT) to measure similarity and coherence between models. Each model receives a goodness score based on how well it aligns with the others, adjusted by stability and outlier-suppression terms. These scores are converted into normalized merge weights using a softmax function, which smoothly scales scores so that higher values receive more weight while all weights sum to 1. The `focus` parameter controls how sharply these weights are distributed: low focus blends models evenly, while high focus concentrates more weight on the most consistent ones. The `blend` parameter mixes how much spatial versus frequency information influences the final weighting. The merged parameters are then computed as a weighted sum across models.

- focus: controls decisiveness of weighting (higher = more selective). Default: `1.0`.
- blend: a 0-1 scale controlling merge emphasis. 0 means full reliance on spatial similarity (model weights compared directly in parameter space); 1 means full reliance on frequency-domain similarity (weights compared by their spectral FFT patterns). Default: `0.5`, blending both equally for balanced structural and behavioral alignment.

By weighting models through adaptive consensus across spatial and frequency domains, harmonyforge emphasizes aligned, stable patterns, encouraging coherent, balanced merges that often inherit the strongest traits of each source model.

Credits: Team Mradermacher (static & imatrix quantizations), DeathGodlike (EXL3 quants), and the original model authors and contributors whose work made this model possible.
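The goodness-to-weight conversion described above can be sketched in a few lines. This is a toy illustration under stated assumptions (goodness scores are given, parameters are plain Python lists), not the harmonyforge implementation:

```python
import math

def consensus_weights(scores, focus=1.0):
    """Temperature-controlled softmax over per-model goodness scores.
    Higher focus sharpens the distribution toward the best-aligned model;
    the weights always sum to 1."""
    exps = [math.exp(focus * s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_merge(models, scores, focus=1.0):
    """Merged parameters = weight-softmaxed sum across model parameter vectors."""
    w = consensus_weights(scores, focus)
    return [sum(wi * p for wi, p in zip(w, col)) for col in zip(*models)]
```

With `focus=8.0`, as in the config above, the softmax is much more decisive than the default `1.0`, concentrating weight on the highest-scoring contributor.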
Violet Mist 12B
"In the great violet mist, your desire may take form, if your soul dares to seek it."

Violet-Mist-12B was forged through the SCE merge method using MergeKit. Within its core, whispers of many models intertwine, threads of shadow, light, and oceanic calm, each one leaving an imprint upon the mist:

- redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3
- nothingiisreal/MN-12B-Celeste-V1.9
- Vortex5/Midnight-Ocean-12B
- Base: Vortex5/Lunar-Abyss-12B

```yaml
models:
  - model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3
    parameters:
      weight:
        - filter: mlp
          value: [0.25, 0.35, 0.45, 0.55, 0.65, 0.55, 0.40, 0.30]
        - filter: norm
          value: 0.5
        - value: 0.4
  - model: nothingiisreal/MN-12B-Celeste-V1.9
    parameters:
      weight:
        - filter: self_attn
          value: [0.1, 0.25, 0.45, 0.60, 0.70, 0.65, 0.45, 0.25]
        - filter: mlp
          value: 0.4
        - value: 0.5
  - model: Vortex5/Midnight-Ocean-12B
    parameters:
      weight: 0.4
parameters:
  select_topk: 0.55
  normalize: true
dtype: bfloat16
merge_method: sce
base_model: Vortex5/Lunar-Abyss-12B
tokenizer:
  source: Vortex5/Lunar-Abyss-12B
```

Roleplay creation • mystical dialogue • dreamlike storytelling: where darkness whispers, and desire breathes.

Credits: mradermacher (static & imatrix quants), DeathGodlike (EXL3 quants), and the original model authors and contributors whose work made this model possible.
Scarlet Ink 12B Q6 K GGUF
Vortex5/Scarlet-Ink-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Scarlet-Ink-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Moonlit-Shadow-12B
Harmonic Lumina 12B
01 // Overview

Harmonic-Lumina-12B is a model merged using a custom harmonyprism method. It merges Harmony-Bird-12B, Violet-Mist-12B, and Luminous-Shadow-12B.

02 // Custom Merge Method

harmonyprism is a merge algorithm that aligns models across structural (spatial) and energetic (style/variance) domains. It performs stochastic coherence sampling (random block analysis of parameter deltas) to measure local structure and energy similarity. Using these signals, it adaptively adjusts per-model weights through an entropy-stabilized softmax and a decaying EMA center, achieving smooth, artifact-free convergence that preserves each model's "resonance" while unifying tone and logic.

- focus: controls decisiveness of weighting; higher = stronger emphasis on the most coherent contributors. (Global or per-model.)
- blend: balances between spatial structure (0) and energy signature (1), determining whether the merge favors logic/shape or expressive style. (Global or per-model.)
- max_goodness: convergence threshold for coherence optimization; the process stops when this target is reached.
- refinement_steps: number of refinement passes; higher values yield smoother and more unified results at the cost of time.

```yaml
models:
  - model: Vortex5/Harmony-Bird-12B
  - model: Vortex5/Violet-Mist-12B
  - model: Vortex5/Luminous-Shadow-12B
merge_method: harmonyprism
dtype: bfloat16
parameters:
  focus: 1.3
  blend: 0.55
  max_goodness: 0.98
  refinement_steps: 300
tokenizer:
  source: Vortex5/Harmony-Bird-12B
```

Credits: Team Mradermacher (static & imatrix quants), DeathGodlike (EXL3 quants), and the original creators and model authors.
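The decaying EMA center described above can be illustrated with a toy refinement loop. This is a speculative sketch of the general idea (distance-based coherence scores, softmax weights, step size decaying as 1/t), not the harmonyprism implementation:

```python
import math

def refine_center(models, focus=1.3, steps=300):
    """Pull a consensus center toward the softmax-weighted mean of the models,
    with an EMA coefficient that decays each pass for smooth convergence."""
    center = [sum(c) / len(c) for c in zip(*models)]  # start at the plain mean
    for step in range(1, steps + 1):
        # score each model by (negative) distance to the current center
        scores = [-math.sqrt(sum((a - b) ** 2 for a, b in zip(m, center)))
                  for m in models]
        exps = [math.exp(focus * s) for s in scores]
        total = sum(exps)
        w = [e / total for e in exps]
        target = [sum(wi * mi for wi, mi in zip(w, col))
                  for col in zip(*models)]
        alpha = 1.0 / step                  # decaying EMA coefficient
        center = [(1 - alpha) * c + alpha * t for c, t in zip(center, target)]
    return center
```

Models closer to the evolving consensus receive more weight each pass, so outliers are damped rather than averaged in at full strength.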
Poetic-Rune-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear DELLA merge method using mistralai/Mistral-Nemo-Instruct-2407 as a base. The following models were included in the merge: LatitudeGames/Wayfarer-2-12B Epiculous/VioletTwilight-v0.2 inflatebot/MN-12B-Mag-Mell-R1 cgato/Nemo-12b-Humanize-SFT-v0.2.5-KTO The following YAML configuration was used to produce this model:
Vortex 5 Mega Moon Karcher 12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: yamatazen/NeonMaid-12B-v2 Epiculous/VioletTwilight-v0.2 Vortex5/Moonlit-Shadow-12B LatitudeGames/Wayfarer-12B anthracite-org/magnum-v4-12b Vortex5/Harmonic-Moon-12B SicariusSicariiStuff/ImpishNemo12B LatitudeGames/Muse-12B inflatebot/MN-12B-Mag-Mell-R1 Vortex5/Crystal-Moon-12B Vortex5/Lunar-Nexus-12B Nitral-AI/Captai...
Prototype X 12b Q6 K GGUF
Crimson Twilight 12B
Crimson-Twilight-12B is a multi-stage merge designed for narrative roleplay.

Abyssal-Seraph-12B merges with Lunar-Abyss-12B using NearSwap (`t=0.0008`):

```yaml
name: First
models:
  - model: Vortex5/Abyssal-Seraph-12B
merge_method: nearswap
base_model: Vortex5/Lunar-Abyss-12B
parameters:
  t: 0.0008
dtype: bfloat16
```

Moonlit-Shadow-12B merges with Luminous-Shadow-12B via SLERP (`t=0.5`):

```yaml
name: Second
models:
  - model: Vortex5/Moonlit-Shadow-12B
merge_method: slerp
base_model: Vortex5/Luminous-Shadow-12B
parameters:
  t: 0.5
dtype: bfloat16
```

The two intermediates are then merged via the Karcher mean method:

```yaml
models:
  - model: First
  - model: Second
merge_method: karcher
dtype: bfloat16
parameters:
  tol: 1e-9
  max_iter: 35000
tokenizer:
  source: union
```

Credits: Team Mradermacher (static & imatrix quantizations), DeathGodlike (EXL3 quants).
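The SLERP stage (`t=0.5`) interpolates along the great circle between two weight vectors rather than along a straight line, which mergekit applies per tensor. A minimal standalone sketch of spherical linear interpolation:

```python
import math

def slerp(a, b, t):
    """Spherical linear interpolation between two vectors (t=0 -> a, t=1 -> b)."""
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    dot = sum(x * y for x, y in zip(a, b)) / (norm_a * norm_b)
    dot = max(-1.0, min(1.0, dot))          # clamp against rounding error
    theta = math.acos(dot)
    if theta < 1e-6:                        # nearly parallel: fall back to lerp
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    wa = math.sin((1 - t) * theta) / s
    wb = math.sin(t * theta) / s
    return [wa * x + wb * y for x, y in zip(a, b)]
```

Unlike linear interpolation, the midpoint keeps the magnitude of the endpoints instead of shrinking toward the chord, which is why SLERP is a common default for pairwise model merges.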
Lunar-Abyss-12B-Q6_K-GGUF
Vortex5/Lunar-Abyss-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Lunar-Abyss-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Abyssal-Seraph-12B-Q6_K-GGUF
Vortex5/Abyssal-Seraph-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Abyssal-Seraph-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Nova-Mythra-12B
Harmonic-Lumina-12B-Q6_K-GGUF
Velvet Orchid 12B Q6 K GGUF
Vortex5/Velvet-Orchid-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Velvet-Orchid-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
LunaMaid 12B Q6 K GGUF
Vortex5/LunaMaid-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/LunaMaid-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Midnight-Ocean-12B-Q6_K-GGUF
Vortex5/Midnight-Ocean-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Midnight-Ocean-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Midnight-Ocean-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: Vortex5/Noir-Blossom-12B Vortex5/Crystal-Ocean-12B The following YAML configuration was used to produce this model:
Noir-Blossom-12B-Q6_K-GGUF
Vortex5/Noir-Blossom-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Noir-Blossom-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Prototype X 12b
01 // Overview

Prototype-X-12b is a model merged using a custom flowforge method. It merges KansenSakura-Eclipse-RP-12B and KansenSakura-Radiance-RP-12B, with KansenSakura-Erosion-RP-12B as the base model.

02 // Custom Merge Method

flowforge is a directional, coherence-aware merge algorithm that moves a base model along the weighted consensus direction defined by its donors rather than averaging them directly. Each donor's influence is determined by its relative energy (the magnitude of its weight differences from the base), and the method normalizes and scales these offsets to preserve numerical stability. A small orthogonal adjustment prevents collapse when donors are highly similar, while the `strength`, `trust`, and `top_k` parameters control how far and how selectively the merge travels through parameter space. The result is a controlled shift in model behavior that reflects donor characteristics without discarding the base model's underlying structure.

```yaml
merge_method: flowforge
models:
  - model: Retreatcost/KansenSakura-Eclipse-RP-12b
  - model: Retreatcost/KansenSakura-Radiance-RP-12b
base_model: Retreatcost/KansenSakura-Erosion-RP-12b
parameters:
  strength: 0.8
  trust: 1.0
dtype: bfloat16
tokenizer:
  source: Retreatcost/KansenSakura-Erosion-RP-12b
```

Credits: Team Mradermacher (static & imatrix quants), DeathGodlike (EXL3 quants), and the original creators and model authors.
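The directional update described above can be sketched as follows. This is a toy rendering of the idea (energy-weighted consensus delta with a bounded, `trust`-capped step; the orthogonal adjustment is omitted), not the actual flowforge code:

```python
import math

def flowforge_step(base, donors, strength=0.8, trust=1.0):
    """Move the base parameters along the energy-weighted consensus
    direction of the donors' deltas, with the step length bounded."""
    deltas = [[d - b for d, b in zip(donor, base)] for donor in donors]
    # each donor's influence ~ its relative energy (delta magnitude)
    energies = [math.sqrt(sum(x * x for x in d)) for d in deltas]
    total = sum(energies) or 1.0            # avoid 0/0 for identical donors
    w = [e / total for e in energies]
    direction = [sum(wi * di for wi, di in zip(w, col))
                 for col in zip(*deltas)]
    # trust caps how far along the consensus direction we travel
    norm = math.sqrt(sum(x * x for x in direction)) or 1.0
    max_step = trust * max(energies)
    scale = min(1.0, max_step / norm)
    return [b + strength * scale * d for b, d in zip(base, direction)]
```

When every donor equals the base, the deltas vanish and the base is returned unchanged, which matches the stated goal of preserving the base model's underlying structure.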
Moonlit-Shadow-12B-Q4_K_M-GGUF
Dark Quill 12B Q6 K GGUF
Vortex5/Dark-Quill-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Dark-Quill-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Crimson-Twilight-12B-Q6_K-GGUF
Vortex5/Crimson-Twilight-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Crimson-Twilight-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Sunlit-Shadow-12B-Q6_K-GGUF
Harmony-Bird-12B-Q6_K-GGUF
Vortex5/Harmony-Bird-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Harmony-Bird-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Lunar-Nexus-12B
Poetic-Rune-12B-Q6_K-GGUF
Vortex5/Poetic-Rune-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Poetic-Rune-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Luminous-Shadow-12B-Q6_K-GGUF
Vortex5/Luminous-Shadow-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Luminous-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
MS3.2 24B Solar Skies
> Bright minds under boundless skies, where every conversation becomes a sunrise of imagination.

MS3.2-24B-Solar-Skies is a merge of pre-trained language models created using MergeKit. It draws upon the intellectual density of The Omega Directive, the expressive prose of Fiery Lynx, and the measured balance of Chaos Skies.

Models:
- ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0
- Vortex5/MS3.2-24B-Fiery-Lynx
- Vortex5/MS3.2-24B-Chaos-Skies

Intended Use

| Category | Description |
|-----------|--------------|
| Reflective Dialogue | Ideal for introspective or philosophical discussions, exploring abstract and emotional topics. |
| Creative Writing | Excels at expressive prose, narrative storytelling, and immersive worldbuilding. |
| Analytical Reasoning | Balances logic and creativity for insightful, stylistically nuanced explanations. |
| Character Roleplay | Adapts fluidly to emotional, character-driven interactions and narrative depth. |

Credits: all original model authors and contributors whose work formed the foundation for this merge.
Dark Quill 12B
MS3.2-24B-Fiery-Lynx-Q4_K_M-GGUF
Poetic-Nexus-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: Vortex5/Poetic-Rune-12B Vortex5/Lunar-Nexus-12B The following YAML configuration was used to produce this model:
Darkest-Grimoire-12B
Crystal-Moon-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: Vortex5/Lunar-Nexus-12B Vortex5/Moondark-12B Vortex5/Crystal-Ocean-12B The following YAML configuration was used to produce this model:
MN-14B-Crimson-Veil-Q5_K_S-GGUF
Vortex5/MN-14B-Crimson-Veil-Q5_K_S-GGUF

This model was converted to GGUF format from `Vortex5/MN-14B-Crimson-Veil` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Vermilion-Sage-12B-Q6_K-GGUF
Vortex5/Vermilion-Sage-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Vermilion-Sage-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
MegaMoon-Karcher-12B-Q6_K-GGUF
Vortex5/MegaMoon-Karcher-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/MegaMoon-Karcher-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Shadow-Crystal-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Vortex5/Moonlit-Shadow-12B Vortex5/Crystal-Ocean-12B The following YAML configuration was used to produce this model:
Drifting-Shadow-12B-Q6_K-GGUF
Vortex5/Drifting-Shadow-12B-Q6_K-GGUF

This model was converted to GGUF format from `Vortex5/Drifting-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage follows the same steps described above.
Crystal-Ocean-12B-Q4_K_M-GGUF
Vortex5/Crystal-Ocean-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Crystal-Ocean-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
MS3.2-24B-Chaos-Skies-Q4_K_M-GGUF
Crystal-Ocean-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Epiculous/VioletTwilight-v0.2 as a base. The following models were included in the merge: crestf411/MN-Slush anthracite-org/magnum-v2-12b LatitudeGames/Wayfarer-12B The following YAML configuration was used to produce this model:
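The YAML used for this merge is not shown in the listing. For illustration only (model names come from the card above; `dtype` is an assumption), a minimal mergekit Model Stock config looks like this:

```yaml
models:
  - model: crestf411/MN-Slush
  - model: anthracite-org/magnum-v2-12b
  - model: LatitudeGames/Wayfarer-12B
merge_method: model_stock
base_model: Epiculous/VioletTwilight-v0.2
dtype: bfloat16
```

Model Stock takes no per-model weights: it derives the interpolation ratio geometrically from the angles between each model's task vector and the base, which is why the config stays this sparse.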
Moondark-12B-Q4_K_M-GGUF
Vortex5/Moondark-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Moondark-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
NovaSage-24B-Q4_K_M-GGUF
Amber-Starlight-12B
MS3.2-24B-Chaos-Skies
Moonviolet-12B-Q4_K_M-GGUF
Vortex5/Moonviolet-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Moonviolet-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Moonlit-Shadow-12B-Q6_K-GGUF
Moonbright-12B-Q4_K_M-GGUF
Vortex5/Moonbright-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Moonbright-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
MoonMega-12B-Q4_K_M-GGUF
VoidRose-24B-Q4_K_M-GGUF
Lunar-Nexus-12B-Q6_K-GGUF
Vortex5/Lunar-Nexus-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Lunar-Nexus-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Radiant-Shadow-12B
This is a merge of pre-trained language models created using mergekit. Notes: I had some issues with the ChatML instruction template; try Mistral V7, which works well. This model was merged using the Passthrough merge method. The following models were included in the merge: Retreatcost/KansenSakura-Radiance-RP-12b Vortex5/Lunar-Nexus-12B Vortex5/Shadow-Crystal-12B The following YAML configuration was used to produce this model:
MS3.2-24B-Penumbra-Aether
Sunlit-Shadow-12B
ChaosRose-24B
WittyAthena-24b-Q4_K_M-GGUF
MS3.2-24B-Stellar-Skies-Q4_K_M-GGUF
MS3.2-24B-Astral-Mirage
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using zerofata/MS3.2-PaintedFantasy-24B as a base. The following models were included in the merge: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond Gryphe/Codex-24B-Small-3.2 The following YAML configuration was used to produce this model:
MS3.2-24B-Astral-Revenant
MN-12B-Azure-Veil-Q6_K-GGUF
Vortex5/MN-12B-Azure-Veil-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/MN-12B-Azure-Veil` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Drifting-Shadow-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge: Vortex5/Noir-Blossom-12B Vortex5/Moonlit-Shadow-12B The following YAML configuration was used to produce this model:
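The actual layer layout is given only in the card's YAML, which the listing omits. As a hypothetical sketch — the layer ranges below are invented placeholders, not the real split — a mergekit Passthrough config stitches slices of the two donors into a single layer stack:

```yaml
slices:
  - sources:
      - model: Vortex5/Noir-Blossom-12B
        layer_range: [0, 24]    # placeholder range
  - sources:
      - model: Vortex5/Moonlit-Shadow-12B
        layer_range: [16, 40]   # placeholder range; overlapping ranges duplicate layers
merge_method: passthrough
dtype: bfloat16
```

Unlike weight-averaging methods, Passthrough copies each selected layer verbatim, so the result can have more layers than either parent.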
WittyAthena-24b
WittyAthena-24b is a merge of pre-trained language models created using mergekit. This model was merged using the Linear merge method using arcee-ai/Arcee-Blitz as a base. The following models were included in the merge: Vortex5/Clockwork-Flower-24B TheDrummer/Cydonia-24B-v3 The following YAML configuration was used to produce this model:
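The card's YAML is not reproduced here. A hedged sketch of a mergekit Linear config over these checkpoints (the weights are placeholders, not the values actually used; Linear simply averages the listed models by weight):

```yaml
models:
  - model: arcee-ai/Arcee-Blitz
    parameters:
      weight: 0.4   # placeholder weights
  - model: Vortex5/Clockwork-Flower-24B
    parameters:
      weight: 0.3
  - model: TheDrummer/Cydonia-24B-v3
    parameters:
      weight: 0.3
merge_method: linear
dtype: bfloat16
```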
ChaosFlowerRP-24B
ChaosFlowerRP-24B is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using trashpanda-org/MS-24B-Instruct-Mullein-v0 as a base. The following models were included in the merge: h34v7/DansXPantheon-RP-Engine-V1.2-24b-Small-Instruct-Ties-Merge OddTheGreat/Apparatus24B The following YAML configuration was used to produce this model:
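The TIES configuration itself is omitted from this listing. As an illustration only (all numeric values below are placeholders), a mergekit TIES config over these models takes this shape:

```yaml
models:
  - model: h34v7/DansXPantheon-RP-Engine-V1.2-24b-Small-Instruct-Ties-Merge
    parameters:
      weight: 0.5    # placeholder
      density: 0.6   # placeholder: fraction of each delta kept after trimming
  - model: OddTheGreat/Apparatus24B
    parameters:
      weight: 0.5
      density: 0.6
merge_method: ties
base_model: trashpanda-org/MS-24B-Instruct-Mullein-v0
parameters:
  normalize: true
dtype: bfloat16
```

TIES trims each model's delta from the base to the densest parameters, resolves sign conflicts by majority, then merges — which is why `density` matters as much as `weight`.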
Violet-Starlight-12B
LuckyRP-24B
Qwen2.5-14B-Styx
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated as a base. The following models were included in the merge: SicariusSicariiStuff/ImpishQWEN14B-1M ReadyArt/Omega-DarkerThe-Final-Directive-14B Sao10K/14B-Qwen2.5-Kunou-v1 v000000/Qwen2.5-Lumen-14B The following YAML configuration was used to produce this model:
Mystic-Rune-v2-12B-Q6_K-GGUF
Vortex5/Mystic-Rune-v2-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Mystic-Rune-v2-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
VoidRose-24B
MN-Mystic-Rune-12B-Q6_K-GGUF
Lunar-Nexus-12B-Q4_K_M-GGUF
Vortex5/Lunar-Nexus-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Lunar-Nexus-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Clockwork-Flower-24B
Clockwork-Flower-24B is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: OddTheGreat/Cogwheel24bV.2 Vortex5/ChaosFlowerRP-24B The following YAML configuration was used to produce this model:
NovaSage-24B
NovaSage-24B is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Vortex5/WittyAthena-24b as a base. The following models were included in the merge: Vortex5/VoidRose-24B Gryphe/Pantheon-RP-1.8-24b-Small-3.1 aixonlab/Eurydice-24b-v3.5 trashpanda-org/MS-24B-Instruct-Mullein-v0 LatitudeGames/Harbinger-24B TheDrummer/Cydonia-24B-v3 The following YAML configuration was used to produce this model:
MS3.2-24B-Omega-Diamond
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1 The following YAML configuration was used to produce this model:
Shadow-Crystal-12B-Q6_K-GGUF
Vortex5/Shadow-Crystal-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Shadow-Crystal-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Moonviolet-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Nitral-AI/Captain-ErisViolet-V0.420-12B Vortex5/Moondark-12B The following YAML configuration was used to produce this model:
MN-Mystic-Rune-12B
Moondark-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using natong19/Mistral-Nemo-Instruct-2407-abliterated as a base. The following models were included in the merge: flammenai/Mahou-1.5-mistral-nemo-12B Delta-Vector/Ohashi-NeMo-12B HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407 The following YAML configuration was used to produce this model:
MoonMega-12B
Mystic-Rune-v2-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using natong19/Mistral-Nemo-Instruct-2407-abliterated as a base. The following models were included in the merge: Vortex5/MN-Mystic-Rune-12B The following YAML configuration was used to produce this model:
MS3.2-24B-Chaos-Mirage-nearswap
This is a merge of pre-trained language models created using mergekit. This model was merged using the NearSwap merge method using Vortex5/MS3.2-24B-Astral-Mirage as a base. The following models were included in the merge: Vortex5/MS3.2-24B-Chaos-Skies The following YAML configuration was used to produce this model:
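The card's NearSwap YAML is not included in this listing. As an illustrative sketch (the `t` value is a placeholder, though other cards in this collection cite similarly small values such as 0.0008), a mergekit NearSwap config with this base looks like:

```yaml
models:
  - model: Vortex5/MS3.2-24B-Chaos-Skies
merge_method: nearswap
base_model: Vortex5/MS3.2-24B-Astral-Mirage
parameters:
  t: 0.0008   # placeholder strength; small t keeps the result very close to the base
dtype: bfloat16
```

NearSwap pulls base weights toward the secondary model only where the two are already similar, which is why such tiny `t` values still shift style without destabilizing the base.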
Moonbright-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using natong19/Mistral-Nemo-Instruct-2407-abliterated as a base. The following models were included in the merge: HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407 Delta-Vector/Ohashi-NeMo-12B The following YAML configuration was used to produce this model:
Astral-Noctra-12B
Mystic-Rune-v2-12B-Q4_K_M-GGUF
Vortex5/Mystic-Rune-v2-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Mystic-Rune-v2-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Shadow-Crystal-12B-Q4_K_M-GGUF
Vortex5/Shadow-Crystal-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Shadow-Crystal-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Radiant-Shadow-12B-Q4_K_M-GGUF
Vortex5/Radiant-Shadow-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Radiant-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Harmonic-Moon-12B-Q4_K_M-GGUF
Vortex5/Harmonic-Moon-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Harmonic-Moon-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
SpicyFlyRP-22B
Gilded-Tempest-12B
Gilded-Tempest-12B is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using elinas/Chronos-Gold-12B-1.0 as a base. The following models were included in the merge: Nitral-AI/Captain-ErisViolet-V0.420-12B FallenMerick/MN-Violet-Lotus-12B The following YAML configuration was used to produce this model:
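The DARE TIES YAML itself is omitted from this listing. For illustration only (all numeric values are placeholders, not the settings actually used), a mergekit DARE TIES config with this base takes the following shape:

```yaml
models:
  - model: Nitral-AI/Captain-ErisViolet-V0.420-12B
    parameters:
      weight: 0.5    # placeholder
      density: 0.4   # placeholder: fraction of delta parameters retained after random drop
  - model: FallenMerick/MN-Violet-Lotus-12B
    parameters:
      weight: 0.5
      density: 0.4
merge_method: dare_ties
base_model: elinas/Chronos-Gold-12B-1.0
dtype: bfloat16
```

DARE randomly drops and rescales each model's delta before the TIES sign-election step, which tends to reduce interference between donors.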
Chaos-Cydonia-24B
Chaos-Cydonia-24B is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using unsloth/Mistral-Small-24B-Instruct-2501 as a base. The following models were included in the merge: Vortex5/ChaosRose-24B TheDrummer/Cydonia-24B-v3 The following YAML configuration was used to produce this model:
Radiant-Shadow-12B-Q6_K-GGUF
Vortex5/Radiant-Shadow-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Radiant-Shadow-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
ChaosFlowerRP-24B-Q4_K_M-GGUF
Vortex5/ChaosFlowerRP-24B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/ChaosFlowerRP-24B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
MN-Mystic-Rune-12B-Q4_K_M-GGUF
Crystal-Moon-12B-Q6_K-GGUF
Vortex5/Crystal-Moon-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Crystal-Moon-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Crystal-Moon-12B-Q4_K_M-GGUF
Vortex5/Crystal-Moon-12B-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/Crystal-Moon-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Harmonic-Moon-12B-Q6_K-GGUF
Vortex5/Harmonic-Moon-12B-Q6_K-GGUF This model was converted to GGUF format from `Vortex5/Harmonic-Moon-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Stellar-Umbra-12B
MS3.2-24B-Stellar-Skies
MS3.2-24B-Chaos-Mirage-nearswap-Q4_K_M-GGUF
Vortex5/MS3.2-24B-Chaos-Mirage-nearswap-Q4_K_M-GGUF This model was converted to GGUF format from `Vortex5/MS3.2-24B-Chaos-Mirage-nearswap` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).