FlareRebellion

13 models

WeirdCompound-v1.6-24b

This is a merge of pre-trained language models created using mergekit. This is a multi-stage merge; there's little method to my madness, and I just stopped when I arrived at something that I liked. The starting point was DepravedCartographer-v1.0-24b with slight changes.

Changelog:
- v1.1:
  - /intermediate/model/B: replaced anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF with anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML
- v1.2:
  - /intermediate/model/B: replaced anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML with anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only for the default tokenizer config
- v1.3:
  - /intermediate/model/A: replaced TheDrummer/Cydonia-24B-v3 with TheDrummer/Cydonia-24B-v4
  - /intermediate/model/A: replaced Doctor-Shotgun/MS3.1-24B-Magnum-Diamond with Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
  - /intermediate/model/A: replaced Delta-Vector/Austral-24B-Winton with Delta-Vector/MS3.2-Austral-Winton
- v1.4:
  - /intermediate/model/C: changed the recipe to use Doctor-Shotgun/MS3.2-24B-Magnum-Diamond and Delta-Vector/MS3.2-Austral-Winton
  - (I didn't particularly care for v1.4; IMHO v1.3 was better.)
- v1.5:
  - /intermediate/model/A: replaced Doctor-Shotgun/MS3.2-24B-Magnum-Diamond with zerofata/MS3.2-PaintedFantasy-24B
  - /intermediate/model/C: changed the recipe to use PocketDoc/Dans-PersonalityEngine-V1.3.0-24b and zerofata/MS3.2-PaintedFantasy-24B
- v1.6:
  - /intermediate/model/A: updated Cydonia to TheDrummer/Cydonia-24B-v4.1
  - /intermediate/model/A: updated MS3.2-PaintedFantasy-24B to zerofata/MS3.2-PaintedFantasy-v2-24B
  - /intermediate/model/A: removed Delta-Vector/MS3.2-Austral-Winton
  - /intermediate/model/A: added Doctor-Shotgun/MS3.2-24B-Magnum-Diamond and CrucibleLab/M3.2-24B-Loki-V1.3
  - /intermediate/model/B: changed weight to 0.45
  - /intermediate/model/C: replaced zerofata/MS3.2-PaintedFantasy-24B with CrucibleLab/M3.2-24B-Loki-V1.3 and fiddled with the weights

Merge stages:
- Model Stock merge using TheDrummer/Cydonia-24B-v4 as a base.
- SLERP merge.
- NuSLERP merge using /intermediate/model/B as a base.

The following models were included in the merge:
- TheDrummer/Cydonia-24B-v4.1
- aixonlab/Eurydice-24b-v3.5
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
- zerofata/MS3.2-PaintedFantasy-v2-24B
- CrucibleLab/M3.2-24B-Loki-V1.3
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
- anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
- /intermediate/model/A
- /intermediate/model/B
- /intermediate/model/C

The following YAML configuration was used to produce this model:
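The YAML itself isn't reproduced in this listing. Purely as an illustration, one stage of a multi-stage recipe like the one described above could be written as a mergekit config along these lines; the model names and base follow the card, but which models feed which stage, and the dtype, are my assumptions:

```yaml
# Hypothetical sketch of a single stage -- NOT the author's actual config.
# Model Stock folds several finetunes back toward a common base model.
merge_method: model_stock
base_model: TheDrummer/Cydonia-24B-v4
models:
  - model: zerofata/MS3.2-PaintedFantasy-v2-24B
  - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
  - model: CrucibleLab/M3.2-24B-Loki-V1.3
dtype: bfloat16   # assumed, not stated in the card
```

Each stage would be a separate mergekit run (e.g. `mergekit-yaml stage-a.yaml /intermediate/model/A`), with later stages referencing the earlier outputs by local path.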

WeirdCompound-v1.7-24b

This is a merge of pre-trained language models created using mergekit. This is a multi-stage merge; there's little method to my madness, and I just stopped when I arrived at something that I liked. The starting point was DepravedCartographer-v1.0-24b with slight changes. (The changelog for v1.1 through v1.6 is identical to the one under the WeirdCompound v1.6 entry above.)

Quick disclaimer: a new version doesn't automatically mean "better". If you're happy with v1.6 or v1.2, they won't go away. This one has a different vibe than v1.6, but it takes me weeks to get a feel for the prose, so here it is. Shoutout to @TheDrummer for the never-ending supply of great finetunes.

Changelog:
- v1.7:
  - /intermediate/model/A: updated Cydonia to TheDrummer/Cydonia-24B-v4.2.0
  - /intermediate/model/A: replaced Doctor-Shotgun/MS3.2-24B-Magnum-Diamond with Delta-Vector/MS3.2-Austral-Winton

Merge stages:
- Model Stock merge using TheDrummer/Cydonia-24B-v4 as a base.
- SLERP merge.
- NuSLERP merge using /intermediate/model/B as a base.

The following models were included in the merge:
- TheDrummer/Cydonia-24B-v4.2.0
- aixonlab/Eurydice-24b-v3.5
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
- zerofata/MS3.2-PaintedFantasy-v2-24B
- CrucibleLab/M3.2-24B-Loki-V1.3
- Delta-Vector/Austral-24B-Winton
- anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
- /intermediate/model/A
- /intermediate/model/B
- /intermediate/model/C

The following YAML configuration was used to produce this model:
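The actual YAML isn't captured in this listing. As a purely hypothetical sketch of what the final stage of such a recipe could look like in mergekit, here is a NuSLERP config; the base follows the card, but the weight split and dtype are my assumptions, not the author's published settings:

```yaml
# Hypothetical final-stage sketch -- NOT the published recipe.
# With base_model set, NuSLERP interpolates the task vectors of the
# two intermediates relative to the shared base.
merge_method: nuslerp
base_model: /intermediate/model/B
models:
  - model: /intermediate/model/A
    parameters:
      weight: 0.55   # illustrative
  - model: /intermediate/model/C
    parameters:
      weight: 0.45   # illustrative
dtype: bfloat16      # assumed
```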

WeirdCompound-v1.5-24b

WeirdCompound-v1.1-24b

WeirdCompound-v1.0-24b

WeirdCompound-v1.2-24b

WeirdCompound-v1.3-24b

WeirdCompound-v1.4-24b

This is a merge of pre-trained language models created using mergekit. This is a multi-stage merge; there's little method to my madness, and I just stopped when I arrived at something that I liked. The starting point was DepravedCartographer-v1.0-24b with slight changes. (The changelog for v1.1 through v1.3 is identical to the one under the WeirdCompound v1.6 entry above.)

Changelog:
- v1.4:
  - /intermediate/model/B: changed the recipe to use Doctor-Shotgun/MS3.2-24B-Magnum-Diamond and Delta-Vector/MS3.2-Austral-Winton

Merge stages:
- Model Stock merge using TheDrummer/Cydonia-24B-v4 as a base.
- SLERP merge.
- NuSLERP merge using /intermediate/model/B as a base.

The following models were included in the merge:
- Delta-Vector/MS3.2-Austral-Winton
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
- aixonlab/Eurydice-24b-v3.5
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
- anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
- /intermediate/model/A
- /intermediate/model/B
- /intermediate/model/C

The following YAML configuration was used to produce this model:

DarkHazard-v2.1-24b

This is a merge of pre-trained language models created using mergekit. This merge was inspired by Yoesph/Haphazard-v1.1-24b and yvvki/Erotophobia-24B-v1.1.

Changelog:
- v2.1:
  - Updated Dans-PersonalityEngine to PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
  - Updated Eurydice to aixonlab/Eurydice-24b-v3.5
- v2.0:
  - Major version bump because of base model change: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
  - Swapped TheDrummer/Cydonia-24B-v2.1 with ReadyArt/Forgotten-Safeword-24B-v4.0. (I've been doing some tests with LatitudeGames/Harbinger-24B, but it just seemed to introduce positivity bias to my test scenarios, so it stays out for now.)
- v1.2:
  - Replaced Yoesph/Haphazard-v1.1-24b with TheDrummer/Cydonia-24B-v2.1
  - Replaced ReadyArt/Safeword-Abomination-of-Omega-Darker-GaslightThe-Final-Forgotten-Transgression-24B with ReadyArt/Broken-Tutu-24B

This model was merged using the Model Stock merge method with cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition as a base. The following models were included in the merge:
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
- aixonlab/Eurydice-24b-v3.5
- ReadyArt/Forgotten-Safeword-24B-v4.0
- ReadyArt/Broken-Tutu-24B

The following YAML configuration was used to produce this model:
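Since the YAML itself isn't captured in this listing, here is a minimal sketch of what the recipe described above (Model Stock over the four listed models, with Dolphin-Mistral as base) would look like as a mergekit config; the dtype is an assumption, and the published config may differ in details:

```yaml
# Sketch reconstructed from the card's description -- not the published file.
merge_method: model_stock
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
models:
  - model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
  - model: aixonlab/Eurydice-24b-v3.5
  - model: ReadyArt/Forgotten-Safeword-24B-v4.0
  - model: ReadyArt/Broken-Tutu-24B
dtype: bfloat16   # assumed
```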

DepravedCartographer-v1.0-24b

DarkHazard-v1.2-24b

DarkHazard-v1.3-24b

This is a merge of pre-trained language models created using mergekit. This merge was inspired by Yoesph/Haphazard-v1.1-24b.

Changelog:
- v1.2:
  - Replaced Yoesph/Haphazard-v1.1-24b with TheDrummer/Cydonia-24B-v2.1
  - Replaced ReadyArt/Safeword-Abomination-of-Omega-Darker-GaslightThe-Final-Forgotten-Transgression-24B with ReadyArt/Broken-Tutu-24B

This model was merged using the Model Stock merge method with arcee-ai/Arcee-Blitz as a base. The following models were included in the merge:
- aixonlab/Eurydice-24b-v3
- TheDrummer/Cydonia-24B-v2.1
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- ReadyArt/Broken-Tutu-24B

The following YAML configuration was used to produce this model:

DarkHazard-v2.0-24b

This is a merge of pre-trained language models created using mergekit. This merge was inspired by Yoesph/Haphazard-v1.1-24b and yvvki/Erotophobia-24B-v1.1. (The changelog for v1.2 and v2.0 is identical to the one under DarkHazard-v2.1-24b above.)

This model was merged using the Model Stock merge method with cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition as a base. The following models were included in the merge:
- ReadyArt/Forgotten-Safeword-24B-v4.0
- aixonlab/Eurydice-24b-v3
- ReadyArt/Broken-Tutu-24B
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b

The following YAML configuration was used to produce this model:
