Nexesenex

147 models • 46 total models in database

Google_Gemma-2-9b-it_iMat_Custom_Quant_Stategies-GGUF

189
0

TomGrc_FusionNet_7Bx2_MoE_v0.1-iMat.GGUF

154
2

abacusai_Smaug-Yi-34B-v0.1-iMat.GGUF

85
12

Meta_Llama-3.1-8b-it_iMat_Custom_Quant_Stategies-GGUF

license:llama3.1
81
2

Codellama-2-7b-Miniguanaco-Mistral-GGUF

license:llama2
65
3

Llama 3.X 70b Smarteaz V1

The Teaz series is my third attempt at making merges, this time on L3.x 70b, after the L3.2 3b Kostume and Kermes series. This time, the goal was to make a smart model with a low perplexity, in accordance with the principles of the Kermes series, but as a merge of 3 merged models, like on the Kostume series. Huihui's abliterated models were used:
- Llama 3.3 70b as the pivot of the first/main model.
- Nemotron 3.1 70b and Deepseek R1 Distill 70b as the pillars of the main model, and the interlaced pivots/pillars of the 2nd and 3rd models.
- Tulu 3 70b as a second pillar of the 2nd and 3rd models.

Bingo again. I hit 3.45 ppl512 wikieng, 62+ on ARC-C, and 82+ on ARC-E. Absolute top of the class for L3.x 70b, like Kermes is for L3.2 3b. No cheating, no contaminating, just the wonderful MergeKit model-stock merge technique leveraged to a new level (compared to what I had already seen being done, at least).

Next projects will involve that model as the "smarts pillar/block" of further merges, aimed at any use case. I think that most models can be tweaked the same way, with triple stock merges interlacing instruct finetunes and base finetunes, gaining overall "intelligence" and "quality" at the cost of a bit of their initial instructions, flavor and "personality".

Edit: the methodology I use is actually, in part, rediscovered hot water. Mixing (finetuned) base and (finetuned) instruct models, and using 3 models (a base, 2 sidekicks), have already been described as optimal for model-stock by some enthusiasts. The new thing is to leverage this into a tree of merges with interlaced combinations, which is the natural development of the 2 aforementioned "rules".

The adventure continues with:

DobermanV1, a Hermes-flavored Dobby on Smarteaz abliterated steroids (very good at staying "in character"):
- Nexesenex/Llama3.x70bDobermanV1 : https://huggingface.co/Nexesenex/Llama3.x70bDobermanV1 (less than 3.40 ppl 512 wiki-eng, -0.07 compared to SmarteazV1)

NemesisV1.1 (ex Negames), a Hermes-flavored Negative Llama on Smarteaz abliterated steroids (stiffer and less creative than Doberman, though; note: a mistake was corrected, Hermes lorablated replaces the vanilla version in Nemesis V1.1):
- https://huggingface.co/Nexesenex/Llama3.x70bNemesisV1.1 (less than 3.35 ppl 512 wiki-eng, -0.05 compared to DobermanV1)

EvasionV1 (ex Hermeva), a Hermes-flavored Eva01 on Smarteaz abliterated steroids (the most creative):
- https://huggingface.co/Nexesenex/Llama3.x70bEvasionV1 (less than 3.40 ppl 512 wiki-eng, -0.02 compared to DobermanV1)

TrinityV1, a merge with Evasion as base, plus Doberman and NegaTessTease to include a bit of Tess (to be tested):
- https://huggingface.co/Nexesenex/Llama3.x70bTrinityV1 (less than 3.40 ppl 512 wiki-eng, -0.03 compared to DobermanV1)

Alas, I don't have at hand a Tess R1 Limerick lorablated. On the other hand, Mlabonne lorablated Hermes 3 70b, and I found 2 other models to make a "Hermes block" and boost the creativity of the next revisions of my models, not only the smarts. Here it comes: https://huggingface.co/Nexesenex/Llama3.x70bHarpiesV1

I (and many of us mergers, I believe) would need the following models abliterated to improve our merges, if Huihui-ai or someone else could help:
- https://huggingface.co/SicariusSicariiStuff/NegativeLLAMA70B
- https://huggingface.co/SentientAGI/Dobby-Unhinged-Llama-3.3-70B

I also tried to lorablate L3.1 70b Tess R1 Limerick and L3.1 70b Calme 2.3, but I wasn't able to do so successfully (if someone could do that, it would be fantastic!):
- https://huggingface.co/migtissera/Tess-R1-Limerick-Llama-3.1-70B
- https://huggingface.co/MaziyarPanahi/calme-2.3-llama3.1-70b
- The LoRA: https://huggingface.co/mlabonne/Llama-3-70B-Instruct-abliterated-LORA
- The yaml I used:

Kudos go to the model authors, and to the Arcee / MergeKit folks, as well as to HF for hosting the MergeKit App. Also a big-up to SteelSkull: watching him cook Nevoria is what convinced me to try making merges myself. And to all those inspiring finetuners who give their time, sometimes their money, a good time and some inspiration to others by tuning models.

First: on the Kostume series, started on 11/02/2025, I tried to make a triple stock merge of 3 intermediary stock merges of a dozen models or so, to see if I could pile up their abilities. Not bad, but nothing special about it; it's a bit hard for me to judge at 3b.

Second: on the Kermes series, started the day after, I defined a simpler approach:
- Perplexity is the main constraint. Usual L3.2 3b finetunes are around 10.5-11 ppl512wikieng; Hermes is around 9.5. I also measure in French and Serbian to observe the variance.
- ARC Challenge and Easy are the second constraint, to judge basic logic. Usual L3.2 3b finetunes hit 40 and 60-65 respectively; Hermes3 hits 47+ and 70+.
- Lack of censorship: I always keep in mind to pick models compatible with that, as much as possible, be it through the picked models' abliteration or through the datasets they use.
- And of course, testing, both in Kobold/Croco.CPP (spamming very offensive requests) and in ST (a 10k prompt with a big lorebook).

The Kermes series is basically stock merges on top of one another. The goal was to preserve as much as possible the qualities of the models used, so I stay at 1+2 models for the first merge, and 1+2 for the second as well. And bingo: perplexity still goes down, ARC remains stable, and the result is still quite unhinged, and quite coherent, even at 10k+ context.

GGUF iMatrix quantizations (thanks Mradermacher!):

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Nexesenex/Llama3.x70bSmarteaz0.1 as a base. The following models were included in the merge:
- Nexesenex/Llama3.x70bSmarteaz0.2R1
- Nexesenex/Llama3.x70bSmarteaz0.2NMT

The following YAML configuration was used to produce this model:
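The YAML itself did not survive the page extraction. As a sketch only, not the author's actual config, a Model Stock merge of the two Smarteaz sub-merges over their 0.1 base could be declared and run like this with mergekit (the file name, dtype and flags below are assumptions):

```bash
# Hypothetical sketch, not the original config: a mergekit "model_stock"
# merge of the two sub-merges named above over their common base.
cat > smarteaz_v1.yaml <<'EOF'
merge_method: model_stock
base_model: Nexesenex/Llama3.x70bSmarteaz0.1
models:
  - model: Nexesenex/Llama3.x70bSmarteaz0.2R1
  - model: Nexesenex/Llama3.x70bSmarteaz0.2NMT
dtype: bfloat16    # assumed; the card does not state the dtype
EOF

# Standard mergekit CLI invocation (output directory is illustrative).
mergekit-yaml smarteaz_v1.yaml ./Llama3.x70bSmarteazV1 --cuda
```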

llama
55
6

Llama_3.x_70b_Evasion_V1

llama
54
1

brucethemoose_Yi-34B-200K-DARE-megamerge-v8-iMat.GGUF

52
4

bhenrym14_airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GGUF

49
2

vicuna-33b-v1.3-GGUF

47
0

Llama_3.x_70b_Doberman_V1

llama
41
1

airoboros-33b-gpt4-2.0-GGUF

41
0

airoboros-33b-gpt4-m2.0-GGUF

36
0

bhenrym14_airophin-v2-13b-PI-8k-iMat.GGUF

31
0

Llama_3.2_1b_RandomLego_RP_R1_0.1

License: llama3.2, Library: transformers

llama
27
2

Airoboros-c34b-2.2.1-Mistral-GGUF

license:llama2
26
2

airoboros-c34b-2.2.1-GGUF

license:llama2
26
0

chargoddard_llama-2-34b-uncode-iMat.GGUF

19
0

Llama_3.2_1b_RandomLego_RP_R1_0.1-GGUF

llama-cpp
19
0

brucethemoose_Yi-34B-200K-DARE-merge-v7-iMat.GGUF

18
1

jondurbin_bagel-7b-v0.4-iMat.GGUF

18
0

cloudyu_Mixtral_34Bx2_MoE_60B-iMat.GGUF

17
1

bhenrym14_airoboros-3_1-yi-34b-200k-iMat.GGUF

17
0

TomGrc_FusionNet_34Bx2_MoE_v0.1-iMat.GGUF

16
5

Airoboros-33b-3.1.2-GGUF

15
0

brucethemoose_Yi-34b-Capybara-200K-Fixed-Temp-iMat.GGUF

12
0

Llama_3.2_3b_KermesPink_V3.6-GGUF

Nexesenex/Llama3.23bKermesPinkV3.6-F16-GGUF: this model was converted to GGUF format from `Nexesenex/Llama3.23bKermesPinkV3.6` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
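The actual shell commands were stripped during extraction; the standard GGUF-my-repo recipe that this card paraphrases looks roughly like the sketch below. The quantized file name and the prompt are assumptions, not values from the original card.

```bash
# Option 1: install llama.cpp via brew (macOS/Linux) and stream the GGUF
# straight from the Hub. The --hf-file value is an assumed file name.
brew install llama.cpp
llama-cli --hf-repo Nexesenex/Llama3.23bKermesPinkV3.6-F16-GGUF \
          --hf-file llama3.23bkermespinkv3.6-f16.gguf \
          -p "Once upon a time"

# Option 2: build from source with CURL support, adding hardware flags
# such as LLAMA_CUDA=1 for Nvidia GPUs on Linux.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make
```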

llama-cpp
8
0

Llama_3.1_8b_Smarteaz_V1.01

License: llama3.1, Library: transformers

llama
6
3

Llama_3.2_3b_Kermes_v2

Base model: Nexesenex/Llama 3.2 3B Kermes 0.2 bf16, cognitivecomputations/Dolphin3.0-Llama3.2-3B.

llama
6
2

Llama_3.1_8b_DoberWild_v2.03

Base model: SentientAGI Dobby Mini Unhinged Llama 3.1 8B, Nexesenex Llama 3.1 8B Smarteaz V1.01.

llama
5
4

SanjiWatsuki_SUS-Wizard-Yi-34B-iMat.GGUF

5
0

Dolphin3.0-Llama3.1-1B-abliterated

License: llama3.1 Base model: cognitivecomputations/Dolphin3.0-Llama3.2-1B.

llama
5
0

Nemotron_W_4b_MagLight_0.1

Library name: transformers, tags: mergekit.

llama
4
3

Llama_3.1_8b_DeepDive_3_Prev_v1.0

License: llama3.1, Library: transformers

llama
4
3

Llama_3.2_3b_Kermes_v2.1

Base model: cognitivecomputations/Dolphin3.0-Llama3.2-3B, SaisExperiments/Evil-Alpaca-3B-L3.2.

llama
4
2

Qwen_2.5_3b_Smarteaz_0.01a

Library name: transformers, tags: mergekit.

4
1

Llama_3.x_70b_Tess_Dolphin_128K_v1.2

llama
4
1

fblgit_UNA-34BeagleSimpleMath-32K-v1-iMat.GGUF

4
0

one-man-army_UNA-34Beagles-32K-v1-iMat.GGUF

4
0

Cgato_Thespis-Yi-34b-v0.7-iMat.GGUF

4
0

cloudyu_Mixtral_7Bx2_MoE_13B-iMat.GGUF

4
0

Llama_3.2_1b_Syneridol_0.2

License: llama3.2, Library: transformers

llama
4
0

Llama_3.2_1b_Syneridol_0.2-GGUF

llama-cpp
4
0

Llama_3.2_1b_AquaSyn_0.11

Library name: transformers, tags: mergekit.

llama
4
0

Llama_3.1_8b_Dolermed_V1.01

License: llama3.1, Library: transformers

llama
3
4

Llama_3.1_8b_Smarteaz_0.2_R1

License: llama3.1, Library: transformers

llama
3
3

Llama_3.1_8b_DoberWild_v2.02

Base model includes Nexesenex Llama 3.1 8B Smarteaz V1.01 and Nexesenex Llama 3.1 8B Hermedive R1 V1.01.

llama
3
2

Llama_3.1_8b_DodoWild_v2.02

Base model includes Nexesenex Llama 3.1 8B Dolermed R1 V1.01 and Nexesenex Llama 3.1 8B Smarteaz V1.01.

llama
3
2

Llama_3.1_8b_Hermedive_R1_V1.03

Base model: meditsolutions/Llama-3.1-MedIT-SUN-8B, huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated.

llama
3
2

Llama_3.x_70b_Hexagon_Purple_V2

llama
3
2

LaDameBlanche-v2-95b-iMat-CQ.GGUF

3
1

Llama_3.1_8b_Smarteaz_0.21_R1

llama
3
1

Llama_3.1_8b_Smarteaz_0.21_SN

llama
3
1

Llama_3.1_8b_Typhoon_v1.03

Base model: akjindal53244/Llama-3.1-Storm-8B, Nexesenex/Llama_3.1_8b_Dolermed_R1_V1.03.

llama
3
1

Llama_3.1_8b_DodoWild_v2.10

Base model: Nexesenex/Llama 3.1 8B Dolerstormed V1.04, SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B.

llama
3
1

Llama_3.x_70b_Hexagon_Blue_V1

llama
3
1

orca_mini_v9_6_1B-Instruct-GGUF

llama-cpp
3
0

Llama_3.2_1b_AquaSyn_0.1

License: llama3.2, Library: transformers

llama
3
0

pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL

License: llama3.2 Base model: pankajmathur/orca_mini_v9_6_1B-Instruct

llama
3
0

Llama_3.x_70b_UnfusedV06-Genelemo_fusion_v2

This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method using TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B as a base. The following models were included in the merge:
- zerofata/L3.3-GeneticLemonade-Unleashed-70B

The following YAML configuration was used to produce this model:
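The YAML is again elided; a minimal sketch of what an Arcee Fusion config for this pair could look like, assuming mergekit's `arcee_fusion` method identifier (the dtype and file names are illustrative, not from the original):

```bash
# Hypothetical sketch: Arcee Fusion merges exactly one model into a base.
cat > genelemo_fusion_v2.yaml <<'EOF'
merge_method: arcee_fusion
base_model: TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B
models:
  - model: zerofata/L3.3-GeneticLemonade-Unleashed-70B
dtype: bfloat16    # assumed
EOF

mergekit-yaml genelemo_fusion_v2.yaml ./UnfusedV06-Genelemo_fusion_v2
```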

llama
3
0

Llama_3.1_8b_Hermedive_V1.01

License: llama3.1, Library: transformers

llama
2
3

Llama_3.1_8b_Medusa_v1.01

License: llama3.1, Library: transformers

llama
2
3

Llama_3.2_3b_SmartiCoatz_0.1b

llama
2
2

Llama_3.1_8b_Smarteaz_0.1b

llama
2
2

Llama_3.3_70b_DarkHorse

Dark-coloration L3.3 merge, to be included in my merges. It can also be tried as a standalone for a darker Llama experience, but I didn't take the time. Edit: I took the time, and it meets its purpose.
- It's average on the basic metrics (smarts, perplexity), but it is indeed non-woke and unhinged.
- The model is not abliterated, though. It has refusals on the usual point-blank questions.
- I will play with it more, because it has potential.

My note: 3/5 as a standalone, 4/5 as a merge brick. Warning: this model can be brutal and vulgar, more than most of my previous merges.

- PPL512 WikiText Eng: 3.66 (average ++)
- ARC-C: 55.85 (average)
- ARC-E: 77.72 (average)

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using SicariusSicariiStuff/NegativeLLAMA70B as a base. The following models were included in the merge:
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- SentientAGI/Dobby-Unhinged-Llama-3.3-70B

The following YAML configuration was used to produce this model:
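As an aside on the figures quoted above: the card doesn't give the measurement commands, but PPL512-style numbers are typically produced with llama.cpp's perplexity tool over WikiText-2 at a 512-token context. An assumed recipe, not the author's exact commands (binary name per recent llama.cpp releases; the GGUF file name is hypothetical):

```bash
# Assumed measurement recipe: perplexity over WikiText-2 at a
# 512-token context with llama.cpp's perplexity tool.
wget https://huggingface.co/datasets/ggml-org/ci/resolve/main/wikitext-2-raw-v1.zip
unzip wikitext-2-raw-v1.zip
llama-perplexity -m Llama_3.3_70b_DarkHorse-Q8_0.gguf \
                 -f wikitext-2-raw/wiki.test.raw -c 512
```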

llama
2
2

Mistral-7B-Instruct-v0.2-2x7B-MoE-6.0bpw-h6-exl2

license:apache-2.0
2
1

Llama_3.1_70b_CreHearTess_V1

llama
2
1

Llama_3.2_1b_OpenTree_R1_0.1

License: llama3.2, Library: transformers

llama
2
1

Llama_3.1_8b_Hermedash_R1_V1.04

Base model: akjindal53244/Llama-3.1-Storm-8B, meditsolutions/Llama-3.1-MedIT-SUN-8B.

llama
2
1

Llama_3.1_8b_Stormeder_v1.04

Base model: meditsolutions/Llama-3.1-MedIT-SUN-8B, huihui-ai/DeepHermes-3-Llama-3-8B-Preview-abliterated.

llama
2
1

Llama_3.x_70b_SmarTricks_V1.01

Slightly unhinged version of Smarteaz. A solid model for basically everything.

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Nexesenex/Llama3.x70bSmarTricks0.11 as a base. The following models were included in the merge:
- Nexesenex/Llama3.x70bSmarTricks0.21R1
- Nexesenex/Llama3.x70bSmarTricks0.21NMT

The following YAML configuration was used to produce this model:

llama
2
1

Llama_3.x_70b_Triads_V3

llama
2
1

Llama_3.3_70b_Evalseuses_v1.0

llama
2
1

Llama_3.x_70b_Hexagon_Blue_V3

llama
2
1

Cgato_Thespis-Yi-34b-DPO-v0.7-iMat.GGUF

2
0

yunconglong_Mixtral_7Bx2_MoE_13B_DPO-iMat.GGUF

2
0

Llama-3.2-1B-Instruct-Open-R1-GRPO-GGUF

Nexesenex/Llama-3.2-1B-Instruct-Open-R1-GRPO-GGUF: this model was converted to GGUF format from `zztheaven/Llama-3.2-1B-Instruct-Open-R1-GRPO` using Croco.cpp, a fork of llama.cpp, via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1 (necessary to use Croco): Clone llama.cpp from GitHub. Step 2 (necessary to use Croco): Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
2
0

Llama_3.2_1b_Sydonia_0.1

License: llama3.2, Library: transformers

llama
2
0

Llama_3.2_1b_Synopsys_0.11

Library name: transformers, tags: mergekit

llama
2
0

Llama_3.1_8b_Dolerstormed_V1.04

Base model: Nexesenex/Llama 3.1 8B Hermedash R1 V1.04, Nexesenex/Llama 3.1 8B Dolermed R1 V1.03.

llama
2
0

Llama_3.2_1b_OrcaSun_V1

Base model: Nexesenex Pankajmathur Orca Mini V9 6 1B Instruct Abliterated LPL, Meditsolutions Llama 3.2 SUN 1B chat.

llama
2
0

Llama_3.x_70b_Erasmus_V1.11

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using LatitudeGames/Wayfarer-Large-70B-Llama-3.3 as a base. The following models were included in the merge:
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- Nexesenex/Llama3.x70bSmarteazV1

The following YAML configuration was used to produce this model:

llama
2
0

Llama_3.x_70b_SmarTracks_V1.01

llama
2
0

Llama_3.x_70b_Nemdohertess_v2.0

llama
2
0

Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02

This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear DELLA merge method using huihui-ai/Llama-3.3-70B-Instruct-abliterated as a base. The following models were included in the merge:
- aaditya/Llama3-OpenBioLLM-70B

The following YAML configuration was used to produce this model:
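Once more the YAML is missing from the scrape; a minimal sketch of a Linear DELLA config, assuming mergekit's `della_linear` method name and illustrative weight/density values (not the author's):

```bash
# Hypothetical sketch: della_linear prunes each model's deltas against
# the base, then merges the survivors linearly. Parameter values assumed.
cat > openbio_128k.yaml <<'EOF'
merge_method: della_linear
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
models:
  - model: aaditya/Llama3-OpenBioLLM-70B
    parameters:
      weight: 0.5      # assumed
      density: 0.7     # assumed: fraction of deltas kept
dtype: bfloat16
EOF

mergekit-yaml openbio_128k.yaml ./L3.3_OpenBioLLM_128K_v1.02
```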

llama
2
0

Llama_3.x_70b_SmarTricks_v1.50

llama
2
0

Llama_3.x_70b_FLDx2-L3.3_abliterated_fusion_norm

llama
2
0

Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm

This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method using hitachi-nlp/Llama-3.1-70B-FLDx2 as a base. The following models were included in the merge:
- migtissera/Tess-3-Llama-3.1-70B

The following YAML configuration was used to produce this model:

llama
2
0

Llama_3.1_70b_FLDx2-Tess3_fusion_v2

llama
2
0

Llama_3.3_70b_Negative_Wayfarer_fusion_v2

llama
2
0

Llama_3.x_70b_Legion_Electra_fusion_v2

llama
2
0

Llama_3.1_8b_DeepDive_3_R1_Prev_v1.0

License: llama3.1, Library: transformers

llama
1
4

Llama_3.1_8b_DodoWild_v2.03

Base model: Nexesenex/Llama 3.1 8b Dolermed R1 V1.03, Nexesenex/Llama 3.1 8b Smarteaz V1.01.

llama
1
4

Llama_3.2_3b_Kermes_v1

License: llama3.2, Library: transformers

llama
1
3

Nemotron_W_4b_Halo_0.1

Library name: transformers, tags: mergekit.

llama
1
3

Llama_3.1_8b_Hermedive_R1_V1.01

License: llama3.1, Library: transformers

llama
1
3

Llama_3.x_8b_Smarteaz_0.1a

llama
1
2

Llama_3.1_8b_DobHerWild_R1_v1.1

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B as a base. The following models were included in the merge:
- Nexesenex/Llama3.18bSmarteaz0.2R1
- Nexesenex/Llama3.18bDeepDive3Prevv1.0

The following YAML configuration was used to produce this model:

llama
1
2

Llama_3.1_8b_DodoWild_v2.01

License: llama3.1, Library: transformers

llama
1
2

Llama_3.1_8b_Dolermed_R1_V1.01

Base model: meditsolutions/Llama-3.1-MedIT-SUN-8B, huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated.

llama
1
2

Llama_3.1_8b_Dolermed_R1_V1.03

Base model: huihui-ai DeepHermes 3 Llama 3 8B Preview abliterated, huihui-ai Dolphin 3.0 Llama 3.1 8B abliterated.

llama
1
2

Llama_3.x_70b_Dolmen_v1.2

llama
1
2

Gemma-3-4b_X-Ray-Abli_Linear_v1.01

This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear merge method. The following models were included in the merge:
- SicariusSicariiStuff/X-RayAlpha
- mlabonne/gemma-3-4b-it-abliterated

The following YAML configuration was used to produce this model:
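The config is elided here too; a linear merge is a plain weighted average of the checkpoints, so a minimal sketch could look like this (the 50/50 weights are an assumption):

```bash
# Hypothetical sketch: plain weighted average of the two models.
cat > xray_abli_linear.yaml <<'EOF'
merge_method: linear
models:
  - model: SicariusSicariiStuff/X-RayAlpha
    parameters:
      weight: 0.5    # assumed
  - model: mlabonne/gemma-3-4b-it-abliterated
    parameters:
      weight: 0.5    # assumed
dtype: bfloat16
EOF

mergekit-yaml xray_abli_linear.yaml ./Gemma-3-4b_X-Ray-Abli_Linear_v1.01
```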

1
2

mrcuddle_DarkHermes3-Llama3.2-3B-Instruct

llama
1
1

Llama_3.1_70b_Hearts_V1

llama
1
1

Llama_3.2_1b_Dolto_0.1

License: llama3.2, Library: transformers

llama
1
1

Llama_3.x_70b_L3.3_Athene_128K_v1.02

llama
1
1

Llama_3.x_70b_L3.3_UltraMedical_128K_v1.02

llama
1
1

mrcuddle_Dark-Hermes3-Llama3.2-3B

llama
1
0

Llama_3.x_70b_Doberman_V1.1

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using SentientAGI/Dobby-Unhinged-Llama-3.3-70B as a base. The following models were included in the merge:
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- Nexesenex/Llama3.x70bSmarteazV1

The following YAML configuration was used to produce this model:

llama
1
0

mrcuddle_Tiny-DarkLlama3.2-1B-Instruct-v0.2

llama
1
0

Llama_3.2_1b_Synwave_0.1

This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method using Nexesenex/Llama3.21bSynopsys0.1 as a base. The following models were included in the merge:
- Nexesenex/Llama3.21bAquaSyn0.1

The following YAML configuration was used to produce this model:

llama
1
0

Llama_3.2_1b_Odyssea_Escalation_0.0-GGUF

llama-cpp
1
0

Llama_3.2_1b_Odyssea_Escalation_0.2-GGUF

llama-cpp
1
0

Llama_3.2_1b_Sydonia_0.1-GGUF

1
0

Llama_3.x_70b_Nethesis_V1.1

llama
1
0

Llama_3.1_8b_Mediver_V1.01

License: llama3.1, Library: transformers

llama
1
0

Llama_3.x_70b_Tess_OpenBioLLM_128K_v1.0

llama
1
0

Llama_3.x_70b_Tess_FeelTheAGI-Wizard-L3_128K_v1.0

llama
1
0

Llama_3.x_70b_SmarTrident_v1.02

llama
1
0

Llama_3.x_70b_L3.3-Nemotron_abliterated_fusion

llama
1
0

Llama_3.x_70b_L3.3-DoppelGutenberg_abliterated_fusion

llama
1
0

Llama_3.x_70b_SmarTricks_v1.70

llama
1
0

Llama_3.2_3b_Kermes_0.20

llama
0
3

Llama_3.x_70b_Nemesis_V1.1

llama
0
3

Llama_3.1_8b_DobHerWild_R1_v1.1R

License: llama3.1, Library: transformers

llama
0
3

Llama_3.1_8b_DoberWild_v2.01

License: llama3.1, Library: transformers

llama
0
3

Llama_3.x_70b_Hexagon_Purple_V1

After a lot of "lego merges" to experiment, let's start a basket-merge series! The base is the third version of Smarteaz, SmarTracks, in which the R1 model is itself a merge between R1, R1 without Chinese censorship, and R1 Fallen Llama. That base has shown itself excellent at empowering any model thrown at it. Nemotron and Tulu complete the mix. My 5 favorite L3.3 models (Negative Llama, EVA, Dobby, Fallen Llama of course, and Wayfarer) are included in submerges, starting with the well-endowed Pernicious Prophecy (including a bit of Sao10K's Euryale 2.2 through the 70Blivion model). Hermes and Tess are also included in submerges, in their abliterated versions. Hermes also has its Gutenberg Doppel version. Some abliterated or uncensored L3 models are also wrapped in, like Lumitron Abliterated (including some NeverSleep work) or Creative Llama.

Benchmarks are traded for creativity in this merge, so:
- PPL Wikitext Eng 512: 3.54 (good)
- ARC-C: 59.20 (good)
- ARC-E: 80.70 (good also)

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Nexesenex/Llama3.x70bSmarTracksV1.01 as a base, the "smart base" of the model: a 3-level merge-stock mix of an abliterated Llama 3.3 finetune (the root), the Deepseek R1 Distill based Fallen Llama, Nemotron, and Tulu.

The following models were included in the merge:
- SentientAGI/Dobby-Unhinged-Llama-3.3-70B: for its unhinged "personality traits".
- Black-Ink-Guild/PerniciousProphecy70B: a balanced, healed merge-stock steering with Eva (creativity), Negative Llama (debiasing), L3.1 Oblivion (general intelligence), and Open-Bio (anatomy and medicine).
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3: to darken the model and the RP scenarios.
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1: to highlight and consolidate the R1 capabilities and spice up/darken the model.
- Nexesenex/Llama3.170bTearDropsV1.11: a legacy 3.1 merge, led by Tess R1, including the Hermes-based Gutenberg Doppel and an uncensored creative finetune.

The following YAML configuration was used to produce this model:

llama
0
3

Llama_3.1_8b_DobHerWild_R1_v1.0

llama
0
2

Llama_3.1_8b_DobHerLeashed_R1_v1.0

llama
0
2

Llama_3.2_3b_PlayMess_0.1

llama
0
2

Llama_3.x_70b_Trojka_V2

llama
0
2

Codellama-2-7b-Miniguanaco-Mistral

llama
0
1

Airoboros-c34b-2.2.1-Mistral

llama
0
1

Llama_3.2_3b_SmartiPantz_0.21

llama
0
1

Llama_3.2_3b_SmartiHatz_0.1b

llama
0
1

Llama_3.2_3b_Kostume_v1

llama
0
1

Llama_3.x_70b_Nemesis_V1

llama
0
1

Llama_3.1_70b_Harpies_V1

llama
0
1

Llama_3.1_70b_Hostess_V1

llama
0
1

meditsolutions_Llama-3.2-SUN-1B-Instruct

llama
0
1

Llama_3.x_70b_Nemeslices_V1.4

llama
0
1

Llama_3.1_8b_Smarteaz_0.11a

llama
0
1

Llama_3.3_70b_DeepSeek_R1_Dropable_V1.01

llama
0
1

Llama_3.x_70b_PentEva_FreeEra_V1.11

llama
0
1

Llama_3.x_70b_Hexagon_Pink_V1

Changes from Hexagon Purple V2:
- Electra becomes the base and lead model.
- ReadyArt's Forgotten Safeword enters and goes second, to unhinge the model. A bet, hence Hexagon Pink.
- Smarteaz goes out; I will make a new version of my "smart merge" soon enough, and Electra will take over the smarts for now.
- Priestess becomes HighPriestess; Lumitron is back within it.

What stays:
- DoppelGanger R1 stays, to keep reinforcing the R1 skills and bring Dobby's personality and more of Wayfarer.
- Gutenberg Doppel stays, for Hermes' smarts and writing skills.
- Tess stays, as the perplexity dropper.

- ARC-C: 58.85 (average+)
- ARC-E: 82.65 (very good)
- PPL 512 Wikitext Eng: 3.28 (very good)

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Steelskull/L3.3-Electra-R1-70b as a base. The following models were included in the merge:
- migtissera/Tess-3-Llama-3.1-70B
- Nexesenex/Llama3.170bHighPriestessR1V1
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- NexesMess/Llama3.370bDoppelGangerR1
- Strangedove/ReadyArtForgotten-Safeword-70B-3.6-EmbedFix

The following YAML configuration was used to produce this model:

llama
0
1

Llama_3.x_70b_Electra-Legion_fusion_v2

This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method using Steelskull/L3.3-Electra-R1-70b as a base. The following models were included in the merge:
- Tarek07/Legion-V2.1-LLaMa-70B

The following YAML configuration was used to produce this model:

llama
0
1