mergekit-community

418 models

Slush-ChatWaifu-Rocinante-sunfall

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/mergekit-slerp-uegcctd and knifeayumu/Rocinante-12B-v1-nemo-sunfall-v0.6.1-SLERP.

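None of the YAML blocks survived in this listing. As a hedged sketch only, a two-model SLERP merge of this kind is written in mergekit's config syntax roughly as follows; the 40-layer range and the t value are illustrative assumptions, not the recipe actually used:

```yaml
# Illustrative sketch, not the card's actual configuration.
slices:
  - sources:
      - model: mergekit-community/mergekit-slerp-uegcctd
        layer_range: [0, 40]   # assumed layer count for a Mistral-Nemo-class 12B
      - model: knifeayumu/Rocinante-12B-v1-nemo-sunfall-v0.6.1-SLERP
        layer_range: [0, 40]
merge_method: slerp
base_model: mergekit-community/mergekit-slerp-uegcctd
parameters:
  t: 0.5   # interpolation factor: 0 keeps the base, 1 keeps the second model
dtype: bfloat16
```

A config like this is turned into a checkpoint with mergekit's `mergekit-yaml` CLI; t can also be varied per tensor type via filters.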
987
2

Qwen3-7B-Instruct

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with Qwen/Qwen2.5-7B-Instruct as the base. The following models were included in the merge: Qwen/Qwen2.5-Coder-7B-Instruct and Qwen/Qwen2.5-Math-7B-Instruct.

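For illustration only (the card's actual values were not preserved), a TIES merge of two fine-tunes onto a shared base is typically expressed like this, with placeholder density and weight values:

```yaml
# Illustrative sketch; density is the fraction of each task vector kept
# after TIES trimming, weight its share in the final sum.
models:
  - model: Qwen/Qwen2.5-Coder-7B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: Qwen/Qwen2.5-7B-Instruct
parameters:
  normalize: true
dtype: bfloat16
```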
251
1

Qwen3-1.5B-Instruct

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with Qwen/Qwen2.5-1.5B-Instruct as the base. The following models were included in the merge: Qwen/Qwen2.5-Math-1.5B-Instruct and Qwen/Qwen2.5-Coder-1.5B-Instruct.

149
1

Deepseek-R1-Distill-NSFW-RPv1

llama
79
20

mergekit-model_stock-prczfmj

63
2

nsfw_merge_test_v4dot1

llama
30
1

Alice-12B

14
0

mergekit-model_stock-ysywggg

llama
8
1

uncensored-mix

llama
8
1

Llama-3-DeepSeek-R1-Distill-8B-LewdPlay-Uncensored

llama
7
4

nsfw-w-deepseek-r1-retry

llama
7
2

mergekit-model_stock-fpfjlqs

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with mergekit-community/sexeh_time_testing + kik41/lora-type-descriptive-llama-3-8b-v2 as the base. The following models, each the same checkpoint with a different LoRA applied, were included in the merge (see the sketch after this list):
- mergekit-community/sexeh_time_testing + vannynakamura/finetunemodelsmedicalAI
- mergekit-community/sexeh_time_testing + Azazelle/Nimue-8B
- mergekit-community/sexeh_time_testing + BeastGokul/Bio-Medical-MultiModal-Llama-3-8B-Finetuned
- mergekit-community/sexeh_time_testing + ResplendentAI/SmartsLlama3
- mergekit-community/sexeh_time_testing + Azazelle/ANJIR-ADAPTER-128

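The `checkpoint + adapter` notation in these cards is mergekit's syntax for applying a LoRA to a model before it enters the merge. A trimmed, illustrative Model Stock config in that style, showing two of the five merge targets (not the card's actual YAML):

```yaml
# Illustrative sketch. "model+lora" applies the adapter before merging.
models:
  - model: mergekit-community/sexeh_time_testing+vannynakamura/finetunemodelsmedicalAI
  - model: mergekit-community/sexeh_time_testing+Azazelle/Nimue-8B
merge_method: model_stock
base_model: mergekit-community/sexeh_time_testing+kik41/lora-type-descriptive-llama-3-8b-v2
dtype: bfloat16
```

Model Stock derives its interpolation weights from the geometry of the checkpoints themselves, which is why no per-model weights appear.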
llama
7
1

SuperQwen-2.5-1.5B

6
2

Llama3.1-1B-THREADRIPPER

llama
6
0

HX-Mistral-3B_v0.1

5
3

Arisu-12B

5
2

Alicer-12B

5
1

mergekit-slerp-srinwor

5
0

mergekit-ties-cbdfmuk

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with OpenLLM-Ro/RoMistral-7b-Instruct as the base. The following model was included in the merge: mistralai/Mistral-7B-Instruct-v0.3.

5
0

Toppy-Synatra-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the NuSLERP merge method. The following models were included in the merge: Undi95/Toppy-M-7B and maywell/Synatra-7B-v0.3-RP.

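NuSLERP generalizes SLERP by taking relative weights per model instead of a single t. A minimal illustrative config for this pair (the equal weights are an assumption):

```yaml
# Illustrative sketch only.
models:
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.5
  - model: maywell/Synatra-7B-v0.3-RP
    parameters:
      weight: 0.5
merge_method: nuslerp
dtype: bfloat16
```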
5
0

because_im_bored_nsfw1

llama
4
1

BetterGPT2

4
0

L3.1-Artemis-e-8B

llama
4
0

config_smart_ablit

llama
4
0

Qwen2.5-14B-YOYO-DS-V6

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with Azure99/Blossom-V6-14B as the base. The following models were included in the merge: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B and qihoo360/Light-R1-14B-DS.

4
0

mergekit-dare_ties-mgtzoms

4
0

mergekit-linear-iwfvdmg

llama
4
0

Mistral-Small-2501-SCE-Mashup-24B

base_model:trashpanda-org/Llama3-24B-Mullein-v1
3
5

QwQ-32B-Preview-Instruct-Coder

3
4

good_mix_model_Stock

llama
3
2

L3.1-Artemis-c-8B

llama
3
1

mergekit-model_stock-injkqri

3
1

Llama-3-ThinkRoleplay-DeepSeek-R1-Distill-8B-abliterated

This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method with Azazelle/Llama-3-8B-contaminated-roleplay as the base. The following model was included in the merge: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated.

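DARE TIES randomly drops a fraction of each fine-tune's delta weights and rescales the survivors before TIES-style sign election. A minimal hedged sketch against the base named above (density and weight are placeholders):

```yaml
# Illustrative sketch; with one model, DARE TIES grafts a pruned task
# vector from the distill onto the roleplay base.
models:
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
    parameters:
      density: 0.5   # keep ~half of the delta weights, rescaled to compensate
      weight: 1.0
merge_method: dare_ties
base_model: Azazelle/Llama-3-8B-contaminated-roleplay
dtype: bfloat16
```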
llama
3
1

MN-Sappho-j-12B

3
1

MS-RP-whole

3
1

MN-Sappho-n2-12B

3
1

MN-Anathema-12B

3
1

MN-Hekate-Pyrtania-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with mergekit-community/MN-Hekate-Limenoskopos-12B as the base. The following models were included in the merge:
- mergekit-community/MN-Hekate-Nykhia-17B
- mergekit-community/MN-Hekate-Episkopos-17B
- mergekit-community/MN-Hekate-Nyktipolos-17B
- mergekit-community/MN-Hekate-Limenoskopos-17B

3
1

mergekit-slerp-qamquir

3
0

mergekit-slerp-hwgrlbs

3
0

mergekit-passthrough-zpfenfn

llama
3
0

Gemma-2-Ataraxy-ActionGemma-LoRA-merged

3
0

mergekit-slerp-aflqaqy

3
0

passthru-bored-plus-gguf-me-nsfw2-test

llama
3
0

mergekit-ties-rraxdhv

3
0

dsasd

base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B
3
0

QwenSpanishR-1.5B

3
0

JAJUKA-WEWILLNEVERFORGETYOU-3B

llama
3
0

MN-Sappho-n3-12B

3
0

llama-3.2-hammered-three

llama
3
0

Qwen2.5-32B-qwq-it-slerp2

3
0

mergekit-model_stock-rxtwhlc

3
0

mergekit-model_stock-lvezkfe

3
0

QWQ-Rombos-ties-TEST2

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: rombodawg/Rombos-LLM-V2.5-Qwen-32b and Qwen/QwQ-32B-Preview.

2
3

nsfw-i-like-this-one-plz-kill-me

llama
2
3

Slush-Lyra-Gutenberg-Bophades

2
3

L3.1-Athena-a-8B

llama
2
3

MN-Nyx-Chthonia-12B

2
3

MethedUp

llama
2
2

nsfw_merge_test_vFFS

llama
2
2

hopefully_humanish-rp-nsfw-test-v1

llama
2
2

NSFW-FFS-w-hidden-Deepseek-Distill-NSFW

llama
2
2

Deepseek-Distill-NSFW-visible-w-NSFW-FFS

llama
2
2

MN-Sappho-b-12B

2
2

MN-Sappho-n4-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with mistralai/Mistral-Nemo-Instruct-2407 as the base. The following models were included in the merge:
- mergekit-community/MN-Sappho-g3-12B
- mergekit-community/MN-Sappho-n2-12B
- mergekit-community/MN-Sappho-n3-12B
- mergekit-community/MN-Sappho-n-12B
- mergekit-community/MN-Sappho-j-12B

2
2

Omega-Darker_Slush-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: crestf411/MN-Slush and ReadyArt/Omega-DarkerThe-Final-Directive-12B.

2
2

mergekit-model_stock-qtseiad

2
2

mergekit-ties-vjlpsxw

2
1

Fimburs11V3

llama
2
1

mergekit-slerp-oztfijl

base_model:meta-llama/Meta-Llama-3-8B
2
1

L3.1-Vulca-Umboshima-8B

llama
2
1

Moist_Theia_21B

2
1

mergekit-ties-ueirogz

2
1

mergekit-sce-vjeombg

2
1

MN-Sappho-c-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with Khetterman/AbominationScience-12B-v4 as the base. The following models were included in the merge:
- LatitudeGames/Wayfarer-12B
- mistralai/Mistral-Nemo-Instruct-2407
- mergekit-community/MN-Sappho-b-12B
- mistralai/Mistral-Nemo-Base-2407
- inflatebot/MN-12B-Mag-Mell-R1
- mergekit-community/MN-Sappho-a-12B

2
1

Tigers-Abliterated-9B

2
1

MN-Sappho-f-12B

2
1

Mistral-Small-2501-SCE-Mashup-2-24B

base_model:trashpanda-org/Llama3-24B-Mullein-v1
2
1

MN-Sappho-k-12B

2
1

MN-Sappho-l-12B

2
1

Mistral-Small-24B-Merge-V2

2
1

L3.1-Athena-f-8B

llama
2
1

L3.1-Athena-g-8B

llama
2
1

MN-Hekate-Geneteira-12B

2
1

Qwen2.5-14B-ties-1M

2
1

MN-Hekate-Limenoskopos-12B

2
1

MN-Hekate-Noctiluca-12B-v2

2
1

mergekit-slerp-vbaesvs

2
0

mergekit-slerp-mhsbcqc

2
0

mergekit-slerp-gpprpds

2
0

mergekit-ties-aspkrwz

llama
2
0

mergekit-slerp-rxkhjnf

llama
2
0

mergekit-slerp-ieauevl

2
0

LLaMa-3-Base-Zeroed-13B

llama
2
0

TopEvolution-DPO-32K

2
0

TopEvolutionWiz

2
0

Qwen2-2B-Dolphin-RepleteCoder

2
0

mergekit-slerp-duaqshp

2
0

mergekit-ties-liyosfu

2
0

grok-13b-chat

llama
2
0

mergekit-ties-mtbkpmt

2
0

Qwen2.5-32B-Instruct-Coder-Tie

2
0

mergekit-ties-duurpfl

2
0

test_ArliAI-RPMax_guidance_all_versions_plus_o1-Open-Llama_reflection-llama

llama
2
0

mergekit-della_linear-uogzotg

llama
2
0

mergekit-della_linear-dbwwdyo

llama
2
0

Qwen2.5Minus2-0.5B-Instruct

2
0

QwenFocusedCoder2

2
0

mergekit-ties-hqqzvmi

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with Qwen/Qwen2.5-Coder-0.5B-Instruct as the base. The following model was included in the merge: Qwen/Qwen2.5-0.5B-Instruct.

2
0

mergekit-slerp-fmrazcr

llama
2
0

mergekit-dare_ties-psqsabe

llama
2
0

mergekit-dare_ties-iezesml

llama
2
0

mergekit-model_stock-bzcrthr

llama
2
0

mergekit-dare_ties-ajgjgea

llama
2
0

mergekit-slerp-wduahvh

2
0

mergekit-slerp-fgoimpq

2
0

mergekit-slerp-dehplhb

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: crestf411/MN-Slush and nbeerbower/mistral-nemo-bophades3-12B.

2
0

mergekit-model_stock-olgorhm

2
0

mergekit-slerp-zbeneng

2
0

mergekit-slerp-xeugntu

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/mergekit-sce-xgsvvmh and mergekit-community/mergekit-model_stock-hwudfad.

2
0

MS3-RP-half1

base_model:trashpanda-org/Llama3-24B-Mullein-v1
2
0

mergekit-slerp-rayqjvs

2
0

mergekit-model_stock-izmzpot

2
0

mergekit-slerp-ijgjytz

2
0

mergekit-ties-asjuuws

2
0

Qwen2.5-32B-it-pro

2
0

mergekit-slerp-wgdlrrb

llama
2
0

Qwen2.5-14B-della-code

2
0

Qwen2.5-14B-1M

2
0

mergekit-sce-nsexkut

2
0

mergekit-dare_ties-psxhlrx

This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method with Qwen/Qwen2.5-14B as the base. The following models were included in the merge: Krystalan/DRT-14B, netease-youdao/Confucius-o1-14B, and huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated.

2
0

Holgerim-Llama-7b

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: dominguesm/Canarim-7B-Instruct and trollek/Holger-7B-v0.1.

llama
2
0

MN-Hekate-Anassa-17B

2
0

mergekit-slerp-ryfxivm

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: sometimesanotion/Lamarck-14B-v0.7 and sometimesanotion/Qwenvergence-14B-v11.

2
0

mergekit-della-efwskwi

2
0

Qwen2.5-7B-fuse-della

2
0

mergekit-slerp-znbfpqv

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/mergekit-slerp-pjjpegi and mergekit-community/mergekit-slerp-irynmhm.

2
0

mergekit-slerp-qregpbv

2
0

mergekit-passthrough-atuidyj

2
0

mergekit-della-brubxsv

2
0

mergekit-della-mhapspp

2
0

Mistral-rp-24b-karcher

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method with cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition as the base. The following models were included in the merge (an illustrative config follows the list):
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- PocketDoc/Dans-DangerousWinds-V1.1.1-24b
- ReadyArt/Omega-DarkerThe-Final-Directive-24B
- huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
- trashpanda-org/MS-24B-Instruct-Mullein-v0

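The Karcher Mean method averages checkpoints by iterating toward their Riemannian barycenter rather than taking a plain arithmetic mean. A trimmed sketch of such a config, showing three of the five models (nothing here is taken from the card):

```yaml
# Illustrative sketch only.
models:
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
  - model: PocketDoc/Dans-DangerousWinds-V1.1.1-24b
  - model: trashpanda-org/MS-24B-Instruct-Mullein-v0
merge_method: karcher
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
dtype: bfloat16
```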
2
0

Rombos-QWQ-ties-TEST

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: rombodawg/Rombos-LLM-V2.5-Qwen-32b and Qwen/QwQ-32B-Preview.

1
4

Qwen2.5-14B-Coder-Merge

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with Qwen/Qwen2.5-14B as the base. The following models were included in the merge: rombodawg/Rombos-Coder-V2.5-Qwen-14b, Qwen/Qwen2.5-Coder-14B-Instruct, and Qwen/Qwen2.5-Coder-14B.

1
3

MN-Sappho-g-12B

1
3

UltraLong-Thinking

llama
1
3

Irix-12B_Slush_V2

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: DreadPoor/Irix-12B-ModelStock and mergekit-community/Slush-Lyra-Gutenberg-Bophades.

1
3

VirtuosoSmall-InstructModelStock

1
2

nsfw-merge-v4dot1-w-deepseek-ablit

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/nsfw_merge_test_v4dot1 and stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated.

llama
1
2

MN-Sappho-m-12B

1
2

mergekit-dare_ties-twfgema

1
2

mergekit-slerp-dclolyo

1
1

LLaMa-3-8B-First-8-Layers

llama
1
1

mergekit-slerp-rfokseh

1
1

L3.1-Artemis-d-8B

llama
1
1

mergekit-passthrough-dgucanu

1
1

sexeh_time_testing

llama
1
1

mergekit-dare_ties-ocypetp

llama
1
1

L3.1-Artemis-g-8B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with kromeurus/L3.1-Ablaze-Vulca-v0.1-8B as the base. The following models were included in the merge: mergekit-community/L3-Boshima-a, Sao10K/L3-8B-Lunaris-v1, and Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B.

llama
1
1

nsfw_merge_testv6

llama
1
1

mergekit-passthrough-smmjedo

1
1

qwenben

1
1

dolphinllamaseekv2

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: deepseek-ai/DeepSeek-R1-Distill-Llama-8B and cognitivecomputations/Dolphin3.0-Llama3.1-8B.

llama
1
1

mergekit-slerp-bfzxelq

1
1

mergekit-slerp-jqcnjsm

1
1

Cute_Experiment-8B

llama
1
1

MN-Sappho-d-12B

1
1

censored-mix

llama
1
1

L3.1-Athena-b-8B

llama
1
1

L3.1-Athena-j-8B

llama
1
1

L3.1-Athena-l-8B

llama
1
1

L3.1-Athena-l2-8B

llama
1
1

Llama3.3-Grand_Lemonade-70B

llama
1
1

nsfw-i-mean-it-plz-kill-me-part2

llama
1
1

mergekit-model_stock-tiwlqms

1
1

R1-JSON

1
1

mergekit-slerp-kxiunve

1
0

mergekit-slerp-dieybqi

1
0

mergekit-slerp-yebtzzv

1
0

mergekit-slerp-gmjodqj

1
0

mergekit-slerp-dtieltq

1
0

mergekit-slerp-emgmhsf

1
0

mergekit-slerp-zwkhacc

1
0

mergekit-slerp-wahogcx

1
0

mergekit-slerp-uwupwsk

1
0

mergekit-slerp-bnhzjvv

1
0

mergekit-slerp-zzizhry

1
0

mergekit-slerp-jeyctse

1
0

llama-world

llama
1
0

mergekit-slerp-ueqsixf

1
0

mergekit-slerp-qzxjuip

1
0

mergekit-slerp-kxeioog

1
0

dolphin-mistral-instruct-7b

1
0

mergekit-slerp-aywerbb

1
0

mergekit-slerp-flctqsu

1
0

mergekit-slerp-ynceepa

1
0

mergekit-slerp-fodinzo

1
0

mergekit-dare_ties-ymiqjtz

1
0

mergekit-ties-cmdmayc

1
0

mergekit-slerp-ojqhjfr

llama
1
0

mergekit-passthrough-anunwkh

llama
1
0

mergekit-passthrough-dmirwnd

llama
1
0

Llama3-13B-ku

llama
1
0

Hermes-2-Pro-Llama-3-13B

llama
1
0

mergekit-ties-ujwvugo

llama
1
0

LLaMa-3-8B-First-4-Layers

llama
1
0

L3-Inverted-Rainbow-RP-v2-OVA-8B

llama
1
0

Qwen2-2B-Dolphin-Hercules

1
0

Qwen2-2B-RepleteCoder-Hercules

1
0

Berry-Spark-7B

1
0

mergekit-slerp-zbuqguo

llama
1
0

SonnyD

llama
1
0

mergekit-ties-gxhsjzj

llama
1
0

mergekit-della-kstssvv

llama
1
0

mergekit-ties-oysoxmc

llama
1
0

mergekit-slerp-epibiuy

llama
1
0

mergekit-della_linear-iwescit

llama
1
0

mergekit-slerp-wphccbj

1
0

mergekit-slerp-qtidaqf

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: TheDrummer/UnslopNemo-12B-v4.1 and anthracite-org/magnum-v4-12b.

1
0

Qwen2.5-32B-Instruct-Coder-Merge-Tool-use

1
0

Qwen2.5-7B-Instruct-Coder-Merge-Tool-use

1
0

mergekit-della_linear-hvzpnws

1
0

mergekit-linear-ugyqudc

1
0

mergekit-ties-htjjeox

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with chuanli11/Llama-3.2-3B-Instruct-uncensored as the base. The following model was included in the merge: bunnycore/Llama-3.2-3B-Mix.

llama
1
0

mergekit-della_linear-vpjjtsa

llama
1
0

final_test_ArliAI-RPMax_guidance_all_versions_plus_top_3_models

This is a merge of pre-trained language models created using mergekit. This model was merged using the della_linear merge method with mergekit-community/test_ArliAI-RPMax_guidance_all_versions_plus_o1-Open-Llama_reflection-llama as the base. The following models were included in the merge (see the sketch after this list):
- mergekit-community/mergekit-della_linear-uogzotg
- Undi95/Llama3-Unholy-8B-OAS
- Undi95/Meta-Llama-3.1-8B-Claude
- vicgalle/Humanish-Roleplay-Llama-3.1-8B

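della_linear prunes each task vector with magnitude-scaled drop probabilities (the DELLA step) and then combines the survivors linearly. A hedged sketch with two of the four models; every value below is a placeholder, not the recipe used here:

```yaml
# Illustrative sketch, not the card's actual configuration.
models:
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      weight: 0.5
      density: 0.5
  - model: vicgalle/Humanish-Roleplay-Llama-3.1-8B
    parameters:
      weight: 0.5
      density: 0.5
merge_method: della_linear
base_model: mergekit-community/test_ArliAI-RPMax_guidance_all_versions_plus_o1-Open-Llama_reflection-llama
parameters:
  epsilon: 0.05   # spread of per-parameter drop probabilities around 1 - density
  lambda: 1.0     # rescaling applied to the merged deltas
dtype: bfloat16
```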
llama
1
0

final_test_2_original_recipe

llama
1
0

final_test_3_original_recipe_more_reasoning

llama
1
0

final_test_4_original_old_recipe

llama
1
0

final_test_4_v2_original_old_recipe_humanish_base

llama
1
0

mergekit-della-zgowfmf

1
0

MT-Gen3-gemma-2-9B-Flip

1
0

mergekit-slerp-bcumecp

1
0

mergekit-model_stock-azgztvm

1
0

QwenSelfMerge

1
0

Qwen-ACTUALLY-Zeroed

1
0

test_4_smarts222

llama
1
0

mergekit-dare_ties-uyuzvch

1
0

mergekit-dare_ties-nlzuacx

This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method with unsloth/Llama-3.3-70B-Instruct as the base. The following models were included in the merge: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF and EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1.

llama
1
0

diabolic6045_ELN-AOC-CAIN

llama
1
0

mergekit-dare_ties-addnpep

llama
1
0

mergekit-task_arithmetic-qjeuqjw

1
0

mergekit-model_stock-uyjyafe

1
0

CodeMix-JPID-3B-Llama3.2

llama
1
0

mergekit-model_stock-rqvzadm

1
0

mergekit-model_stock-pjdbpjk

1
0

mergekit-ties-olhmfit

1
0

mergekit-ties-azrgvqf

1
0

Llama-3-LewdPlay-DeepSeek-R1-Distill-8B-abliterated

llama
1
0

mergekit-slerp-slxaccf

1
0

mergekit-slerp-dgmqjeb

llama
1
0

nsfw-another-sce-test-lol1

llama
1
0

mergekit-sce-vzszowb

1
0

mergekit-sce-iwzxvqr

base_model:trashpanda-org/Llama3-24B-Mullein-v1
1
0

mergekit-slerp-ljgqjtg

1
0

mergekit-slerp-ldgylwv

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: arcee-ai/sec-mistral-7b-instruct-1.6-epoch and cognitivecomputations/dolphin-2.8-mistral-7b-v02.

1
0

mergekit-slerp-mirtnuv

1
0

Mistral-Small-24B-Merge

1
0

r1-0.1776-pocket-version

llama
1
0

NSFW-FFS-w-hidden-Deepseek-Distill-NSFW-Redux

llama
1
0

mergekit-model_stock-kvunitr

1
0

nsfw-yet-another-test-might-be-bad

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with DreadPoor/Noxis-8B-LINEAR + grimjim/Llama-3-Instruct-abliteration-LoRA-8B as the base. The following models, each v000000/Llama-3.1-8B-Stheno-v3.4-abliterated with a different LoRA applied, were included in the merge:
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + Azazelle/Llama-3-8B-Abomination-LORA
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/health
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + moetezsa/Llama3instructonwikibio
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + kik41/lora-type-descriptive-llama-3-8b-v2
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + DreadPoor/Everything-COT-8B-r128-LoRA
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/professionalpsychology
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + kik41/lora-length-long-llama-3-8b-v2
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + eeeebbb2/3aff0ea7-4262-4abb-97b1-1879f340d32e
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/humansexuality
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/formallogic
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/anatomy
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/biology
- v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + ResplendentAI/SmartsLlama3

llama
1
0

allarma-3.2-hammered

llama
1
0

mergekit-slerp-zxgekkl

llama
1
0

Qwen2.5-14B-stock-v2

1
0

mergekit-slerp-dlsejld

1
0

mergekit-model_stock-adqzxpt

llama
1
0

UnslopNemo-Mag-Mell_T-1

1
0

Qwen2.5-14B-della-v2-dpo

This is a merge of pre-trained language models created using mergekit. This model was merged using the DELLA merge method with arcee-ai/Virtuoso-Small-v2 as the base. The following models were included in the merge: mergekit-community/Qwen2.5-14B-dpo-it and Qwen/Qwen2.5-14B-Instruct-1M.

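Roughly speaking, plain DELLA differs from della_linear in how the pruned deltas are recombined (sign election across task vectors rather than a straight weighted sum). An illustrative config for this trio, with assumed pruning values:

```yaml
# Illustrative sketch only.
models:
  - model: mergekit-community/Qwen2.5-14B-dpo-it
    parameters:
      density: 0.5
      weight: 0.5
  - model: Qwen/Qwen2.5-14B-Instruct-1M
    parameters:
      density: 0.5
      weight: 0.5
merge_method: della
base_model: arcee-ai/Virtuoso-Small-v2
parameters:
  epsilon: 0.05
  lambda: 1.0
dtype: bfloat16
```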
1
0

QwQ-slerp1

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Qwen/QwQ-32B and Qwen/Qwen2.5-32B-Instruct.

1
0

qwq-slerp-3

1
0

Qwen2.5-test-14b-it

1
0

MN-Hekate-Nykhia-17B

1
0

Hermes-3-Remix-L3.2-3b

llama
1
0

MN-Hekate-Daidalos-17B

1
0

mergekit-dare_ties-lmociuf

1
0

mergekit-dare_ties-oqggofa

1
0

Qwen2.5-7B-ties

1
0

MN-Hekate-Episkopos-17B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with mergekit-community/MN-Hekate-Damnomeneia-17B as the base. The following models were included in the merge:
- nbeerbower/mistral-nemo-bophades-12B
- mistralai/Mistral-Nemo-Base-2407
- ReadyArt/Forgotten-Abomination-12B-v4.0
- Nitral-AI/Captain-ErisViolet-GRPO-v0.420

1
0

mergekit-task_arithmetic-yxycruu

1
0

mergekit-karcher-jhklzwv

1
0

Qwen2.5-32B-gokgok-step1

1
0

Qwen2.5-32B-gokgok-step2

1
0

ignore_L3.x-Monk-70B

llama
1
0

mergekit-dare_ties-zrurbjl

1
0

mergekit-dare_ties-afgxxsc

1
0

mergekit-slerp-hkqkozo

1
0

mergekit-model_stock-jlodpmg

1
0

mergekit-dare_ties-fikucxa

This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method with mrfakename/mistral-small-3.1-24b-instruct-2503-hf as the base. The following models were included in the merge: ReadyArt/Forgotten-Safeword-24B-v4.0, ReadyArt/Broken-Tutu-24B, and Sorawiz/MistralCreative-24B-Chat.

1
0

mergekit-dare_ties-xejqqxa

This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method with Sorawiz/MistralCreative-24B-Chat as the base. The following models were included in the merge: darkc0de/BlackXorDolphTronGOAT, ReadyArt/Forgotten-Safeword-24B-v4.0, and aixonlab/Eurydice-24b-v2.

1
0

mergekit-ties-lhhtrme

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with mistralai/Mistral-7B-v0.1 as the base. The following models were included in the merge: mistralai/Mistral-7B-Instruct-v0.2 and BioMistral/BioMistral-7B.

1
0

Slush-ChatWaifu-Chronos

0
5

Deutscher-Pantheon-12B

0
3

mergekit-model_stock-pvcszfh

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with rombodawg/Rombos-LLM-V2.5-Qwen-32b as the base. The following models were included in the merge: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0 and peakji/steiner-32b-preview.

0
3

ohnoes_now_nsfw

llama
0
3

nsfw_plz_gguf_me

llama
0
3

MN-Hecate-Chthonia-12B

0
3

mergekit-slerp-lvhhlmq

0
2

TopEvolution

0
2

Qwen2-1.5B-RHSD

0
2

L3-Boshima-a

llama
0
2

Roci-Maxx

0
2

L3.1-Pneuma-8B-v1

llama
0
2

GutenBerg_Nyxora_magnum-v4-27b

This is a merge of pre-trained language models created using mergekit. This model was merged using the linear merge method with anthracite-org/magnum-v4-27b as the base. The following model was included in the merge: DazzlingXeno/GutenBergNyxora.

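Linear merging is a plain weighted average of the checkpoints. A minimal illustrative config for this pair; the even split is an assumption, not the card's values:

```yaml
# Illustrative sketch only.
models:
  - model: anthracite-org/magnum-v4-27b
    parameters:
      weight: 0.5
  - model: DazzlingXeno/GutenBergNyxora
    parameters:
      weight: 0.5
merge_method: linear
parameters:
  normalize: true   # rescale weights to sum to 1
dtype: bfloat16
```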
0
2

UnslopNemo-v4.1-Magnum-v4-12B

0
2

mergekit-della_linear-sxmadrj

llama
0
2

Qwen2.5-14B-Merge

0
2

hopefully_humanish-rp-nsfw-test-v-retry

llama
0
2

R1-ImpishMind-8B

llama
0
2

Slush-ChatWaifu-Rocinante-sunfall-Wayfarer

0
2

Slush-Sunfall-Rocinante-GGLD-12B

0
2

Slush-FallMix-12B

0
2

24B-MS-PRO-V0.01

0
2

2xPIMPY3xBAPE-OPP5

0
2

mergekit-slerp-kvkcnhb

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: elinas/Chronos-Gold-12B-1.0 and crestf411/MN-Slush.

0
2

MN-Sappho-a-12B

0
2

MN-Sappho-g2-12B

0
2

MN-Chthonia-12B

0
2

nsfw-i-mean-it-plz-kill-me

llama
0
2

MN-Hekate-Panopaia-12B

0
2

Qwen2.5-14B-dpo-it-ties

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with Qwen/Qwen2.5-14B as the base. The following model was included in the merge: mergekit-community/Qwen2.5-14B-dpo-it.

0
2

mergekit-sce-sudfgqi

0
2

Irix-12B_Slush

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: crestf411/MN-Slush and DreadPoor/Irix-12B-ModelStock.

0
2

mergekit-ties-jnhzatj

llama
0
1

mergekit-slerp-rijglhb

0
1

mergekit-slerp-ebgdloh

0
1

L3.1-Romes-Ninomos-Maxxing

llama
0
1

Berry-Spark-7B-Fix

0
1

L3.1-Vulca-Umboshima-8B-bf16

llama
0
1

L3.1-Artemis-a-8B

llama
0
1

L3.1-Artemis-b-8B

llama
0
1

L3.1-Boshima-b

llama
0
1

L3.1-Boshima-b-FIX

llama
0
1

L3.1-Artemis-dcd-12B

llama
0
1

L3.1-Artemis-faustus-8B

llama
0
1

L3.1-15B-EtherealMaid-t0.0001-alpha

llama
0
1

SthenoMix3.3

llama
0
1

NM-StarUnleashed

0
1

L3.1-Artemis-e2-8B

llama
0
1

Qwen2.5-Mavapy-b-7B

0
1

Q2.5-14B-Evalternagar

0
1

qwen2.5-11B-Mzy

0
1

mergekit-slerp-xnqoryq

0
1

mergekit-slerp-hayztti

llama
0
1

LLaMa-3.1-Instruct-Zeroed-13B

This is a merge of pre-trained language models created using mergekit. This model was merged using the passthrough merge method. The following model was included in the merge: unsloth/Meta-Llama-3.1-8B-Instruct.

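Passthrough copies layer slices verbatim, which is how a single 8B checkpoint gets depth-upscaled toward a ~13B frankenmerge; the "Zeroed" naming in this family usually refers to scaling o_proj/down_proj to zero in the duplicated span so the repeated layers initially pass the residual stream through untouched. An illustrative slice layout (layer ranges assumed, not taken from the card):

```yaml
# Illustrative sketch only: repeat a middle span of layers to deepen the model.
slices:
  - sources:
      - model: unsloth/Meta-Llama-3.1-8B-Instruct
        layer_range: [0, 24]
  - sources:
      - model: unsloth/Meta-Llama-3.1-8B-Instruct
        layer_range: [8, 24]   # duplicated span
        parameters:
          scale:
            - filter: o_proj
              value: 0.0   # silence attention output of duplicated layers
            - filter: down_proj
              value: 0.0   # silence MLP output of duplicated layers
            - value: 1.0
  - sources:
      - model: unsloth/Meta-Llama-3.1-8B-Instruct
        layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```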
llama
0
1

oculus-alpha

0
1

mergekit-slerp-kdchnjo

0
1

mergekit-model_stock-rxbbxes

llama
0
1

another_nsfw_test

llama
0
1

ol_faithful_nsfw_32bit

llama
0
1

mergekit-task_arithmetic-haaopre

llama
0
1

nsfw-merged-test

llama
0
1

nsfw_merge_testv2

llama
0
1

mergekit-passthrough-ywynqau

llama
0
1

L3.1-Orion-a-8B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with P0x0/Epos-8b as the base. The following models were included in the merge: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B, Sao10K/L3-8B-Lunaris-v1, and mergekit-community/L3-Boshima-a.

llama
0
1

because-im-bored-nsfw2-linear

This is a merge of pre-trained language models created using mergekit. This model was merged using the linear merge method. The following models, each the same checkpoint with a different LoRA applied, were included in the merge:
- mergekit-community/because_im_bored_nsfw1 + Azazelle/Llama-3-LongStory-LORA
- mergekit-community/because_im_bored_nsfw1 + kik41/lora-type-descriptive-llama-3-8b-v2
- mergekit-community/because_im_bored_nsfw1 + kik41/lora-length-long-llama-3-8b-v2

llama
0
1

llasa-3b-upscaled

This is a merge of pre-trained language models created using mergekit. This model was merged using the passthrough merge method. The following model was included in the merge: srinivasbilla/llasa-3b.

llama
0
1

mergekit-linear-enaoxvi

llama
0
1

Llama3.1-16B-Upscaled

llama
0
1

mergekit-model_stock-nvgaatl

0
1

mergekit-sce-azzpiqv

llama
0
1

DeepVeo-R1-B

This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method with Qwen/Qwen2.5-1.5B-Instruct as the base. The following models were included in the merge: Alfitaria/Q25-1.5B-VeoLu and deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B.

0
1

L3.1-Artemis-h-8B

llama
0
1

ChatWaifu-Wayfarer-Sunfall

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/ChatWaifu-Wayfarer-12B and crestf411/nemo-sunfall-v0.6.1.

0
1

mergekit-slerp-madwjrw

0
1

nsfw-i-hate-my-life-v2

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/L3.1-Artemis-h-8B and mergekit-community/nsfw-i-hate-my-life-v1.

llama
0
1

AngelSlayer-Slush-12B

0
1

Llama-3-LewdPlay-evo-DeepSeek-R1-Distill-8B

llama
0
1

CW-Stock

0
1

Slush-FallMix-Test_V2_12B

0
1

Slush-FallMix-Test_V5_12B

0
1

mergekit-ties-vbqvheo

0
1

Slush-FallMix-Test_V6c_12B

0
1

Slush-FallMix-Fire_Edition_1.0-12B

0
1

Llama3.1-8B-NormalMix

llama
0
1

mergekit-della-nwsztat

0
1

MN-Sappho-e-12B

0
1

nsfw-sce-test-2

llama
0
1

Llama-3.3-Super-Mini-Instruct

llama
0
1

MN-Sappho-jlcj-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge: mergekit-community/MN-Sappho-l-12B, mergekit-community/MN-Sappho-j-12B, and mergekit-community/MN-Sappho-c-12B.

0
1

MN-Sappho-g3-12B

0
1

L3.1-Athena-c-8B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with mergekit-community/L3.1-Athena-b-8B as the base. The following models were included in the merge:
- Skywork/Skywork-o1-Open-Llama-3.1-8B
- DavidAU/L3-Dark-Planet-8B
- DavidAU/DeepSeek-BlackRoot-R1-Distill-Llama-3.1-8B
- DavidAU/L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power
- DavidAU/L3.1-RP-Hero-BigTalker-8B
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- MathGenie/MathCoder2-Llama-3-8B
- mergekit-community/L3.1-Orion-a-8B
- Skywork/Skywork-Critic-Llama-3.1-8B
- meta-llama/Llama-3.1-8B
- mergekit-community/L3.1-Artemis-h-8B
- Sao10K/L3-8B-Lunaris-v1
- mergekit-community/L3.1-Athena-a-8B

llama
0
1

L3.1-Athena-d-8B

llama
0
1

L3.1-Athena-e-8B

llama
0
1

L3.1-Athena-h-8B

llama
0
1

L3.1-Athena-i-8B

llama
0
1

L3.1-Athena-k-8B

llama
0
1

MN-Sappho-n-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with mistralai/Mistral-Nemo-Base-2407 as the base. The following models were included in the merge:
- HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- inflatebot/MN-12B-Mag-Mell-R1
- LatitudeGames/Wayfarer-12B
- mistralai/Mistral-Nemo-Instruct-2407
- Khetterman/AbominationScience-12B-v4
- Nitral-Archive/Diogenes-12B
- yuyouyu/Mistral-Nemo-BD-RP
- mergekit-community/MN-Sappho-j-12B
- DavidAU/MN-Dark-Planet-TITAN-12B
- ToastyPigeon/Sto-vo-kor-12B
- Khetterman/DarkAtom-12B-v3
- nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
- mergekit-community/MN-Sappho-g3-12B
- PygmalionAI/Eleusis-12B
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b

0
1

Llama3.3-Grand-Skibidi-70B

llama
0
1

MN-Ephemeros-12B

0
1

mergekit-della_linear-gznziez

This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear DELLA merge method with IlyaGusev/saiga_nemo_12b as the base. The following models were included in the merge: MarinaraSpaghetti/NemoMix-Unleashed-12B, TheDrummer/Rocinante-12B-v1.1, and Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24.

0
1

nsfw-sce-test-2-redux

llama
0
1

Deepseek-R1-Distill-NSFW-RP-vRedux-Proper

llama
0
1

L3.1-Athena-l3-8B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with deepseek-ai/DeepSeek-R1-Distill-Llama-8B as the base. The following models were included in the merge:
- mergekit-community/L3.1-Athena-l2-8B
- Skywork/Skywork-Critic-Llama-3.1-8B
- Skywork/Skywork-o1-Open-Llama-3.1-8B
- mergekit-community/L3.1-Athena-j-8B + kik41/lora-type-descriptive-llama-3-8b-v2
- MathGenie/MathCoder2-Llama-3-8B
- meta-llama/Llama-3.1-8B
- mergekit-community/L3.1-Athena-d-8B + kik41/lora-length-long-llama-3-8b-v2
- DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B + kik41/lora-length-long-llama-3-8b-v2
- mergekit-community/L3.1-Athena-i-8B + kik41/lora-length-long-llama-3-8b-v2
- kromeurus/L3.1-Clouded-Uchtave-v0.1-8B + kik41/lora-type-descriptive-llama-3-8b-v2
- NousResearch/DeepHermes-3-Llama-3-8B-Preview
- nothingiisreal/L3.1-8B-Celeste-V1.5 + vincentyandex/lorallama3chunkednovelbs128
- DavidAU/L3.1-RP-Hero-BigTalker-8B + vincentyandex/lorallama3chunkednovelbs128
- AtlaAI/Selene-1-Mini-Llama-3.1-8B
- normster/RealGuardrails-Llama3.1-8B-SFT
- meta-llama/Llama-3.1-8B-Instruct

llama
0
1

L3.1-Athena-l4-8B

llama
0
1

L3.1-Athena-m-8B

llama
0
1

nsfw-back-to-model-stock

llama
0
1

L3.1-Athena-n-8B

llama
0
1

MN-Hekate-Kleidoukhos-12B

0
1

MN-Hekate-Enodia-12B

0
1

MN-Hekate-Ekklesia-12B

0
1

MN-Hekate-Deichteira-12B

0
1

Qwen2.5-14B-dpo-it-della

0
1

DeeperHermes3_R1_D_L3_8b

llama
0
1

Panth-L3-Blackroot-Nephra-MK.VI-8B

llama
0
1

Qwen2.5-14B-stock-v3

0
1

Qwen2.5-14B-della

This is a merge of pre-trained language models created using mergekit. This model was merged using the DELLA merge method with Qwen/Qwen2.5-14B as the base. The following model was included in the merge: Qwen/Qwen2.5-14B-Instruct.

0
1

MN-Hekate-Damnomeneia-17B

0
1

Qwen2.5-14B-ties

0
1

nsfw_ts_too_late_im_burnt_to_a_crisp

llama
0
1

MN-Hekate-Limenoskopos-17B

0
1

QwQ-openhands-Code-32B

0
1

openhands-Nemotron-32B-karcher

0
1

openhands-Nemotron-32B-karcher-300

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: all-hands/openhands-lm-32b-v0.1 and nvidia/OpenMath-Nemotron-32B.

0
1

MN-Hekate-Noctiluca-12B

0
1

Phi-4-reasoning-Line-14b-karcher

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method with huihui-ai/phi-4-abliterated as the base. The following models were included in the merge: AXCXEPT/phi-4-deepseek-R1K-RL-EZO, microsoft/Phi-4-reasoning-plus, and microsoft/Phi-4-reasoning.

0
1

MN-Hekate-Pandamateira-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method with mergekit-community/MN-Hekate-Noctiluca-12B-v2 as the base. The following models were included in the merge:
- Lambent/Gilded-Arsenic-12B
- mergekit-community/MN-Sappho-j-12B
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- nbeerbower/mistral-nemo-bophades-12B
- mistralai/Mistral-Nemo-Base-2407
- mergekit-community/MN-Hekate-Limenoskopos-17B
- nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B-v2

0
1

MN-Hekate-Tetrakephalos-12B

0
1