mergekit-community
Slush-ChatWaifu-Rocinante-sunfall
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/mergekit-slerp-uegcctd knifeayumu/Rocinante-12B-v1-nemo-sunfall-v0.6.1-SLERP The following YAML configuration was used to produce this model:
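The card's own YAML is elided from this listing. As a generic illustration only, a mergekit SLERP merge of two models is typically written as below; the layer ranges and interpolation curve are placeholders, not this model's actual settings:

```yaml
# Illustrative SLERP config (values are placeholders)
slices:
  - sources:
      - model: mergekit-community/mergekit-slerp-uegcctd
        layer_range: [0, 40]
      - model: knifeayumu/Rocinante-12B-v1-nemo-sunfall-v0.6.1-SLERP
        layer_range: [0, 40]
merge_method: slerp
base_model: mergekit-community/mergekit-slerp-uegcctd
parameters:
  t:
    - filter: self_attn   # interpolation factor per layer for attention weights
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5          # default for all other tensors
dtype: bfloat16
```

SLERP takes exactly two models; `base_model` selects which one supplies the tokenizer and non-interpolated tensors.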
Qwen3-7B-Instruct
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using Qwen/Qwen2.5-7B-Instruct as a base. The following models were included in the merge: Qwen/Qwen2.5-Coder-7B-Instruct Qwen/Qwen2.5-Math-7B-Instruct The following YAML configuration was used to produce this model:
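TIES sparsifies each model's delta from the base (controlled by `density`) and resolves sign conflicts before averaging. The actual YAML is elided here; a generic sketch with placeholder densities and weights:

```yaml
# Illustrative TIES config (densities/weights are placeholders)
models:
  - model: Qwen/Qwen2.5-Coder-7B-Instruct
    parameters:
      density: 0.5   # fraction of delta parameters kept
      weight: 0.5
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: Qwen/Qwen2.5-7B-Instruct
parameters:
  normalize: true
dtype: bfloat16
```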
Qwen3-1.5B-Instruct
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using Qwen/Qwen2.5-1.5B-Instruct as a base. The following models were included in the merge: Qwen/Qwen2.5-Math-1.5B-Instruct Qwen/Qwen2.5-Coder-1.5B-Instruct The following YAML configuration was used to produce this model:
Deepseek-R1-Distill-NSFW-RPv1
mergekit-model_stock-prczfmj
nsfw_merge_test_v4dot1
Alice-12B
mergekit-model_stock-ysywggg
uncensored-mix
Llama-3-DeepSeek-R1-Distill-8B-LewdPlay-Uncensored
nsfw-w-deepseek-r1-retry
mergekit-model_stock-fpfjlqs
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using mergekit-community/sexeh_time_testing + kik41/lora-type-descriptive-llama-3-8b-v2 as a base. The following models were included in the merge:
* mergekit-community/sexeh_time_testing + vannynakamura/finetunemodelsmedicalAI
* mergekit-community/sexeh_time_testing + Azazelle/Nimue-8B
* mergekit-community/sexeh_time_testing + BeastGokul/Bio-Medical-MultiModal-Llama-3-8B-Finetuned
* mergekit-community/sexeh_time_testing + ResplendentAI/SmartsLlama3
* mergekit-community/sexeh_time_testing + Azazelle/ANJIR-ADAPTER-128
The following YAML configuration was used to produce this model:
SuperQwen-2.5-1.5B
Llama3.1-1B-THREADRIPPER
HX-Mistral-3B_v0.1
Arisu-12B
Alicer-12B
mergekit-slerp-srinwor
mergekit-ties-cbdfmuk
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using OpenLLM-Ro/RoMistral-7b-Instruct as a base. The following models were included in the merge: mistralai/Mistral-7B-Instruct-v0.3 The following YAML configuration was used to produce this model:
Toppy-Synatra-RP
This is a merge of pre-trained language models created using mergekit. This model was merged using the NuSLERP merge method. The following models were included in the merge: Undi95/Toppy-M-7B maywell/Synatra-7B-v0.3-RP The following YAML configuration was used to produce this model:
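NuSLERP generalizes SLERP with explicit per-model weights (and, when a `base_model` is supplied, interpolates task vectors instead of raw weights). The actual configuration is elided from this listing; a minimal sketch with placeholder weights:

```yaml
# Illustrative NuSLERP config (weights are placeholders)
models:
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.5
  - model: maywell/Synatra-7B-v0.3-RP
    parameters:
      weight: 0.5
merge_method: nuslerp
dtype: bfloat16
```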
because_im_bored_nsfw1
BetterGPT2
L3.1-Artemis-e-8B
config_smart_ablit
Qwen2.5-14B-YOYO-DS-V6
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Azure99/Blossom-V6-14B as a base. The following models were included in the merge: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B qihoo360/Light-R1-14B-DS The following YAML configuration was used to produce this model:
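Model Stock derives its interpolation weights geometrically from each model's angle to the base, so per-model parameters are typically unnecessary. The card's YAML is elided here; a minimal sketch:

```yaml
# Illustrative Model Stock config (dtype assumed)
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - model: qihoo360/Light-R1-14B-DS
merge_method: model_stock
base_model: Azure99/Blossom-V6-14B
dtype: bfloat16
```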
mergekit-dare_ties-mgtzoms
mergekit-linear-iwfvdmg
Mistral-Small-2501-SCE-Mashup-24B
QwQ-32B-Preview-Instruct-Coder
good_mix_model_Stock
L3.1-Artemis-c-8B
mergekit-model_stock-injkqri
Llama-3-ThinkRoleplay-DeepSeek-R1-Distill-8B-abliterated
This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using Azazelle/Llama-3-8B-contaminated-roleplay as a base. The following models were included in the merge: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated The following YAML configuration was used to produce this model:
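DARE TIES randomly drops a fraction of each delta (1 − `density`) and rescales the remainder before TIES-style sign resolution. The actual YAML is elided; a generic sketch with placeholder values:

```yaml
# Illustrative DARE TIES config (density/weight are placeholders)
models:
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
    parameters:
      density: 0.53  # fraction of delta parameters retained
      weight: 1.0
merge_method: dare_ties
base_model: Azazelle/Llama-3-8B-contaminated-roleplay
dtype: bfloat16
```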
MN-Sappho-j-12B
MS-RP-whole
MN-Sappho-n2-12B
MN-Anathema-12B
MN-Hekate-Pyrtania-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using mergekit-community/MN-Hekate-Limenoskopos-12B as a base. The following models were included in the merge:
* mergekit-community/MN-Hekate-Nykhia-17B
* mergekit-community/MN-Hekate-Episkopos-17B
* mergekit-community/MN-Hekate-Nyktipolos-17B
* mergekit-community/MN-Hekate-Limenoskopos-17B
The following YAML configuration was used to produce this model:
mergekit-slerp-qamquir
mergekit-slerp-hwgrlbs
mergekit-passthrough-zpfenfn
Gemma-2-Ataraxy-ActionGemma-LoRA-merged
mergekit-slerp-aflqaqy
passthru-bored-plus-gguf-me-nsfw2-test
mergekit-ties-rraxdhv
dsasd
QwenSpanishR-1.5B
JAJUKA-WEWILLNEVERFORGETYOU-3B
MN-Sappho-n3-12B
llama-3.2-hammered-three
Qwen2.5-32B-qwq-it-slerp2
mergekit-model_stock-rxtwhlc
mergekit-model_stock-lvezkfe
QWQ-Rombos-ties-TEST2
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: rombodawg/Rombos-LLM-V2.5-Qwen-32b Qwen/QwQ-32B-Preview The following YAML configuration was used to produce this model:
nsfw-i-like-this-one-plz-kill-me
Slush-Lyra-Gutenberg-Bophades
L3.1-Athena-a-8B
MN-Nyx-Chthonia-12B
MethedUp
nsfw_merge_test_vFFS
hopefully_humanish-rp-nsfw-test-v1
NSFW-FFS-w-hidden-Deepseek-Distill-NSFW
Deepseek-Distill-NSFW-visible-w-NSFW-FFS
MN-Sappho-b-12B
MN-Sappho-n4-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using mistralai/Mistral-Nemo-Instruct-2407 as a base. The following models were included in the merge:
* mergekit-community/MN-Sappho-g3-12B
* mergekit-community/MN-Sappho-n2-12B
* mergekit-community/MN-Sappho-n3-12B
* mergekit-community/MN-Sappho-n-12B
* mergekit-community/MN-Sappho-j-12B
The following YAML configuration was used to produce this model:
Omega-Darker_Slush-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: crestf411/MN-Slush ReadyArt/Omega-Darker_The-Final-Directive-12B The following YAML configuration was used to produce this model:
mergekit-model_stock-qtseiad
mergekit-ties-vjlpsxw
Fimburs11V3
mergekit-slerp-oztfijl
L3.1-Vulca-Umboshima-8B
Moist_Theia_21B
mergekit-ties-ueirogz
mergekit-sce-vjeombg
MN-Sappho-c-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Khetterman/AbominationScience-12B-v4 as a base. The following models were included in the merge:
* LatitudeGames/Wayfarer-12B
* mistralai/Mistral-Nemo-Instruct-2407
* mergekit-community/MN-Sappho-b-12B
* mistralai/Mistral-Nemo-Base-2407
* inflatebot/MN-12B-Mag-Mell-R1
* mergekit-community/MN-Sappho-a-12B
The following YAML configuration was used to produce this model:
Tigers-Abliterated-9B
MN-Sappho-f-12B
Mistral-Small-2501-SCE-Mashup-2-24B
MN-Sappho-k-12B
MN-Sappho-l-12B
Mistral-Small-24B-Merge-V2
L3.1-Athena-f-8B
L3.1-Athena-g-8B
MN-Hekate-Geneteira-12B
Qwen2.5-14B-ties-1M
MN-Hekate-Limenoskopos-12B
MN-Hekate-Noctiluca-12B-v2
mergekit-slerp-vbaesvs
mergekit-slerp-mhsbcqc
mergekit-slerp-gpprpds
mergekit-ties-aspkrwz
mergekit-slerp-rxkhjnf
mergekit-slerp-ieauevl
LLaMa-3-Base-Zeroed-13B
TopEvolution-DPO-32K
TopEvolutionWiz
Qwen2-2B-Dolphin-RepleteCoder
mergekit-slerp-duaqshp
mergekit-ties-liyosfu
grok-13b-chat
mergekit-ties-mtbkpmt
Qwen2.5-32B-Instruct-Coder-Tie
mergekit-ties-duurpfl
test_ArliAI-RPMax_guidance_all_versions_plus_o1-Open-Llama_reflection-llama
mergekit-della_linear-uogzotg
mergekit-della_linear-dbwwdyo
Qwen2.5Minus2-0.5B-Instruct
QwenFocusedCoder2
mergekit-ties-hqqzvmi
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using Qwen/Qwen2.5-Coder-0.5B-Instruct as a base. The following models were included in the merge: Qwen/Qwen2.5-0.5B-Instruct The following YAML configuration was used to produce this model:
mergekit-slerp-fmrazcr
mergekit-dare_ties-psqsabe
mergekit-dare_ties-iezesml
mergekit-model_stock-bzcrthr
mergekit-dare_ties-ajgjgea
mergekit-slerp-wduahvh
mergekit-slerp-fgoimpq
mergekit-slerp-dehplhb
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: crestf411/MN-Slush nbeerbower/mistral-nemo-bophades3-12B The following YAML configuration was used to produce this model:
mergekit-model_stock-olgorhm
mergekit-slerp-zbeneng
mergekit-slerp-xeugntu
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/mergekit-sce-xgsvvmh mergekit-community/mergekit-modelstock-hwudfad The following YAML configuration was used to produce this model:
MS3-RP-half1
mergekit-slerp-rayqjvs
mergekit-model_stock-izmzpot
mergekit-slerp-ijgjytz
mergekit-ties-asjuuws
Qwen2.5-32B-it-pro
mergekit-slerp-wgdlrrb
Qwen2.5-14B-della-code
Qwen2.5-14B-1M
mergekit-sce-nsexkut
mergekit-dare_ties-psxhlrx
This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using Qwen/Qwen2.5-14B as a base. The following models were included in the merge: Krystalan/DRT-14B netease-youdao/Confucius-o1-14B huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated The following YAML configuration was used to produce this model:
Holgerim-Llama-7b
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: dominguesm/Canarim-7B-Instruct trollek/Holger-7B-v0.1 The following YAML configuration was used to produce this model:
MN-Hekate-Anassa-17B
mergekit-slerp-ryfxivm
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: sometimesanotion/Lamarck-14B-v0.7 sometimesanotion/Qwenvergence-14B-v11 The following YAML configuration was used to produce this model:
mergekit-della-efwskwi
Qwen2.5-7B-fuse-della
mergekit-slerp-znbfpqv
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/mergekit-slerp-pjjpegi mergekit-community/mergekit-slerp-irynmhm The following YAML configuration was used to produce this model:
mergekit-slerp-qregpbv
mergekit-passthrough-atuidyj
mergekit-della-brubxsv
mergekit-della-mhapspp
Mistral-rp-24b-karcher
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method using cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition as a base. The following models were included in the merge:
* Gryphe/Pantheon-RP-1.8-24b-Small-3.1
* PocketDoc/Dans-DangerousWinds-V1.1.1-24b
* ReadyArt/Omega-Darker_The-Final-Directive-24B
* huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
* trashpanda-org/MS-24B-Instruct-Mullein-v0
The following YAML configuration was used to produce this model:
Rombos-QWQ-ties-TEST
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: rombodawg/Rombos-LLM-V2.5-Qwen-32b Qwen/QwQ-32B-Preview The following YAML configuration was used to produce this model:
Qwen2.5-14B-Coder-Merge
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Qwen/Qwen2.5-14B as a base. The following models were included in the merge: rombodawg/Rombos-Coder-V2.5-Qwen-14b Qwen/Qwen2.5-Coder-14B-Instruct Qwen/Qwen2.5-Coder-14B The following YAML configuration was used to produce this model:
MN-Sappho-g-12B
UltraLong-Thinking
Irix-12B_Slush_V2
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: DreadPoor/Irix-12B-ModelStock mergekit-community/Slush-Lyra-Gutenberg-Bophades The following YAML configuration was used to produce this model:
VirtuosoSmall-InstructModelStock
nsfw-merge-v4dot1-w-deepseek-ablit
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/nsfw_merge_test_v4dot1 stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated The following YAML configuration was used to produce this model:
MN-Sappho-m-12B
mergekit-dare_ties-twfgema
mergekit-slerp-dclolyo
LLaMa-3-8B-First-8-Layers
mergekit-slerp-rfokseh
L3.1-Artemis-d-8B
mergekit-passthrough-dgucanu
sexeh_time_testing
mergekit-dare_ties-ocypetp
L3.1-Artemis-g-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using kromeurus/L3.1-Ablaze-Vulca-v0.1-8B as a base. The following models were included in the merge: mergekit-community/L3-Boshima-a Sao10K/L3-8B-Lunaris-v1 Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B The following YAML configuration was used to produce this model:
nsfw_merge_testv6
mergekit-passthrough-smmjedo
qwenben
dolphinllamaseekv2
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: deepseek-ai/DeepSeek-R1-Distill-Llama-8B cognitivecomputations/Dolphin3.0-Llama3.1-8B The following YAML configuration was used to produce this model:
mergekit-slerp-bfzxelq
mergekit-slerp-jqcnjsm
Cute_Experiment-8B
MN-Sappho-d-12B
censored-mix
L3.1-Athena-b-8B
L3.1-Athena-j-8B
L3.1-Athena-l-8B
L3.1-Athena-l2-8B
Llama3.3-Grand_Lemonade-70B
nsfw-i-mean-it-plz-kill-me-part2
mergekit-model_stock-tiwlqms
R1-JSON
mergekit-slerp-kxiunve
mergekit-slerp-dieybqi
mergekit-slerp-yebtzzv
mergekit-slerp-gmjodqj
mergekit-slerp-dtieltq
mergekit-slerp-emgmhsf
mergekit-slerp-zwkhacc
mergekit-slerp-wahogcx
mergekit-slerp-uwupwsk
mergekit-slerp-bnhzjvv
mergekit-slerp-zzizhry
mergekit-slerp-jeyctse
llama-world
mergekit-slerp-ueqsixf
mergekit-slerp-qzxjuip
mergekit-slerp-kxeioog
dolphin-mistral-instruct-7b
mergekit-slerp-aywerbb
mergekit-slerp-flctqsu
mergekit-slerp-ynceepa
mergekit-slerp-fodinzo
mergekit-dare_ties-ymiqjtz
mergekit-ties-cmdmayc
mergekit-slerp-ojqhjfr
mergekit-passthrough-anunwkh
mergekit-passthrough-dmirwnd
Llama3-13B-ku
Hermes-2-Pro-Llama-3-13B
mergekit-ties-ujwvugo
LLaMa-3-8B-First-4-Layers
L3-Inverted-Rainbow-RP-v2-OVA-8B
Qwen2-2B-Dolphin-Hercules
Qwen2-2B-RepleteCoder-Hercules
Berry-Spark-7B
mergekit-slerp-zbuqguo
SonnyD
mergekit-ties-gxhsjzj
mergekit-della-kstssvv
mergekit-ties-oysoxmc
mergekit-slerp-epibiuy
mergekit-della_linear-iwescit
mergekit-slerp-wphccbj
mergekit-slerp-qtidaqf
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: TheDrummer/UnslopNemo-12B-v4.1 anthracite-org/magnum-v4-12b The following YAML configuration was used to produce this model:
Qwen2.5-32B-Instruct-Coder-Merge-Tool-use
Qwen2.5-7B-Instruct-Coder-Merge-Tool-use
mergekit-della_linear-hvzpnws
mergekit-linear-ugyqudc
mergekit-ties-htjjeox
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using chuanli11/Llama-3.2-3B-Instruct-uncensored as a base. The following models were included in the merge: bunnycore/Llama-3.2-3B-Mix The following YAML configuration was used to produce this model:
mergekit-della_linear-vpjjtsa
final_test_ArliAI-RPMax_guidance_all_versions_plus_top_3_models
This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear DELLA merge method using mergekit-community/test_ArliAI-RPMax_guidance_all_versions_plus_o1-Open-Llama_reflection-llama as a base. The following models were included in the merge:
* mergekit-community/mergekit-della_linear-uogzotg
* Undi95/Llama3-Unholy-8B-OAS
* Undi95/Meta-Llama-3.1-8B-Claude
* vicgalle/Humanish-Roleplay-Llama-3.1-8B
The following YAML configuration was used to produce this model:
final_test_2_original_recipe
final_test_3_original_recipe_more_reasoning
final_test_4_original_old_recipe
final_test_4_v2_original_old_recipe_humanish_base
mergekit-della-zgowfmf
MT-Gen3-gemma-2-9B-Flip
mergekit-slerp-bcumecp
mergekit-model_stock-azgztvm
QwenSelfMerge
Qwen-ACTUALLY-Zeroed
test_4_smarts222
mergekit-dare_ties-uyuzvch
mergekit-dare_ties-nlzuacx
This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using unsloth/Llama-3.3-70B-Instruct as a base. The following models were included in the merge: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 The following YAML configuration was used to produce this model:
diabolic6045_ELN-AOC-CAIN
mergekit-dare_ties-addnpep
mergekit-task_arithmetic-qjeuqjw
mergekit-model_stock-uyjyafe
CodeMix-JPID-3B-Llama3.2
mergekit-model_stock-rqvzadm
mergekit-model_stock-pjdbpjk
mergekit-ties-olhmfit
mergekit-ties-azrgvqf
Llama-3-LewdPlay-DeepSeek-R1-Distill-8B-abliterated
mergekit-slerp-slxaccf
mergekit-slerp-dgmqjeb
nsfw-another-sce-test-lol1
mergekit-sce-vzszowb
mergekit-sce-iwzxvqr
mergekit-slerp-ljgqjtg
mergekit-slerp-ldgylwv
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: arcee-ai/sec-mistral-7b-instruct-1.6-epoch cognitivecomputations/dolphin-2.8-mistral-7b-v02 The following YAML configuration was used to produce this model:
mergekit-slerp-mirtnuv
Mistral-Small-24B-Merge
r1-0.1776-pocket-version
NSFW-FFS-w-hidden-Deepseek-Distill-NSFW-Redux
mergekit-model_stock-kvunitr
nsfw-yet-another-test-might-be-bad
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using DreadPoor/Noxis-8B-LINEAR + grimjim/Llama-3-Instruct-abliteration-LoRA-8B as a base. The following models were included in the merge:
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + Azazelle/Llama-3-8B-Abomination-LORA
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/health
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + moetezsa/Llama3instructonwikibio
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + kik41/lora-type-descriptive-llama-3-8b-v2
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + DreadPoor/Everything-COT-8B-r128-LoRA
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/professionalpsychology
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + kik41/lora-length-long-llama-3-8b-v2
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + eeeebbb2/3aff0ea7-4262-4abb-97b1-1879f340d32e
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/humansexuality
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/formallogic
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/anatomy
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + surya-narayanan/biology
* v000000/Llama-3.1-8B-Stheno-v3.4-abliterated + ResplendentAI/SmartsLlama3
The following YAML configuration was used to produce this model:
allarma-3.2-hammered
mergekit-slerp-zxgekkl
Qwen2.5-14B-stock-v2
mergekit-slerp-dlsejld
mergekit-model_stock-adqzxpt
UnslopNemo-Mag-Mell_T-1
Qwen2.5-14B-della-v2-dpo
This is a merge of pre-trained language models created using mergekit. This model was merged using the DELLA merge method using arcee-ai/Virtuoso-Small-v2 as a base. The following models were included in the merge: mergekit-community/Qwen2.5-14B-dpo-it Qwen/Qwen2.5-14B-Instruct-1M The following YAML configuration was used to produce this model:
QwQ-slerp1
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Qwen/QwQ-32B Qwen/Qwen2.5-32B-Instruct The following YAML configuration was used to produce this model:
qwq-slerp-3
Qwen2.5-test-14b-it
MN-Hekate-Nykhia-17B
Hermes-3-Remix-L3.2-3b
MN-Hekate-Daidalos-17B
mergekit-dare_ties-lmociuf
mergekit-dare_ties-oqggofa
Qwen2.5-7B-ties
MN-Hekate-Episkopos-17B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using mergekit-community/MN-Hekate-Damnomeneia-17B as a base. The following models were included in the merge:
* nbeerbower/mistral-nemo-bophades-12B
* mistralai/Mistral-Nemo-Base-2407
* ReadyArt/Forgotten-Abomination-12B-v4.0
* Nitral-AI/Captain-ErisViolet-GRPO-v0.420
The following YAML configuration was used to produce this model:
mergekit-task_arithmetic-yxycruu
mergekit-karcher-jhklzwv
Qwen2.5-32B-gokgok-step1
Qwen2.5-32B-gokgok-step2
ignore_L3.x-Monk-70B
mergekit-dare_ties-zrurbjl
mergekit-dare_ties-afgxxsc
mergekit-slerp-hkqkozo
mergekit-model_stock-jlodpmg
mergekit-dare_ties-fikucxa
This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using mrfakename/mistral-small-3.1-24b-instruct-2503-hf as a base. The following models were included in the merge: ReadyArt/Forgotten-Safeword-24B-v4.0 ReadyArt/Broken-Tutu-24B Sorawiz/MistralCreative-24B-Chat The following YAML configuration was used to produce this model:
mergekit-dare_ties-xejqqxa
This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using Sorawiz/MistralCreative-24B-Chat as a base. The following models were included in the merge: darkc0de/BlackXorDolphTronGOAT ReadyArt/Forgotten-Safeword-24B-v4.0 aixonlab/Eurydice-24b-v2 The following YAML configuration was used to produce this model:
mergekit-ties-lhhtrme
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using mistralai/Mistral-7B-v0.1 as a base. The following models were included in the merge: mistralai/Mistral-7B-Instruct-v0.2 BioMistral/BioMistral-7B The following YAML configuration was used to produce this model:
Slush-ChatWaifu-Chronos
Deutscher-Pantheon-12B
mergekit-model_stock-pvcszfh
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using rombodawg/Rombos-LLM-V2.5-Qwen-32b as a base. The following models were included in the merge: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0 peakji/steiner-32b-preview The following YAML configuration was used to produce this model:
ohnoes_now_nsfw
nsfw_plz_gguf_me
MN-Hecate-Chthonia-12B
mergekit-slerp-lvhhlmq
TopEvolution
Qwen2-1.5B-RHSD
L3-Boshima-a
Roci-Maxx
L3.1-Pneuma-8B-v1
GutenBerg_Nyxora_magnum-v4-27b
This is a merge of pre-trained language models created using mergekit. This model was merged using the linear merge method using anthracite-org/magnum-v4-27b as a base. The following models were included in the merge: DazzlingXeno/GutenBergNyxora The following YAML configuration was used to produce this model:
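A linear merge is a plain weighted average of the listed checkpoints. The actual YAML is elided from this listing; a minimal sketch with placeholder weights:

```yaml
# Illustrative linear merge config (weights are placeholders)
models:
  - model: anthracite-org/magnum-v4-27b
    parameters:
      weight: 0.5
  - model: DazzlingXeno/GutenBergNyxora
    parameters:
      weight: 0.5
merge_method: linear
parameters:
  normalize: true  # rescale weights to sum to 1
dtype: bfloat16
```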
UnslopNemo-v4.1-Magnum-v4-12B
mergekit-della_linear-sxmadrj
Qwen2.5-14B-Merge
hopefully_humanish-rp-nsfw-test-v-retry
R1-ImpishMind-8B
Slush-ChatWaifu-Rocinante-sunfall-Wayfarer
Slush-Sunfall-Rocinante-GGLD-12B
Slush-FallMix-12B
24B-MS-PRO-V0.01
2xPIMPY3xBAPE-OPP5
mergekit-slerp-kvkcnhb
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: elinas/Chronos-Gold-12B-1.0 crestf411/MN-Slush The following YAML configuration was used to produce this model:
MN-Sappho-a-12B
MN-Sappho-g2-12B
MN-Chthonia-12B
nsfw-i-mean-it-plz-kill-me
MN-Hekate-Panopaia-12B
Qwen2.5-14B-dpo-it-ties
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using Qwen/Qwen2.5-14B as a base. The following models were included in the merge: mergekit-community/Qwen2.5-14B-dpo-it The following YAML configuration was used to produce this model:
mergekit-sce-sudfgqi
Irix-12B_Slush
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: crestf411/MN-Slush DreadPoor/Irix-12B-ModelStock The following YAML configuration was used to produce this model:
mergekit-ties-jnhzatj
mergekit-slerp-rijglhb
mergekit-slerp-ebgdloh
L3.1-Romes-Ninomos-Maxxing
Berry-Spark-7B-Fix
L3.1-Vulca-Umboshima-8B-bf16
L3.1-Artemis-a-8B
L3.1-Artemis-b-8B
L3.1-Boshima-b
L3.1-Boshima-b-FIX
L3.1-Artemis-dcd-12B
L3.1-Artemis-faustus-8B
L3.1-15B-EtherealMaid-t0.0001-alpha
SthenoMix3.3
NM-StarUnleashed
L3.1-Artemis-e2-8B
Qwen2.5-Mavapy-b-7B
Q2.5-14B-Evalternagar
qwen2.5-11B-Mzy
mergekit-slerp-xnqoryq
mergekit-slerp-hayztti
LLaMa-3.1-Instruct-Zeroed-13B
This is a merge of pre-trained language models created using mergekit. This model was merged using the passthrough merge method. The following models were included in the merge: unsloth/Meta-Llama-3.1-8B-Instruct The following YAML configuration was used to produce this model:
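Passthrough merges copy layers verbatim, which is how a single 8B checkpoint can be stacked into a larger "upscaled" model. The actual layer split is in the elided YAML; the ranges below are placeholders for illustration only:

```yaml
# Illustrative passthrough self-stack (layer ranges are placeholders)
slices:
  - sources:
      - model: unsloth/Meta-Llama-3.1-8B-Instruct
        layer_range: [0, 24]
  - sources:
      - model: unsloth/Meta-Llama-3.1-8B-Instruct
        layer_range: [8, 32]   # overlapping range duplicates middle layers
merge_method: passthrough
dtype: bfloat16
```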
oculus-alpha
mergekit-slerp-kdchnjo
mergekit-model_stock-rxbbxes
another_nsfw_test
ol_faithful_nsfw_32bit
mergekit-task_arithmetic-haaopre
nsfw-merged-test
nsfw_merge_testv2
mergekit-passthrough-ywynqau
L3.1-Orion-a-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using P0x0/Epos-8b as a base. The following models were included in the merge: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B Sao10K/L3-8B-Lunaris-v1 mergekit-community/L3-Boshima-a The following YAML configuration was used to produce this model:
because-im-bored-nsfw2-linear
This is a merge of pre-trained language models created using mergekit. This model was merged using the linear merge method. The following models were included in the merge:
* mergekit-community/because_im_bored_nsfw1 + Azazelle/Llama-3-LongStory-LORA
* mergekit-community/because_im_bored_nsfw1 + kik41/lora-type-descriptive-llama-3-8b-v2
* mergekit-community/because_im_bored_nsfw1 + kik41/lora-length-long-llama-3-8b-v2
The following YAML configuration was used to produce this model:
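Several merges in this collection apply LoRA adapters on the fly using mergekit's `base+adapter` syntax, where a `+` joins a base checkpoint with a LoRA repo. As an illustration only (weights are placeholders, not this model's settings):

```yaml
# Illustrative linear merge over LoRA-patched variants (weights are placeholders)
models:
  - model: mergekit-community/because_im_bored_nsfw1+Azazelle/Llama-3-LongStory-LORA
    parameters:
      weight: 1.0
  - model: mergekit-community/because_im_bored_nsfw1+kik41/lora-type-descriptive-llama-3-8b-v2
    parameters:
      weight: 1.0
merge_method: linear
parameters:
  normalize: true
dtype: bfloat16
```

Each `base+lora` entry is materialized by applying the adapter to the base before the merge proper runs.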
llasa-3b-upscaled
This is a merge of pre-trained language models created using mergekit. This model was merged using the passthrough merge method. The following models were included in the merge: srinivasbilla/llasa-3b The following YAML configuration was used to produce this model:
mergekit-linear-enaoxvi
Llama3.1-16B-Upscaled
mergekit-model_stock-nvgaatl
mergekit-sce-azzpiqv
DeepVeo-R1-B
This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method using Qwen/Qwen2.5-1.5B-Instruct as a base. The following models were included in the merge: Alfitaria/Q25-1.5B-VeoLu deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B The following YAML configuration was used to produce this model:
L3.1-Artemis-h-8B
ChatWaifu-Wayfarer-Sunfall
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/ChatWaifu-Wayfarer-12B crestf411/nemo-sunfall-v0.6.1 The following YAML configuration was used to produce this model:
mergekit-slerp-madwjrw
nsfw-i-hate-my-life-v2
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: mergekit-community/L3.1-Artemis-h-8B mergekit-community/nsfw-i-hate-my-life-v1 The following YAML configuration was used to produce this model:
AngelSlayer-Slush-12B
Llama-3-LewdPlay-evo-DeepSeek-R1-Distill-8B
CW-Stock
Slush-FallMix-Test_V2_12B
Slush-FallMix-Test_V5_12B
mergekit-ties-vbqvheo
Slush-FallMix-Test_V6c_12B
Slush-FallMix-Fire_Edition_1.0-12B
Llama3.1-8B-NormalMix
mergekit-della-nwsztat
MN-Sappho-e-12B
nsfw-sce-test-2
Llama-3.3-Super-Mini-Instruct
MN-Sappho-jlcj-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge: mergekit-community/MN-Sappho-l-12B mergekit-community/MN-Sappho-j-12B mergekit-community/MN-Sappho-c-12B The following YAML configuration was used to produce this model:
MN-Sappho-g3-12B
L3.1-Athena-c-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using mergekit-community/L3.1-Athena-b-8B as a base. The following models were included in the merge:
* Skywork/Skywork-o1-Open-Llama-3.1-8B
* DavidAU/L3-Dark-Planet-8B
* DavidAU/DeepSeek-BlackRoot-R1-Distill-Llama-3.1-8B
* DavidAU/L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power
* DavidAU/L3.1-RP-Hero-BigTalker-8B
* deepseek-ai/DeepSeek-R1-Distill-Llama-8B
* Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
* MathGenie/MathCoder2-Llama-3-8B
* mergekit-community/L3.1-Orion-a-8B
* Skywork/Skywork-Critic-Llama-3.1-8B
* meta-llama/Llama-3.1-8B
* mergekit-community/L3.1-Artemis-h-8B
* Sao10K/L3-8B-Lunaris-v1
* mergekit-community/L3.1-Athena-a-8B
The following YAML configuration was used to produce this model:
L3.1-Athena-d-8B
L3.1-Athena-e-8B
L3.1-Athena-h-8B
L3.1-Athena-i-8B
L3.1-Athena-k-8B
MN-Sappho-n-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using mistralai/Mistral-Nemo-Base-2407 as a base. The following models were included in the merge:
* HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
* inflatebot/MN-12B-Mag-Mell-R1
* LatitudeGames/Wayfarer-12B
* mistralai/Mistral-Nemo-Instruct-2407
* Khetterman/AbominationScience-12B-v4
* Nitral-Archive/Diogenes-12B
* yuyouyu/Mistral-Nemo-BD-RP
* mergekit-community/MN-Sappho-j-12B
* DavidAU/MN-Dark-Planet-TITAN-12B
* ToastyPigeon/Sto-vo-kor-12B
* Khetterman/DarkAtom-12B-v3
* nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
* mergekit-community/MN-Sappho-g3-12B
* PygmalionAI/Eleusis-12B
* PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
The following YAML configuration was used to produce this model:
Llama3.3-Grand-Skibidi-70B
MN-Ephemeros-12B
mergekit-della_linear-gznziez
This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear DELLA merge method using IlyaGusev/saiga_nemo_12b as a base. The following models were included in the merge:
* MarinaraSpaghetti/NemoMix-Unleashed-12B
* TheDrummer/Rocinante-12B-v1.1
* Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
The following YAML configuration was used to produce this model:
nsfw-sce-test-2-redux
Deepseek-R1-Distill-NSFW-RP-vRedux-Proper
L3.1-Athena-l3-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using deepseek-ai/DeepSeek-R1-Distill-Llama-8B as a base. The following models were included in the merge:
* mergekit-community/L3.1-Athena-l2-8B
* Skywork/Skywork-Critic-Llama-3.1-8B
* Skywork/Skywork-o1-Open-Llama-3.1-8B
* mergekit-community/L3.1-Athena-j-8B + kik41/lora-type-descriptive-llama-3-8b-v2
* MathGenie/MathCoder2-Llama-3-8B
* meta-llama/Llama-3.1-8B
* mergekit-community/L3.1-Athena-d-8B + kik41/lora-length-long-llama-3-8b-v2
* DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B + kik41/lora-length-long-llama-3-8b-v2
* mergekit-community/L3.1-Athena-i-8B + kik41/lora-length-long-llama-3-8b-v2
* kromeurus/L3.1-Clouded-Uchtave-v0.1-8B + kik41/lora-type-descriptive-llama-3-8b-v2
* NousResearch/DeepHermes-3-Llama-3-8B-Preview
* nothingiisreal/L3.1-8B-Celeste-V1.5 + vincentyandex/lorallama3chunkednovelbs128
* DavidAU/L3.1-RP-Hero-BigTalker-8B + vincentyandex/lorallama3chunkednovelbs128
* AtlaAI/Selene-1-Mini-Llama-3.1-8B
* normster/RealGuardrails-Llama3.1-8B-SFT
* meta-llama/Llama-3.1-8B-Instruct
The following YAML configuration was used to produce this model:
L3.1-Athena-l4-8B
L3.1-Athena-m-8B
nsfw-back-to-model-stock
L3.1-Athena-n-8B
MN-Hekate-Kleidoukhos-12B
MN-Hekate-Enodia-12B
MN-Hekate-Ekklesia-12B
MN-Hekate-Deichteira-12B
Qwen2.5-14B-dpo-it-della
DeeperHermes3_R1_D_L3_8b
Panth-L3-Blackroot-Nephra-MK.VI-8B
Qwen2.5-14B-stock-v3
Qwen2.5-14B-della
This is a merge of pre-trained language models created using mergekit. This model was merged using the DELLA merge method using Qwen/Qwen2.5-14B as a base. The following models were included in the merge: Qwen/Qwen2.5-14B-Instruct The following YAML configuration was used to produce this model:
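DELLA prunes each delta with magnitude-dependent drop probabilities before merging. The actual YAML is elided from this listing; a generic sketch in which all values, including the DELLA-specific `epsilon` and `lambda` knobs, are placeholders:

```yaml
# Illustrative DELLA config (all values are placeholders)
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      density: 0.5   # expected fraction of delta parameters kept
      weight: 1.0
merge_method: della
base_model: Qwen/Qwen2.5-14B
parameters:
  epsilon: 0.05  # spread of drop probabilities around 1 - density
  lambda: 1.0    # rescaling factor for surviving deltas
dtype: bfloat16
```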
MN-Hekate-Damnomeneia-17B
Qwen2.5-14B-ties
nsfw_ts_too_late_im_burnt_to_a_crisp
MN-Hekate-Limenoskopos-17B
QwQ-openhands-Code-32B
openhands-Nemotron-32B-karcher
openhands-Nemotron-32B-karcher-300
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge: all-hands/openhands-lm-32b-v0.1 nvidia/OpenMath-Nemotron-32B The following YAML configuration was used to produce this model:
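The Karcher mean computes a Riemannian barycenter of the models' weights, so the configuration needs little beyond the model list. The actual YAML is elided; a minimal sketch (dtype assumed):

```yaml
# Illustrative Karcher Mean config
models:
  - model: all-hands/openhands-lm-32b-v0.1
  - model: nvidia/OpenMath-Nemotron-32B
merge_method: karcher
dtype: bfloat16
```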
MN-Hekate-Noctiluca-12B
Phi-4-reasoning-Line-14b-karcher
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method using huihui-ai/phi-4-abliterated as a base. The following models were included in the merge: AXCXEPT/phi-4-deepseek-R1K-RL-EZO microsoft/Phi-4-reasoning-plus microsoft/Phi-4-reasoning The following YAML configuration was used to produce this model:
MN-Hekate-Pandamateira-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using mergekit-community/MN-Hekate-Noctiluca-12B-v2 as a base. The following models were included in the merge:
* Lambent/Gilded-Arsenic-12B
* mergekit-community/MN-Sappho-j-12B
* nbeerbower/mistral-nemo-gutenberg-12B-v4
* nbeerbower/mistral-nemo-bophades-12B
* mistralai/Mistral-Nemo-Base-2407
* mergekit-community/MN-Hekate-Limenoskopos-17B
* nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B-v2
The following YAML configuration was used to produce this model: