yamatazen
FusionEngine 12B Lorablated
- Base Model: `yamatazen/FusionEngine-12B`
- LoRA Adapter: `nbeerbower/Mistral-Nemo-12B-abliterated-LORA`

The model is saved in `bfloat16` format and is ready for deployment or fine-tuning.
NeonMaid 12B V2
This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method, with C:\Users\yamat\Desktop\text-generation-webui\userdata\models\NeonMaid-12B as the base. The following models were included in the merge:

- C:\Users\yamat\Desktop\text-generation-webui\userdata\models\Orihime-Gutenberg-12B

The following YAML configuration was used to produce this model:
EsotericSage 12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the NearSwap merge method, with yamatazen/LinearWriter-12B as the base. The following models were included in the merge:

- yamatazen/ForgottenMaid-12B

The following YAML configuration was used to produce this model:
SnowElf 12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method, with yamatazen/HMS-Slerp-12B-v2 as the base. The following models were included in the merge:

- inflatebot/MN-12B-Mag-Mell-R1
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b

The following YAML configuration was used to produce this model:
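The configuration file itself is not reproduced in the card. As an illustrative sketch only (the `density` and `weight` values are assumptions, not the settings actually used), a mergekit TIES configuration for this merge could take this shape:

```yaml
merge_method: ties
base_model: yamatazen/HMS-Slerp-12B-v2
models:
  - model: inflatebot/MN-12B-Mag-Mell-R1
    parameters:
      density: 0.5
      weight: 0.33
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      density: 0.5
      weight: 0.33
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
    parameters:
      density: 0.5
      weight: 0.33
parameters:
  normalize: true
dtype: bfloat16
```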
LorablatedStock 12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method, with C:\Users\yamat\Desktop\text-generation-webui\userdata\models\HMS-Fusion-12B-Lorablated as the base. The following models were included in the merge:

- C:\Users\yamat\Desktop\text-generation-webui\userdata\models\ForgottenMaid-12B-Lorablated
- C:\Users\yamat\Desktop\text-generation-webui\userdata\models\FusionEngine-12B-Lorablated

The following YAML configuration was used to produce this model:
Gemma2-Snowflakes-9B
EtherealAurora 12B Lorablated
Created with this tool: https://huggingface.co/spaces/jukofyork/merge-lora
Gemma2-Snowflakes-9B-Q4_K_M-GGUF
EtherealAurora-12B-Lorablated-Q4_K_M-GGUF
Aurora SCE 12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method, with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as the base. The following models were included in the merge:

- LatitudeGames/Wayfarer-12B
- NeverSleep/Lumimaid-v0.2-12B
- Elizezen/Himeyuri-v0.1-12B
- inflatebot/MN-12B-Mag-Mell-R1

The following YAML configuration was used to produce this model:
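The card's own YAML is not included. A mergekit SCE configuration for this merge would plausibly look like the sketch below (the `select_topk` value is an assumption; the `+` syntax applies the LoRA adapter to the base model, as the card's base description indicates):

```yaml
merge_method: sce
base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
models:
  - model: LatitudeGames/Wayfarer-12B
  - model: NeverSleep/Lumimaid-v0.2-12B
  - model: Elizezen/Himeyuri-v0.1-12B
  - model: inflatebot/MN-12B-Mag-Mell-R1
parameters:
  select_topk: 0.1
dtype: bfloat16
```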
SnowElf-12B-Q4_K_M-GGUF
LinearWriter-12B
DellaMix-12B
EtherealAurora-12B-Q4_K_M-GGUF
ForgottenMaid 12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method, with yamatazen/LoyalMaid-12B as the base. The following models were included in the merge:

- ReadyArt/Forgotten-Safeword-12B-v4.0

The following YAML configuration was used to produce this model:
EsotericKnowledge-24B-Q3_K_M-GGUF
Shisa-K-12B
Aurora-SCE-12B-Q4_K_M-GGUF
FlickeringLight-14B-Q4_K_M-GGUF
Shisa-v2-Mistral-Nemo-12B-Abliterated-Q4_K_M-GGUF
SnowElf-12B-v2-Q4_K_M-GGUF
EtherealAurora-12B
Twilight-SCE-12B-v2
Luna-Karcher-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge:

- unsloth/Mistral-Nemo-Base-2407
- Elizezen/Himeyuri-v0.1-12B
- shisa-ai/shisa-v2-mistral-nemo-12b

The following YAML configuration was used to produce this model:
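The actual configuration is not shown in the card. A minimal mergekit sketch for a Karcher Mean merge of these models (illustrative only; Karcher Mean takes no base model, and any iteration parameters are left at their defaults here) might be:

```yaml
merge_method: karcher
models:
  - model: unsloth/Mistral-Nemo-Base-2407
  - model: Elizezen/Himeyuri-v0.1-12B
  - model: shisa-ai/shisa-v2-mistral-nemo-12b
dtype: bfloat16
```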
Orihime-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method, with shisa-ai/shisa-v2-mistral-nemo-12b as the base. The following models were included in the merge:

- Elizezen/Himeyuri-v0.1-12B

The following YAML configuration was used to produce this model:
BlueLight-12B
Gemma2-ObsidianLight-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

- yamatazen/Gemma2-Ataraxy-Psycho-9B
- yamatazen/Gemma2-Evelyn-9B

The following YAML configuration was used to produce this model:
EtherealAurora-12B-v3-Q4_K_M-GGUF
EtherealAurora-12B-v2
EsotericKnowledge-24B
NeonMaid-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method, with yamatazen/Orihime-12B as the base. The following models were included in the merge:

- C:\Users\yamat\Desktop\text-generation-webui\userdata\models\ForgottenMaid-12B
- Delta-Vector/Francois-PE-V2-Huali-12B
- Delta-Vector/Ohashi-NeMo-12B

The following YAML configuration was used to produce this model:
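The YAML itself is absent from the card. Model Stock needs no per-model weights, so an illustrative mergekit configuration for this merge (a sketch, not the actual file; the local Windows path is copied from the card) would be roughly:

```yaml
merge_method: model_stock
base_model: yamatazen/Orihime-12B
models:
  - model: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\ForgottenMaid-12B
  - model: Delta-Vector/Francois-PE-V2-Huali-12B
  - model: Delta-Vector/Ohashi-NeMo-12B
dtype: bfloat16
```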
EsotericLight-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method, with C:\Users\yamat\Desktop\text-generation-webui\userdata\models\Orihime-12B as the base. The following models were included in the merge:

- yamatazen/EtherealAurora-12B

The following YAML configuration was used to produce this model:
FusionEngine-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method, with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b as the base. The following models were included in the merge:

- Delta-Vector/Francois-PE-V2-Huali-12B

The following YAML configuration was used to produce this model:
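The referenced configuration is not reproduced in the card. Arcee Fusion merges exactly one model into a base, so an illustrative mergekit sketch for this merge (not the actual file used) stays very small:

```yaml
merge_method: arcee_fusion
base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
models:
  - model: Delta-Vector/Francois-PE-V2-Huali-12B
dtype: bfloat16
```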
Gemma2-Evelyn-9B-Q4_K_M-GGUF
BlueLight-12B-Q4_K_M-GGUF
Ayla-Light-12B-v2
Shisa-v2-Mistral-Nemo-12B-Abliterated
HMS-Fusion-12B
Twilight-SCE-12B-v2-Q4_K_M-GGUF
Shisa-Himeyuri-Nearswap-t0.0005-12B
This model was created to test the NearSwap method. This is a merge of pre-trained language models created using mergekit. This model was merged using the NearSwap merge method, with shisa-ai/shisa-v2-mistral-nemo-12b as the base. The following models were included in the merge:

- Elizezen/Himeyuri-v0.1-12B

The following YAML configuration was used to produce this model:
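The configuration is not shown, but the model name itself records the threshold (`t0.0005`). An illustrative mergekit NearSwap configuration consistent with the card would be:

```yaml
merge_method: nearswap
base_model: shisa-ai/shisa-v2-mistral-nemo-12b
models:
  - model: Elizezen/Himeyuri-v0.1-12B
parameters:
  t: 0.0005
dtype: bfloat16
```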
ForgottenMaid-12B-LoRA-Rank128
This is a LoRA adapter extracted from a language model using mergekit. It was extracted from yamatazen/ForgottenMaid-12B, with unsloth/Mistral-Nemo-Instruct-2407 as the base. The following command was used to extract this LoRA adapter:
EtherealAurora-12B-v2-Q4_K_M-GGUF
ElvenMaid-12B-v2-Q4_K_M-GGUF
Himeyuri-Magnum-12B
HMS-Slerp-12B
Emilia-Multislerp-12B
This is probably the first multislerp model on Hugging Face. This is a merge of pre-trained language models created using mergekit. This model was merged using the Multi-SLERP merge method, with yamatazen/Orihime-12B as the base. The following models were included in the merge:

- natong19/Mistral-Nemo-Instruct-2407-abliterated
- nbeerbower/mistral-nemo-gutenberg-12B-v4

The following YAML configuration was used to produce this model:
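The YAML is not included in the card. As a sketch only (the equal `weight` values are assumptions), a mergekit Multi-SLERP configuration for this merge might look like:

```yaml
merge_method: multislerp
base_model: yamatazen/Orihime-12B
models:
  - model: natong19/Mistral-Nemo-Instruct-2407-abliterated
    parameters:
      weight: 0.5
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight: 0.5
dtype: bfloat16
```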
Aurora SCE 12B V2
This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method, with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as the base. The following models were included in the merge:

- inflatebot/MN-12B-Mag-Mell-R1
- NeverSleep/Lumimaid-v0.2-12B
- Elizezen/Himeyuri-v0.1-12B
- cyberagent/Mistral-Nemo-Japanese-Instruct-2408
- nbeerbower/mistral-nemo-gutenberg-12B-v4

The following YAML configuration was used to produce this model:
ForgottenMaid-12B-Lorablated
- Base Model: `yamatazen/ForgottenMaid-12B`
- LoRA Adapter: `nbeerbower/Mistral-Nemo-12B-abliterated-LORA`

The model is saved in `bfloat16` format and is ready for deployment or fine-tuning.
PerpetualNight-12B-Q4_K_M-GGUF
Himeyuri-Magnum-12B-Q4_K_M-GGUF
ElvenMaid-12B-Stock-Q4_K_M-GGUF
Ayla-Light-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

- nbeerbower/mistral-nemo-gutenberg-12B-v4 + nbeerbower/Mistral-Nemo-12B-abliterated-LORA
- Elizezen/Himeyuri-v0.1-12B

The following YAML configuration was used to produce this model:
Gemma2-Alicia-9B
Ayla-Light-12B-v3
Orihime-Gutenberg-12B
ForgottenMaid-12B-v2
Shirayukihime-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method, with natong19/Mistral-Nemo-Instruct-2407-abliterated as the base. The following models were included in the merge:

- shisa-ai/shisa-v2-mistral-nemo-12b
- Elizezen/Himeyuri-v0.1-12B

The following YAML configuration was used to produce this model:
Ayla-Light-Extra
An experimental frankenmerge (upscaled) model. This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge:

- yamatazen/Ayla-Light-12B-Stock

The following YAML configuration was used to produce this model:
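The card's YAML is not reproduced. For a Passthrough upscale (frankenmerge), the configuration stacks overlapping layer slices of the same model; the slice ranges below are illustrative assumptions, not the ones actually used (Mistral-Nemo-based 12B models have 40 layers):

```yaml
merge_method: passthrough
slices:
  - sources:
      - model: yamatazen/Ayla-Light-12B-Stock
        layer_range: [0, 24]
  - sources:
      - model: yamatazen/Ayla-Light-12B-Stock
        layer_range: [16, 40]
dtype: bfloat16
```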
Eris-Light-12B-v2
Amelia-SCE-12B
Shirayuki-SCE-9B
EtherealMoon-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method, with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as the base. The following models were included in the merge:

- NeverSleep/Lumimaid-v0.2-12B
- HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- Elizezen/Himeyuri-v0.1-12B
- TheDrummer/Rocinante-12B-v1.1
- inflatebot/MN-12B-Mag-Mell-R1
- nbeerbower/mistral-nemo-gutenberg-12B-v4

The following YAML configuration was used to produce this model:
EtherealNight-12B
LoyalMaid-12B-Q4_K_M-GGUF
MidnightMoon-16B
Gemma2-Ataraxy-Psycho-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the DELLA merge method, with UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 as the base. The following models were included in the merge:

- lemon07r/Gemma-2-Ataraxy-v4d-9B
- ehristoforu/Gemma2-9B-it-psy10k-mentalhealth

The following YAML configuration was used to produce this model:
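The configuration itself is missing from the card. As a sketch under assumed values (`density`, `weight`, and `epsilon` are illustrative, not the actual settings), a mergekit DELLA configuration for this merge could look like:

```yaml
merge_method: della
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
models:
  - model: lemon07r/Gemma-2-Ataraxy-v4d-9B
    parameters:
      density: 0.5
      weight: 0.5
      epsilon: 0.05
  - model: ehristoforu/Gemma2-9B-it-psy10k-mentalhealth
    parameters:
      density: 0.5
      weight: 0.5
      epsilon: 0.05
dtype: bfloat16
```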
Gemma2-Alicia-9B-Q4_K_M-GGUF
LoyalMaid-12B
EtherealAurora-12B-v3
Ayla-Light-12B-Stock
FlickeringLight-14B
Shisa-DellaTest-12B
Iris-Light-12B
L3-GothicMaid-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method, with Sao10K/L3-8B-Stheno-v3.2 as the base. The following models were included in the merge:

- FPHam/L3-8B-Everything-COT
- HumanLLMs/Human-Like-LLama3-8B-Instruct

The following YAML configuration was used to produce this model:
L3-GothicMaid-8B-Q4_K_M-GGUF
yamatazen/L3-GothicMaid-8B-Q4_K_M-GGUF

This model was converted to GGUF format from `yamatazen/L3-GothicMaid-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
L3-GothicMaid-Upscaled-11B-Q4_K_M-GGUF
Himeyuri-Magnum-12B-v2-Q4_K_M-GGUF
ElvenMaid-12B-v3-Q4_K_M-GGUF
ElvenMaid-12B-v2
L3-GothicMaid-Upscaled-11B
ElvenMaid-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method, with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b as the base. The following models were included in the merge:

- yamatazen/LoyalMaid-12B
- inflatebot/MN-12B-Mag-Mell-R1
- yamatazen/Himeyuri-Magnum-12B

The following YAML configuration was used to produce this model:
SnowElf-12B-v2
Irida-SCE-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method, with IlyaGusev/gemma-2-9b-it-abliterated as the base. The following models were included in the merge:

- AXCXEPT/EZO-Humanities-9B-gemma-2-it
- AXCXEPT/EZO-Common-9B-gemma-2-it
- lemon07r/Gemma-2-Ataraxy-v4d-9B

The following YAML configuration was used to produce this model:
NightWind-12B
ElvenMaid-12B-v3
ElvenMaid-12B-Stock
StarrySky-12B
Twilight-SCE-12B
HMS-Slerp-12B-v2
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

- yamatazen/Himeyuri-Magnum-12B
- yamatazen/Shisa-v2-Mistral-Nemo-12B-Abliterated

The following YAML configuration was used to produce this model:
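The YAML is not included in the card. SLERP in mergekit interpolates between a base model and one other model, so an illustrative configuration for this merge (the choice of base and the `t` value are assumptions) could be:

```yaml
merge_method: slerp
base_model: yamatazen/Himeyuri-Magnum-12B
models:
  - model: yamatazen/Shisa-v2-Mistral-Nemo-12B-Abliterated
parameters:
  t: 0.5
dtype: bfloat16
```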
KnowledgeCore-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Arcee Fusion merge method, with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b as the base. The following models were included in the merge:

- inflatebot/MN-12B-Mag-Mell-R1

The following YAML configuration was used to produce this model:
Shisa-v2-Mistral-Nemo-12B-Lorablated
- Base Model: `shisa-ai/shisa-v2-mistral-nemo-12b`
- LoRA Adapter: `nbeerbower/Mistral-Nemo-12B-abliterated-LORA`

The model is saved in `bfloat16` format and is ready for deployment or fine-tuning.

Code for LoRA merging (Generated by Qwen3):
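The card's actual Qwen3-generated script is not reproduced here. A minimal sketch of the same LoRA merge, assuming `transformers`, `peft`, and `torch` are installed (repo IDs are the ones listed in the card; the output directory name is an assumption):

```python
# Hypothetical sketch of the LoRA merge described above; not the card's
# original script. Repo IDs come from the card, the output path is assumed.
from dataclasses import dataclass


@dataclass
class LoraMergeJob:
    base_model: str    # full-weight model the adapter is applied to
    lora_adapter: str  # LoRA adapter repo
    output_dir: str    # where the merged bfloat16 model is saved


def merge_lora(job: LoraMergeJob) -> None:
    # Imports are local so the module loads without the heavy dependencies.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(
        job.base_model, torch_dtype=torch.bfloat16
    )
    # Apply the adapter, fold its weights into the base, and drop the wrapper.
    merged = PeftModel.from_pretrained(base, job.lora_adapter).merge_and_unload()
    merged.save_pretrained(job.output_dir)  # weights remain bfloat16
    AutoTokenizer.from_pretrained(job.base_model).save_pretrained(job.output_dir)


job = LoraMergeJob(
    base_model="shisa-ai/shisa-v2-mistral-nemo-12b",
    lora_adapter="nbeerbower/Mistral-Nemo-12B-abliterated-LORA",
    output_dir="Shisa-v2-Mistral-Nemo-12B-Lorablated",
)
# merge_lora(job)  # uncomment to run; downloads the full model weights
```

`merge_and_unload()` bakes the low-rank update into the base weights, which is what lets the result be saved as a plain `bfloat16` checkpoint ready for deployment or further fine-tuning.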
EtherealLight-12B
Eris-Light-12B
Iris-SCE-12B
PerpetualNight-12B
Gemma2-BlueMoon-9B
Himeyuri-Magnum-12B-v2
Gemma2-Evelyn-9B
HMS-Slerp-12B-Q4_K_M-GGUF
HMS-Slerp-12B-v2-Q4_K_M-GGUF
yamatazen/HMS-Slerp-12B-v2-Q4_K_M-GGUF

This model was converted to GGUF format from `yamatazen/HMS-Slerp-12B-v2` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
KunoichiFusion-7B
HMS-Fusion-12B-Lorablated
- Base Model: `yamatazen/HMS-Fusion-12B`
- LoRA Adapter: `nbeerbower/Mistral-Nemo-12B-abliterated-LORA`

The model is saved in `bfloat16` format and is ready for deployment or fine-tuning.