yamatazen

102 models

FusionEngine 12B Lorablated

- Base model: `yamatazen/FusionEngine-12B`
- LoRA adapter: `nbeerbower/Mistral-Nemo-12B-abliterated-LORA`

The model is saved in `bfloat16` format and is ready for deployment or fine-tuning. A hypothetical sketch of one way to reproduce this kind of adapter bake-in follows below.

534 downloads · 9 likes
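The card does not say which tool applied the adapter. As a rough illustration only, one possible route is mergekit's `model+adapter` syntax, which bakes a LoRA into its base during a merge; this is an assumption, not the method the card actually used:

```yaml
# Hypothetical sketch: bake a LoRA adapter into its base model with mergekit.
# The "+" joins a base model with a LoRA adapter to apply before merging.
merge_method: passthrough
models:
  - model: yamatazen/FusionEngine-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
dtype: bfloat16  # matches the card's stated output format
```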

NeonMaid 12B V2

This is a merge of pre-trained language models created using mergekit. It was merged using the Arcee Fusion merge method with C:\Users\yamat\Desktop\text-generation-webui\userdata\models\NeonMaid-12B as the base. The following model was included in the merge: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\Orihime-Gutenberg-12B. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

355 downloads · 11 likes
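The actual config is not shown above. A minimal hypothetical sketch of an Arcee Fusion merge in mergekit, using the models named in the card (the dtype is an assumption), might look like:

```yaml
# Hypothetical mergekit config for an Arcee Fusion merge (one model into a base).
# Local Windows paths are taken verbatim from the card.
merge_method: arcee_fusion
base_model: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\NeonMaid-12B
models:
  - model: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\Orihime-Gutenberg-12B
dtype: bfloat16
```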

EsotericSage 12B

This is a merge of pre-trained language models created using mergekit. It was merged using the NearSwap merge method with yamatazen/LinearWriter-12B as the base. The following model was included in the merge: yamatazen/ForgottenMaid-12B. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

263 downloads · 8 likes
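For illustration, a minimal hypothetical NearSwap config in mergekit might look like the following; the threshold `t` is an assumed value, not the one actually used for this model:

```yaml
# Hypothetical mergekit config for a NearSwap merge. NearSwap blends the
# secondary model into the base only where their weights are already similar,
# controlled by the threshold t.
merge_method: nearswap
base_model: yamatazen/LinearWriter-12B
models:
  - model: yamatazen/ForgottenMaid-12B
parameters:
  t: 0.0001  # assumed value
dtype: bfloat16
```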

SnowElf 12B

This is a merge of pre-trained language models created using mergekit. It was merged using the TIES merge method with yamatazen/HMS-Slerp-12B-v2 as the base. The following models were included in the merge: inflatebot/MN-12B-Mag-Mell-R1, nbeerbower/mistral-nemo-gutenberg-12B-v4, and PocketDoc/Dans-PersonalityEngine-V1.1.0-12b. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

246 downloads · 7 likes
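A minimal hypothetical TIES config in mergekit for this set of models might look like the following; the per-model weight and density values are assumptions for illustration, not the card's actual settings:

```yaml
# Hypothetical mergekit config for a TIES merge: task vectors are sparsified
# (density) and sign-elected before being summed onto the base.
merge_method: ties
base_model: yamatazen/HMS-Slerp-12B-v2
models:
  - model: inflatebot/MN-12B-Mag-Mell-R1
    parameters:
      weight: 0.33
      density: 0.5
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight: 0.33
      density: 0.5
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
    parameters:
      weight: 0.34
      density: 0.5
dtype: bfloat16
```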

LorablatedStock 12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with C:\Users\yamat\Desktop\text-generation-webui\userdata\models\HMS-Fusion-12B-Lorablated as the base. The following models were included in the merge: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\ForgottenMaid-12B-Lorablated and C:\Users\yamat\Desktop\text-generation-webui\userdata\models\FusionEngine-12B-Lorablated. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

228 downloads · 18 likes
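A minimal hypothetical Model Stock config in mergekit might look like this; Model Stock takes a base plus two or more models and derives the interpolation weights itself, so no per-model parameters are needed (the dtype is an assumption):

```yaml
# Hypothetical mergekit config for a Model Stock merge.
# Local Windows paths are taken verbatim from the card.
merge_method: model_stock
base_model: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\HMS-Fusion-12B-Lorablated
models:
  - model: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\ForgottenMaid-12B-Lorablated
  - model: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\FusionEngine-12B-Lorablated
dtype: bfloat16
```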

Gemma2-Snowflakes-9B

214 downloads · 6 likes

EtherealAurora 12B Lorablated

Created with this tool: https://huggingface.co/spaces/jukofyork/merge-lora

license:apache-2.0 · 82 downloads · 3 likes

Gemma2-Snowflakes-9B-Q4_K_M-GGUF

llama-cpp · 73 downloads · 0 likes

EtherealAurora-12B-Lorablated-Q4_K_M-GGUF

llama-cpp · 64 downloads · 0 likes

Aurora SCE 12B

This is a merge of pre-trained language models created using mergekit. It was merged using the SCE merge method with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as the base. The following models were included in the merge: LatitudeGames/Wayfarer-12B, NeverSleep/Lumimaid-v0.2-12B, Elizezen/Himeyuri-v0.1-12B, and inflatebot/MN-12B-Mag-Mell-R1. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

48 downloads · 15 likes
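A minimal hypothetical SCE config in mergekit might look like this; the base uses mergekit's `model+adapter` syntax exactly as the card states, while `select_topk` is an assumed value:

```yaml
# Hypothetical mergekit config for an SCE merge, which selects the
# highest-variance parameter deltas (select_topk) and fuses them onto the base.
merge_method: sce
base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
models:
  - model: LatitudeGames/Wayfarer-12B
  - model: NeverSleep/Lumimaid-v0.2-12B
  - model: Elizezen/Himeyuri-v0.1-12B
  - model: inflatebot/MN-12B-Mag-Mell-R1
parameters:
  select_topk: 0.1  # assumed value
dtype: bfloat16
```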

SnowElf-12B-Q4_K_M-GGUF

llama-cpp · 24 downloads · 0 likes

LinearWriter-12B

20 downloads · 3 likes

DellaMix-12B

18 downloads · 6 likes

EtherealAurora-12B-Q4_K_M-GGUF

llama-cpp · 15 downloads · 0 likes

ForgottenMaid 12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Arcee Fusion merge method with yamatazen/LoyalMaid-12B as the base. The following model was included in the merge: ReadyArt/Forgotten-Safeword-12B-v4.0. The YAML configuration used to produce this model is not reproduced in this excerpt.

14 downloads · 5 likes

EsotericKnowledge-24B-Q3_K_M-GGUF

llama-cpp · 12 downloads · 0 likes

Shisa-K-12B

11 downloads · 2 likes

Aurora-SCE-12B-Q4_K_M-GGUF

llama-cpp · 11 downloads · 1 like

FlickeringLight-14B-Q4_K_M-GGUF

llama-cpp · 11 downloads · 0 likes

Shisa-v2-Mistral-Nemo-12B-Abliterated-Q4_K_M-GGUF

llama-cpp · 10 downloads · 0 likes

SnowElf-12B-v2-Q4_K_M-GGUF

llama-cpp · 10 downloads · 0 likes

EtherealAurora-12B

license:apache-2.0 · 8 downloads · 12 likes

Twilight-SCE-12B-v2

8 downloads · 7 likes

Luna-Karcher-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Karcher Mean merge method. The following models were included in the merge: unsloth/Mistral-Nemo-Base-2407, Elizezen/Himeyuri-v0.1-12B, and shisa-ai/shisa-v2-mistral-nemo-12b. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

8 downloads · 3 likes
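A minimal hypothetical Karcher Mean config in mergekit might look like the following; the method needs no base model or per-model weights, since it iterates toward the Riemannian barycenter of the listed checkpoints (the dtype is an assumption):

```yaml
# Hypothetical mergekit config for a Karcher Mean merge of three models.
merge_method: karcher
models:
  - model: unsloth/Mistral-Nemo-Base-2407
  - model: Elizezen/Himeyuri-v0.1-12B
  - model: shisa-ai/shisa-v2-mistral-nemo-12b
dtype: bfloat16
```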

Orihime-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Arcee Fusion merge method with shisa-ai/shisa-v2-mistral-nemo-12b as the base. The following model was included in the merge: Elizezen/Himeyuri-v0.1-12B. The YAML configuration used to produce this model is not reproduced in this excerpt.

8 downloads · 2 likes

BlueLight-12B

7 downloads · 6 likes

Gemma2-ObsidianLight-9B

This is a merge of pre-trained language models created using mergekit. It was merged using the SLERP merge method. The following models were included in the merge: yamatazen/Gemma2-Ataraxy-Psycho-9B and yamatazen/Gemma2-Evelyn-9B. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

7 downloads · 0 likes
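A minimal hypothetical SLERP config in mergekit might look like the following. mergekit's slerp requires a `base_model` and exactly two models, so treating Gemma2-Ataraxy-Psycho-9B as the base and using t = 0.5 are assumptions for illustration:

```yaml
# Hypothetical mergekit config for a SLERP merge: spherical interpolation
# between two models, with t controlling the blend (0 = base, 1 = other).
merge_method: slerp
base_model: yamatazen/Gemma2-Ataraxy-Psycho-9B
models:
  - model: yamatazen/Gemma2-Ataraxy-Psycho-9B
  - model: yamatazen/Gemma2-Evelyn-9B
parameters:
  t: 0.5  # assumed value
dtype: bfloat16
```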

EtherealAurora-12B-v3-Q4_K_M-GGUF

llama-cpp · 6 downloads · 1 like

EtherealAurora-12B-v2

license:apache-2.0 · 5 downloads · 27 likes

EsotericKnowledge-24B

5 downloads · 6 likes

NeonMaid-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with yamatazen/Orihime-12B as the base. The following models were included in the merge: C:\Users\yamat\Desktop\text-generation-webui\userdata\models\ForgottenMaid-12B, Delta-Vector/Francois-PE-V2-Huali-12B, and Delta-Vector/Ohashi-NeMo-12B. The YAML configuration used to produce this model is not reproduced in this excerpt.

5 downloads · 4 likes

EsotericLight-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Arcee Fusion merge method with C:\Users\yamat\Desktop\text-generation-webui\userdata\models\Orihime-12B as the base. The following model was included in the merge: yamatazen/EtherealAurora-12B. The YAML configuration used to produce this model is not reproduced in this excerpt.

5 downloads · 3 likes

FusionEngine-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Arcee Fusion merge method with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b as the base. The following model was included in the merge: Delta-Vector/Francois-PE-V2-Huali-12B. The YAML configuration used to produce this model is not reproduced in this excerpt.

5 downloads · 2 likes

Gemma2-Evelyn-9B-Q4_K_M-GGUF

llama-cpp · 5 downloads · 0 likes

BlueLight-12B-Q4_K_M-GGUF

llama-cpp · 5 downloads · 0 likes

Ayla-Light-12B-v2

4 downloads · 7 likes

Shisa-v2-Mistral-Nemo-12B-Abliterated

4 downloads · 4 likes

HMS-Fusion-12B

4 downloads · 3 likes

Twilight-SCE-12B-v2-Q4_K_M-GGUF

llama-cpp · 4 downloads · 2 likes

Shisa-Himeyuri-Nearswap-t0.0005-12B

This model was created to test the NearSwap method (t = 0.0005, per the model name). This is a merge of pre-trained language models created using mergekit. It was merged using the NearSwap merge method with shisa-ai/shisa-v2-mistral-nemo-12b as the base. The following model was included in the merge: Elizezen/Himeyuri-v0.1-12B. The YAML configuration used to produce this model is not reproduced in this excerpt.

4 downloads · 1 like

ForgottenMaid-12B-LoRA-Rank128

This is a LoRA extracted from a language model using mergekit. The adapter was extracted from yamatazen/ForgottenMaid-12B, with unsloth/Mistral-Nemo-Instruct-2407 as the base. The command used to extract this LoRA adapter is not reproduced in this excerpt.

4 downloads · 1 like

EtherealAurora-12B-v2-Q4_K_M-GGUF

llama-cpp · 4 downloads · 0 likes

ElvenMaid-12B-v2-Q4_K_M-GGUF

llama-cpp · 4 downloads · 0 likes

Himeyuri-Magnum-12B

3 downloads · 2 likes

HMS-Slerp-12B

3 downloads · 2 likes

Emilia-Multislerp-12B

This is probably the first Multi-SLERP model on Hugging Face. It is a merge of pre-trained language models created using mergekit, merged using the Multi-SLERP merge method with yamatazen/Orihime-12B as the base. The following models were included in the merge: natong19/Mistral-Nemo-Instruct-2407-abliterated and nbeerbower/mistral-nemo-gutenberg-12B-v4. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

3 downloads · 2 likes
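A minimal hypothetical Multi-SLERP config in mergekit might look like the following; Multi-SLERP interpolates several models on the hypersphere around the base, and the equal weights here are an assumption for illustration:

```yaml
# Hypothetical mergekit config for a Multi-SLERP merge around a base model.
merge_method: multislerp
base_model: yamatazen/Orihime-12B
models:
  - model: natong19/Mistral-Nemo-Instruct-2407-abliterated
    parameters:
      weight: 0.5
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight: 0.5
dtype: bfloat16
```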

Aurora SCE 12B V2

This is a merge of pre-trained language models created using mergekit. It was merged using the SCE merge method with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as the base. The following models were included in the merge: inflatebot/MN-12B-Mag-Mell-R1, NeverSleep/Lumimaid-v0.2-12B, Elizezen/Himeyuri-v0.1-12B, cyberagent/Mistral-Nemo-Japanese-Instruct-2408, and nbeerbower/mistral-nemo-gutenberg-12B-v4. The YAML configuration used to produce this model is not reproduced in this excerpt.

3 downloads · 2 likes

ForgottenMaid-12B-Lorablated

- Base model: `yamatazen/ForgottenMaid-12B`
- LoRA adapter: `nbeerbower/Mistral-Nemo-12B-abliterated-LORA`

The model is saved in `bfloat16` format and is ready for deployment or fine-tuning.

3 downloads · 1 like

PerpetualNight-12B-Q4_K_M-GGUF

llama-cpp · 3 downloads · 0 likes

Himeyuri-Magnum-12B-Q4_K_M-GGUF

llama-cpp · 3 downloads · 0 likes

ElvenMaid-12B-Stock-Q4_K_M-GGUF

llama-cpp · 3 downloads · 0 likes

Ayla-Light-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the SLERP merge method. The following models were included in the merge: nbeerbower/mistral-nemo-gutenberg-12B-v4 + nbeerbower/Mistral-Nemo-12B-abliterated-LORA and Elizezen/Himeyuri-v0.1-12B. The YAML configuration used to produce this model is not reproduced in this excerpt.

2 downloads · 4 likes

Gemma2-Alicia-9B

2 downloads · 3 likes

Ayla-Light-12B-v3

2 downloads · 2 likes

Orihime-Gutenberg-12B

2 downloads · 2 likes

ForgottenMaid-12B-v2

2 downloads · 2 likes

Shirayukihime-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the TIES merge method with natong19/Mistral-Nemo-Instruct-2407-abliterated as the base. The following models were included in the merge: shisa-ai/shisa-v2-mistral-nemo-12b and Elizezen/Himeyuri-v0.1-12B. The YAML configuration used to produce this model is not reproduced in this excerpt.

2 downloads · 2 likes

Ayla-Light-Extra

An experimental frankenmerge (depth-upscaled) model. This is a merge of pre-trained language models created using mergekit, merged using the Passthrough merge method. The following model was included in the merge: yamatazen/Ayla-Light-12B-Stock. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

2 downloads · 1 like
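A minimal hypothetical passthrough upscale config in mergekit might look like the following; in a depth upscale, layer spans of one model are stacked with an overlap, and the layer ranges below are assumptions, not the card's actual slicing:

```yaml
# Hypothetical mergekit config for a passthrough frankenmerge (depth upscale)
# built from a single 40-layer Mistral-Nemo-based model.
merge_method: passthrough
slices:
  - sources:
      - model: yamatazen/Ayla-Light-12B-Stock
        layer_range: [0, 32]   # assumed range
  - sources:
      - model: yamatazen/Ayla-Light-12B-Stock
        layer_range: [24, 40]  # assumed range (overlapping span)
dtype: bfloat16
```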

Eris-Light-12B-v2

2 downloads · 1 like

Amelia-SCE-12B

2 downloads · 1 like

Shirayuki-SCE-9B

2 downloads · 1 like

EtherealMoon-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as the base. The following models were included in the merge: NeverSleep/Lumimaid-v0.2-12B, HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407, Elizezen/Himeyuri-v0.1-12B, TheDrummer/Rocinante-12B-v1.1, inflatebot/MN-12B-Mag-Mell-R1, and nbeerbower/mistral-nemo-gutenberg-12B-v4. The YAML configuration used to produce this model is not reproduced in this excerpt.

2 downloads · 1 like

EtherealNight-12B

2 downloads · 1 like

LoyalMaid-12B-Q4_K_M-GGUF

llama-cpp · 2 downloads · 1 like

MidnightMoon-16B

2 downloads · 1 like

Gemma2-Ataraxy-Psycho-9B

This is a merge of pre-trained language models created using mergekit. It was merged using the DELLA merge method with UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 as the base. The following models were included in the merge: lemon07r/Gemma-2-Ataraxy-v4d-9B and ehristoforu/Gemma2-9B-it-psy10k-mentalhealth. The YAML configuration used to produce this model is not reproduced in this excerpt; a hypothetical sketch follows below.

2 downloads · 1 like
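A minimal hypothetical DELLA config in mergekit might look like the following; the weight, density, and epsilon values are assumptions for illustration, not the card's actual settings:

```yaml
# Hypothetical mergekit config for a DELLA merge: task-vector deltas are
# pruned with magnitude-dependent probabilities (density/epsilon) before fusing.
merge_method: della
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
models:
  - model: lemon07r/Gemma-2-Ataraxy-v4d-9B
    parameters:
      weight: 0.5
      density: 0.5
  - model: ehristoforu/Gemma2-9B-it-psy10k-mentalhealth
    parameters:
      weight: 0.5
      density: 0.5
parameters:
  epsilon: 0.05  # assumed value
dtype: bfloat16
```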

Gemma2-Alicia-9B-Q4_K_M-GGUF

llama-cpp · 2 downloads · 0 likes

LoyalMaid-12B

1 download · 6 likes

EtherealAurora-12B-v3

1 download · 3 likes

Ayla-Light-12B-Stock

1 download · 2 likes

FlickeringLight-14B

1 download · 2 likes

Shisa-DellaTest-12B

1 download · 2 likes

Iris-Light-12B

1 download · 1 like

L3-GothicMaid-8B

This is a merge of pre-trained language models created using mergekit. It was merged using the TIES merge method with Sao10K/L3-8B-Stheno-v3.2 as the base. The following models were included in the merge: FPHam/L3-8B-Everything-COT and HumanLLMs/Human-Like-LLama3-8B-Instruct. The YAML configuration used to produce this model is not reproduced in this excerpt.

llama · 1 download · 1 like

L3-GothicMaid-8B-Q4_K_M-GGUF

This model was converted to GGUF format from `yamatazen/L3-GothicMaid-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source, move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp · 1 download · 0 likes

L3-GothicMaid-Upscaled-11B-Q4_K_M-GGUF

llama-cpp · 1 download · 0 likes

Himeyuri-Magnum-12B-v2-Q4_K_M-GGUF

llama-cpp · 1 download · 0 likes

ElvenMaid-12B-v3-Q4_K_M-GGUF

llama-cpp · 1 download · 0 likes

ElvenMaid-12B-v2

0 downloads · 4 likes

L3-GothicMaid-Upscaled-11B

llama · 0 downloads · 3 likes

ElvenMaid-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the TIES merge method with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b as the base. The following models were included in the merge: yamatazen/LoyalMaid-12B, inflatebot/MN-12B-Mag-Mell-R1, and yamatazen/Himeyuri-Magnum-12B. The YAML configuration used to produce this model is not reproduced in this excerpt.

0 downloads · 3 likes

SnowElf-12B-v2

0 downloads · 3 likes

Irida-SCE-9B

This is a merge of pre-trained language models created using mergekit. It was merged using the SCE merge method with IlyaGusev/gemma-2-9b-it-abliterated as the base. The following models were included in the merge: AXCXEPT/EZO-Humanities-9B-gemma-2-it, AXCXEPT/EZO-Common-9B-gemma-2-it, and lemon07r/Gemma-2-Ataraxy-v4d-9B. The YAML configuration used to produce this model is not reproduced in this excerpt.

0 downloads · 2 likes

NightWind-12B

0 downloads · 2 likes

ElvenMaid-12B-v3

0 downloads · 2 likes

ElvenMaid-12B-Stock

0 downloads · 2 likes

StarrySky-12B

0 downloads · 2 likes

Twilight-SCE-12B

0 downloads · 2 likes

HMS-Slerp-12B-v2

This is a merge of pre-trained language models created using mergekit. It was merged using the SLERP merge method. The following models were included in the merge: yamatazen/Himeyuri-Magnum-12B and yamatazen/Shisa-v2-Mistral-Nemo-12B-Abliterated. The YAML configuration used to produce this model is not reproduced in this excerpt.

0 downloads · 2 likes

KnowledgeCore-12B

This is a merge of pre-trained language models created using mergekit. It was merged using the Arcee Fusion merge method with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b as the base. The following model was included in the merge: inflatebot/MN-12B-Mag-Mell-R1. The YAML configuration used to produce this model is not reproduced in this excerpt.

0 downloads · 2 likes

Shisa-v2-Mistral-Nemo-12B-Lorablated

- Base model: `shisa-ai/shisa-v2-mistral-nemo-12b`
- LoRA adapter: `nbeerbower/Mistral-Nemo-12B-abliterated-LORA`

The model is saved in `bfloat16` format and is ready for deployment or fine-tuning. The card also includes code for LoRA merging (generated by Qwen3), not reproduced in this excerpt.

0 downloads · 2 likes

EtherealLight-12B

0 downloads · 1 like

Eris-Light-12B

0 downloads · 1 like

Iris-SCE-12B

0 downloads · 1 like

PerpetualNight-12B

0 downloads · 1 like

Gemma2-BlueMoon-9B

0 downloads · 1 like

Himeyuri-Magnum-12B-v2

0 downloads · 1 like

Gemma2-Evelyn-9B

0 downloads · 1 like

HMS-Slerp-12B-Q4_K_M-GGUF

llama-cpp · 0 downloads · 1 like

HMS-Slerp-12B-v2-Q4_K_M-GGUF

This model was converted to GGUF format from `yamatazen/HMS-Slerp-12B-v2` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source, move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp · 0 downloads · 1 like

KunoichiFusion-7B

0 downloads · 1 like

HMS-Fusion-12B-Lorablated

- Base model: `yamatazen/HMS-Fusion-12B`
- LoRA adapter: `nbeerbower/Mistral-Nemo-12B-abliterated-LORA`

The model is saved in `bfloat16` format and is ready for deployment or fine-tuning.

0 downloads · 1 like