knifeayumu

25 models

LLM_Collection

Notable deleted models:

- `OnlyChat-Miqu-v1.q4km.gguf` (uploaded by OnlyThings; it was up for only a few hours before being deleted from HF)
- `NeteLegacy-13B.q5km.gguf` (I believe it's the first version of Nete by Undi95, deleted for being too NovelAI)


Cydonia-v1.3-Magnum-v4-22B-GGUF

2,731 downloads · 23 likes

Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-GGUF

Llama.cpp quantizations of knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B.

Original model: knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B

| Filename | Quant type | File Size |
| -------- | ---------- | --------- |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-F16.gguf | F16 | 47.15 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q8_0.gguf | Q8_0 | 25.05 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q6_K.gguf | Q6_K | 19.35 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q5_K_M.gguf | Q5_K_M | 16.76 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q5_K_S.gguf | Q5_K_S | 16.30 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf | Q4_K_M | 14.33 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q4_K_S.gguf | Q4_K_S | 13.55 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q3_K_L.gguf | Q3_K_L | 12.40 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q3_K_M.gguf | Q3_K_M | 11.47 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q3_K_S.gguf | Q3_K_S | 10.40 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q2_K.gguf | Q2_K | 8.89 GB |

Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B. The PNG file above includes a workflow for FLUX Kontext Dev with ComfyUI, utilising pollockjj/ComfyUI-MultiGPU nodes and two input images without stitching.

This is a merge of pre-trained language models created using mergekit, using the SLERP merge method. The following models were included in the merge:

- TheDrummer/Cydonia-24B-v4.1
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond

The following YAML configuration was used to produce this model:

license: apache-2.0 · 985 downloads · 5 likes
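The card's actual YAML recipe is not reproduced above. As an illustration only, a mergekit SLERP config for a two-model merge like this one generally takes the following shape; the `layer_range`, `base_model` choice, and interpolation factor `t` here are assumptions, not the published recipe:

```yaml
# Illustrative mergekit SLERP config (not the actual recipe).
# layer_range, base_model, and t are assumed values.
slices:
  - sources:
      - model: TheDrummer/Cydonia-24B-v4.1
        layer_range: [0, 40]
      - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
        layer_range: [0, 40]
merge_method: slerp
base_model: TheDrummer/Cydonia-24B-v4.1
parameters:
  t: 0.5  # 0.5 = halfway between the two models along the spherical path
dtype: bfloat16
```

A config of this shape is run with mergekit's `mergekit-yaml` CLI to produce the merged weights, which are then quantized to the GGUF files listed in the table.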

Cydonia-v1.2-Magnum-v4-22B-GGUF

349 downloads · 4 likes

Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF

license: apache-2.0 · 169 downloads · 0 likes

Lite-Cydonia-22B-v1.1-50-50-GGUF

144 downloads · 0 likes

Llama-3.1-Herrsimian-8B-GGUF

base_model: lemonilia/Llama-3.1-Herrsimian-8B · 138 downloads · 1 like

Lite-Cydonia-22B-v1.1-75-25-GGUF

136 downloads · 0 likes

Magnum-v4-Cydonia-v1.2-22B-GGUF

129 downloads · 1 like

Rocinante-12B-v1-nemo-sunfall-v0.6.1-SLERP-GGUF

license: apache-2.0 · 64 downloads · 1 like

Cydonia-v1.3-Magnum-v4-22B

53 downloads · 54 likes

Negative-Anubis-70B-v1

llama · 42 downloads · 10 likes

Cydonia-v4.1-MS3.2-Magnum-Diamond-24B

Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B.

Just an update for those who are interested: Wan-AI/Wan2.2-I2V-A14B was used to turn an image from the previous merge (slightly cropped) into an animation, utilising lightx2v/Wan2.2-Lightning for faster generation and pollockjj/ComfyUI-MultiGPU nodes.

This is a merge of pre-trained language models created using mergekit, using the SLERP merge method. The following models were included in the merge:

- TheDrummer/Cydonia-24B-v4.1
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond

The following YAML configuration was used to produce this model:

license: apache-2.0 · 30 downloads · 9 likes

Cydonia-v4-MS3.2-Magnum-Diamond-24B

Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B, because Doctor-Shotgun/MS3.2-24B-Magnum-Diamond on its own is still too horny and verbose.

The PNG file above includes a workflow for FLUX Kontext Dev with ComfyUI, utilising pollockjj/ComfyUI-MultiGPU nodes and two input images without stitching.

This is a merge of pre-trained language models created using mergekit, using the SLERP merge method. The following models were included in the merge:

- TheDrummer/Cydonia-24B-v4
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond

The following YAML configuration was used to produce this model:

license: apache-2.0 · 18 downloads · 10 likes

Cydonia-v1.2-v1.3-22B

5 downloads · 2 likes

Cydonia-v1.2-Magnum-v4-22B

3 downloads · 21 likes

Behemoth-v1.1-Magnum-v4-123B

3 downloads · 5 likes

Cydonia-v1.2-v1.3-Magnum-v4-22B

3 downloads · 4 likes

Rocinante-12B-v1-nemo-sunfall-v0.6.1-SLERP

license: apache-2.0 · 1 download · 1 like

StableDiffusion1.5_Collection


Magnum-v4-Cydonia-v1.2-22B

0 downloads · 5 likes

StableDiffusionXL Collection


oppai_loli


Behemoth-v1.2-Magnum-v4-123B

A worthy successor, as v2 didn't meet expectations. SLERPed with less Magnum, as some people reported it being too horny and maybe less coherent.

This is a merge of pre-trained language models created using mergekit, using the SLERP merge method. The following models were included in the merge:

- TheDrummer/Behemoth-123B-v1.2
- anthracite-org/magnum-v4-123b

The following YAML configuration was used to produce this model:

0 downloads · 4 likes
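"Less Magnum" in a SLERP merge corresponds to an interpolation factor `t` below 0.5, which keeps the result closer to the Behemoth endpoint. As an illustrative sketch only (the `layer_range` and `t` values here are assumptions, not the published recipe):

```yaml
# Illustrative only: t < 0.5 biases the merge toward Behemoth ("less Magnum").
# layer_range and t are assumed values, not the actual recipe.
slices:
  - sources:
      - model: TheDrummer/Behemoth-123B-v1.2
        layer_range: [0, 88]
      - model: anthracite-org/magnum-v4-123b
        layer_range: [0, 88]
merge_method: slerp
base_model: TheDrummer/Behemoth-123B-v1.2
parameters:
  t: 0.3  # assumed; smaller t = less magnum-v4 influence
dtype: bfloat16
```

mergekit also accepts per-filter `t` values (e.g. different factors for attention and MLP weights), so "less Magnum" can be applied unevenly across weight types rather than as a single global factor.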

Anubis-v1-Magnum-v4-SE-70B

llama · 0 downloads · 1 like