knifeayumu
LLM_Collection
Notable deleted models:
- `OnlyChat-Miqu-v1.q4km.gguf` (uploaded by OnlyThings, up for only a few hours [?] before being deleted from HF)
- `NeteLegacy-13B.q5km.gguf` (I believe it's the first version of Nete by Undi95, deleted for being too NovelAI)
Cydonia-v1.3-Magnum-v4-22B-GGUF
Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-GGUF
Llamacpp Quantizations of knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B

Original model: knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B

| Filename | Quant type | File Size |
| -------- | ---------- | --------- |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-F16.gguf | F16 | 47.15 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q8_0.gguf | Q8_0 | 25.05 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q6_K.gguf | Q6_K | 19.35 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q5_K_M.gguf | Q5_K_M | 16.76 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q5_K_S.gguf | Q5_K_S | 16.30 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf | Q4_K_M | 14.33 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q4_K_S.gguf | Q4_K_S | 13.55 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q3_K_L.gguf | Q3_K_L | 12.40 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q3_K_M.gguf | Q3_K_M | 11.47 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q3_K_S.gguf | Q3_K_S | 10.40 GB |
| Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q2_K.gguf | Q2_K | 8.89 GB |

Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B. The PNG file above includes a workflow for FLUX Kontext Dev with ComfyUI, utilising pollockjj/ComfyUI-MultiGPU nodes and two input images without stitching.

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- TheDrummer/Cydonia-24B-v4.1
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond

The following YAML configuration was used to produce this model:
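As a rough sanity check on the sizes in the table above, dividing file size by parameter count gives the approximate bits stored per weight. A minimal sketch, assuming the listed sizes are decimal gigabytes and a round 24B parameter count (both approximations, and GGUF header/tokenizer overhead is ignored):

```python
def bits_per_weight(file_size_gb, n_params_billions):
    """Approximate stored bits per weight for a quantized model file."""
    # Assumes decimal GB (10^9 bytes) and ignores non-weight overhead
    # such as the GGUF header and tokenizer data.
    total_bits = file_size_gb * 1e9 * 8
    return total_bits / (n_params_billions * 1e9)

# The Q4_K_M file from the table: 14.33 GB over ~24B parameters.
print(bits_per_weight(14.33, 24))  # → roughly 4.78 bits per weight
```

This matches the expectation that the K-quants store slightly more than their nominal bit width, since some tensors are kept at higher precision.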
Cydonia-v1.2-Magnum-v4-22B-GGUF
Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF
Lite-Cydonia-22B-v1.1-50-50-GGUF
Llama-3.1-Herrsimian-8B-GGUF
Lite-Cydonia-22B-v1.1-75-25-GGUF
Magnum-v4-Cydonia-v1.2-22B-GGUF
Rocinante-12B-v1-nemo-sunfall-v0.6.1-SLERP-GGUF
Cydonia-v1.3-Magnum-v4-22B
Negative-Anubis-70B-v1
Cydonia-v4.1-MS3.2-Magnum-Diamond-24B
Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B.

Just an update for those who are interested: Wan-AI/Wan2.2-I2V-A14B was used to turn this image from the previous merge (slightly cropped) into an animation, utilising lightx2v/Wan2.2-Lightning for faster generation and pollockjj/ComfyUI-MultiGPU nodes.

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- TheDrummer/Cydonia-24B-v4.1
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond

The following YAML configuration was used to produce this model:
Cydonia-v4-MS3.2-Magnum-Diamond-24B
Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B, because the model Doctor-Shotgun/MS3.2-24B-Magnum-Diamond is still too horny and verbose. The PNG file above includes a workflow for FLUX Kontext Dev with ComfyUI, utilising pollockjj/ComfyUI-MultiGPU nodes and two input images without stitching.

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- TheDrummer/Cydonia-24B-v4
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond

The following YAML configuration was used to produce this model:
Cydonia-v1.2-v1.3-22B
Cydonia-v1.2-Magnum-v4-22B
Behemoth-v1.1-Magnum-v4-123B
Cydonia-v1.2-v1.3-Magnum-v4-22B
Rocinante-12B-v1-nemo-sunfall-v0.6.1-SLERP
StableDiffusion1.5_Collection
Magnum-v4-Cydonia-v1.2-22B
StableDiffusionXL Collection
oppai_loli
Behemoth-v1.2-Magnum-v4-123B
A worthy successor, as v2 didn't meet expectations. SLERPed with less Magnum, as some people have reported the earlier merge being too horny and maybe less coherent.

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- TheDrummer/Behemoth-123B-v1.2
- anthracite-org/magnum-v4-123b

The following YAML configuration was used to produce this model:
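The SLERP method used in the merges above interpolates each pair of corresponding weight tensors along the arc between their directions instead of along a straight line, which preserves magnitude better than plain averaging. A minimal plain-Python sketch of the underlying formula (illustrative only, not mergekit's actual implementation, which works tensor-by-tensor and supports per-layer interpolation weights):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; "less magnum" in the merges above
    corresponds to a t biased toward the Cydonia/Behemoth side.
    """
    # Cosine of the angle between the two vectors.
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if abs(theta) < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Midpoint of two orthogonal unit vectors stays on the unit circle.
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # → [0.7071..., 0.7071...]
```

Unlike a plain weighted average (which would give [0.5, 0.5] here, a shorter vector), the SLERP midpoint keeps unit length, which is one reason it is a popular choice for two-model merges.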