# BruhzWater

## Sapphira-L3.3-70b-0.1
Storytelling and RP model with increased coherence, thanks to cogito-v2-preview-llama-70B.

- iMatrix quants: https://huggingface.co/mradermacher/Sapphira-L3.3-70b-0.1-i1-GGUF
- Static quants: https://huggingface.co/mradermacher/Sapphira-L3.3-70b-0.1-GGUF

This model was merged using the Multi-SLERP merge method using deepcogito/cogito-v2-preview-llama-70B as a base.

The following models were included in the merge:

- BruhzWater/Apocrypha-L3.3-70b-0.3
- BruhzWater/Serpents-Tongue-L3.3-70b-0.3

The following YAML configuration was used to produce this model:
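The card's actual YAML is not reproduced here. Purely as a hedged illustration of what a mergekit Multi-SLERP config over this base and these two models could look like (the weights and dtype below are assumptions, not the author's settings):

```yaml
# Hypothetical mergekit config -- NOT the author's actual settings.
# Multi-SLERP spherically interpolates the merged models' weights
# relative to the base model.
merge_method: multislerp
base_model: deepcogito/cogito-v2-preview-llama-70B
models:
  - model: BruhzWater/Apocrypha-L3.3-70b-0.3
    parameters:
      weight: 0.5   # assumed equal weighting
  - model: BruhzWater/Serpents-Tongue-L3.3-70b-0.3
    parameters:
      weight: 0.5   # assumed equal weighting
dtype: bfloat16     # assumed
```

Running `mergekit-yaml config.yml ./output` with a config of this shape would produce the merged checkpoint; the real weights used for Sapphira are not published in this card.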
## Sapphira-L3.3-70b-0.2
Storytelling and RP model similar to BruhzWater/Sapphira-L3.3-70b-0.1, but a little spicier. I prefer the prose of this one over the original. It has a bit more of BruhzWater/Serpents-Tongue-L3.3-70b-0.3, which consists of:

- TheDrummer/Anubis-70B-v1.1
- TheDrummer/Fallen-Llama-3.3-70B-v1
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-mhnnn-x1
- Ppoyaa/MythoNemo-L3.1-70B-v1.0
- BruhzWater/Eden-L3.3-70b-0.3

Quants:

- Static quants: https://huggingface.co/mradermacher/Sapphira-L3.3-70b-0.2-GGUF
- iMatrix quants: https://huggingface.co/mradermacher/Sapphira-L3.3-70b-0.2-i1-GGUF

This model was merged using the Multi-SLERP merge method using deepcogito/cogito-v2-preview-llama-70B as a base.

### Models Merged

The following models were included in the merge:

- BruhzWater/Serpents-Tongue-L3.3-70b-0.3
- BruhzWater/Apocrypha-L3.3-70b-0.3

The following YAML configuration was used to produce this model:
## Serpents-Tongue-L3.3-70b-0.3
- Static quants: https://huggingface.co/mradermacher/Serpents-Tongue-L3.3-70b-0.3-GGUF
- iMatrix quants: https://huggingface.co/mradermacher/Serpents-Tongue-L3.3-70b-0.3-i1-GGUF

This model was merged using the SCE merge method using prototype-0.4x257 as a base. Base details: https://huggingface.co/BruhzWater/Eden-L3.3-70b-0.3

The following models were included in the merge:

- TheDrummer/Anubis-70B-v1.1 (super duper detail sauce)
- TheDrummer/Fallen-Llama-3.3-70B-v1 (big bad mean sauce)
- Ppoyaa/MythoNemo-L3.1-70B-v1.0 (ultra mega writing sauce)
- Sao10K/L3.1-70B-Hanami-x1 (turbo deluxe smut sauce)
- Sao10K/70B-L3.3-mhnnn-x1 (giga max ??? sauce)

The following YAML configuration was used to produce this model:

Deep Cogito - https://huggingface.co/deepcogito/cogito-v1-preview-llama-70B
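The actual SCE configuration is likewise not included in this card. As a rough, hypothetical sketch only (the `select_topk` value and dtype are assumptions), a mergekit SCE config over these five models could look like:

```yaml
# Hypothetical mergekit config -- illustrative only, not the real recipe.
# SCE keeps the highest-variance parameter deltas from each donor model
# (controlled by select_topk) and fuses them onto the base.
merge_method: sce
base_model: BruhzWater/Eden-L3.3-70b-0.3   # "prototype-0.4x257" per the card
models:
  - model: TheDrummer/Anubis-70B-v1.1
  - model: TheDrummer/Fallen-Llama-3.3-70B-v1
  - model: Ppoyaa/MythoNemo-L3.1-70B-v1.0
  - model: Sao10K/L3.1-70B-Hanami-x1
  - model: Sao10K/70B-L3.3-mhnnn-x1
parameters:
  select_topk: 0.1   # assumed; fraction of elements retained per tensor
dtype: bfloat16      # assumed
```

The same shape applies to the other SCE merges below (Eden, Liliths-Whisper, Apocrypha), with the base and model list swapped accordingly.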
## Liliths-Whisper-L3.3-70b-0.1

## Edens-Fall-L3.3-70b-0.3b

## Edens-Fall-L3.3-70b-0.3c

## Apocrypha-L3.3-70b-0.3

## Eden-L3.3-70b-0.3
Foundation model for creative writing and RP. Stage 1 of a 3-stage merge.

This model was merged using the SCE merge method using deepcogito/cogito-v1-preview-llama-70B as a base.

The following models were included in the merge:

- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- Delta-Vector/Austral-70B-Winton
- watt-ai/watt-tool-70B
- zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B
- marcelbinz/Llama-3.1-Centaur-70B

The following YAML configuration was used to produce this model:

Deep Cogito - https://huggingface.co/deepcogito/cogito-v1-preview-llama-70B
## Forbidden-Fruit-L3.3-70b-0.2a

## Eden-L3.3-70b-0.4a

## Liliths-Whisper-L3.3-70b-0.2b

## Liliths-Whisper-L3.3-70b-0.2a
If you like this model, go support the original creators!

This model was merged using the SCE merge method using BruhzWater/Eden-L3.3-70b-0.4a as a base.

The following models were included in the merge:

- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v2
- ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.1
- TheDrummer/Fallen-Llama-3.3-70B-v1
- Sao10K/L3.3-70B-Euryale-v2.3
- Delta-Vector/Shimamura-70B

The following YAML configuration was used to produce this model:

Deep Cogito - https://huggingface.co/deepcogito/cogito-v1-preview-llama-70B
## Apocrypha-L3.3-70b-0.4a
Storytelling and Creative Writing model. (Work in progress)

My most stable merge yet /s. Llama-3-70B-Instruct-Storywriter is tough to work with, but I like it too much to exclude it. It'd be sick if it had an L3.3 version. This iteration of Apocrypha has Wayfarer-Large-70B instead of EVA-LLaMA-3.33-70B-v0.0; Wayfarer seems to help the model actually end its responses instead of going on forever. I find it to be a nice addition.

If you like this model, go support the original creators! (summon this guy: https://huggingface.co/tdrussell)

This model was merged using the SCE merge method using BruhzWater/Eden-L3.3-70b-0.4a as a base.

The following models were included in the merge:

- tdrussell/Llama-3-70B-Instruct-Storywriter
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- TheDrummer/Fallen-Llama-3.3-70B-v1
- Doctor-Shotgun/L3.3-70B-Magnum-Diamond

The following YAML configuration was used to produce this model: