Retreatcost

18 models

KansenSakura-Radiance-RP-12b-Q8_0-GGUF

llama-cpp
762
2

KansenSakura Erosion RP 12b

Acknowledgments (mad lads who provided feedback):
- yamatazen: for high quality model merges
- OG model authors: for making cool models
- Arcee AI: for making mergekit
- Team mradermacher: for awesome quants
- DeathGodlike: for awesome quants in EXL3
- Model mergers: Rickyz, Vortex, Edward Eagley
- You, for trying out my models

Feedback:

> I've been using KSR for a week now, and I really like it. It is very creative and has brought together story threads in a natural way, even older ones at large context. I really like it so far. (Pacoeltaco)

> I've had good results so far with my testing: smart (for a 12B), follows prompts, and uses things from the character card. The only negative I've found so far is that it likes em dashes. (Background-Ad-5398)

> I've been comparing this against a couple of finetunes and this model's balance is impressive. I hope to see more soon! (gggrandma1990)

CPU setup:
- Inference: CPU
- Memory: 12 GB RAM
- Context window: 8K tokens
- Temp: 0.65

GPU setup:
- Inference: GPU
- Memory: 16 GB+ VRAM
- Context window: 16K tokens
- Temp: 0.80
- Courage: High

Sampler settings:
- Temp: 0.65-0.8 | RepPen: 1.05
- Top-P: 0.95 | Min-P: 0.05 | Top-K: 0
- Template format: ChatML | Context: 16K

Features:
- ADVANCED PSYCHOLOGICAL PROFILING: characters feel realer than real, with all the darkness that implies
- DREAD ATMOSPHERIC SYSTEMS: environments that breathe, bleed, and remember your fears
- EMOTIONAL EROSION ENGINE: watch characters unravel under psychological pressure
- UNFILTERED NARRATIVE DEPTH: darker than Eclipse, more psychologically intense than Radiance

Q: IS THIS MODEL BETTER THAN X?
A: Different weapon, different war. Erosion specializes in psychological depth and darker narratives. It's smarter than previous versions, but choose your tool for the mission.

Q: WHEN'S THE NEXT VERSION COMING?
A: Currently experimenting with finetuning tech. Expect experimental merges while we develop the next major build.

Q: PLANNING OTHER ARCHITECTURES?
A: Affirmative. Reconnaissance is underway for larger models: Mistral Small and Qwen3 are potential candidates for future deployments.

They told us to extract the sakura essence. They never told us it would remember. KansenSakura: Erosion represents the final stage of neural corruption, where beauty becomes a weapon and every memory turns to poison. Some blossoms grow best in poisoned soil. Welcome to the garden.

Psychological Horror | Intense Themes | Complex Narratives | NSFW
KansenSakura Project | Containment Failed
RATED M for MATURE

This is a merge of pre-trained language models created using mergekit. This model was merged using the Multi-SLERP merge method.
The following models were included in the merge:
- Retreatcost/Irix-mpf-stock
- Retreatcost/Forgotten-directive-Neon-stock
- Retreatcost/Lorablated-w2bb-psy-della
- mistralai/Mistral-Nemo-Base-2407
- yamatazen/EtherealAurora-12B-Lorablated
- Retreatcost/Shisa-K-sakurization
- Sicarius-Prototyping/ImpishLongtail12B
- SuperbEmphasis/MN-12b-RP-Ink-RP-Longform

The following YAML configuration was used to produce this model:
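The YAML itself is not reproduced in this listing. Purely as an illustration (the base model choice, weights, and dtype below are hypothetical placeholders, not the values actually used), a Multi-SLERP mergekit configuration has this general shape:

```yaml
# Hypothetical sketch; not the configuration actually used.
merge_method: multislerp
base_model: mistralai/Mistral-Nemo-Base-2407
models:
  - model: Retreatcost/Irix-mpf-stock
    parameters:
      weight: 1.0
  - model: Retreatcost/Forgotten-directive-Neon-stock
    parameters:
      weight: 1.0
  - model: yamatazen/EtherealAurora-12B-Lorablated
    parameters:
      weight: 1.0
  # ...the remaining listed models follow the same pattern
dtype: bfloat16
```

Multi-SLERP generalizes spherical interpolation to more than two endpoints, which is why each contributing model gets its own weight rather than a single interpolation factor.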

license:apache-2.0
519
22

VerbaMaxima 12B

This is a merge of pre-trained language models created using mergekit. An experimental merge aimed at a model with solid writing but limited "purple" prose. I used natong19/Mistral-Nemo-Instruct-2407-abliterated as a base and created an intermediate model using Model Stock, combining:
- TheDrummer/UnslopNemo-12B-v4
- allura-org/Tlacuilo-12B
- Trappu/Magnum-Picaro-0.7-v2-12b

After that I used Task Arithmetic to combine this model with DreadPoor/Famino-12B-ModelStock, applying a negative lambda as an experiment. The result is a model that deviates from predictable structure and creates a less theatrical experience. While not immediately punchy, it delivers more nuanced and believable interactions with improved world-building. It's still a highly experimental merge in the realm of Mad Science™, so expect some aspects not to work as intended, but it may actually have some potential for roleplaying and co-writing, so it might be worth trying out.

This model was merged using the Task Arithmetic merge method using ./verbamedium as a base. The following models were included in the merge:
- DreadPoor/Famino-12B-ModelStock

The following YAML configuration was used to produce this model:
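The actual YAML is not shown in this listing. As a sketch only (the weight and lambda values are hypothetical; only the sign of lambda follows the description above), a Task Arithmetic mergekit config with a negative lambda would look roughly like:

```yaml
# Hypothetical sketch; not the configuration actually used.
merge_method: task_arithmetic
base_model: ./verbamedium
models:
  - model: DreadPoor/Famino-12B-ModelStock
    parameters:
      weight: 1.0
parameters:
  lambda: -0.5  # negative scaling of the merged task vector, per the experiment described
dtype: bfloat16
```

With a negative lambda, the merged task vector is subtracted from the base rather than added, which is what pushes the result away from the donor model's predictable structure.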

license:apache-2.0
133
4

Darkstar 12B

I combined Violet-Lyra-Gutenberg-v2, mini-magnum-12b-v1.1, and MN-Dark-Planet-TITAN-12B using Karcher Mean. Then I used Dark-Desires-12B-v1.0 as a base and merged it with Darkness-Incarnate-12B-Nemo-v2.2 using Arcee Fusion. These intermediate models were combined using NearSwap. The resulting model is pretty NSFW-heavy, with graphic descriptions and foul language, and tends to create spooky, horror-infused scenarios. It sometimes shows refusals; I tried adding an abliterated LoRA, and while that totally worked, it also watered down the model a lot, so I decided to keep it as is.

This is a merge of pre-trained language models created using mergekit. This model was merged using the NearSwap merge method using carnaldesires as a base. The following models were included in the merge:
- darkness

The following YAML configuration was used to produce this model:
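The actual YAML is not reproduced here. As an illustration only (the t value is a hypothetical placeholder), a NearSwap mergekit config has this general shape:

```yaml
# Hypothetical sketch; not the configuration actually used.
merge_method: nearswap
base_model: carnaldesires
models:
  - model: darkness
parameters:
  t: 0.0001  # similarity threshold: only weights already close to the base are swapped toward the secondary model
dtype: bfloat16
```

NearSwap's single parameter controls how aggressively near-identical weights are pulled toward the secondary model, which makes it a gentle way to blend two already-related intermediates.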

117
5

Impish LongPen 12B

A Karcher Mean merge of Sicarius-Prototyping/ImpishLongtail12B and SuperbEmphasis/MN-12b-RP-Ink-RP-Longform, used in KansenSakura-Erosion-RP-12b. The merge itself took a long-ass time, so I'm probably not going to repeat similar experiments.

This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge:
- SuperbEmphasis/MN-12b-RP-Ink-RP-Longform
- Sicarius-Prototyping/ImpishLongtail12B

The following YAML configuration was used to produce this model:
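The actual YAML is not shown in this listing. As a sketch only (the iteration settings are hypothetical placeholders), a Karcher Mean mergekit config looks roughly like:

```yaml
# Hypothetical sketch; not the configuration actually used.
merge_method: karcher
models:
  - model: SuperbEmphasis/MN-12b-RP-Ink-RP-Longform
  - model: Sicarius-Prototyping/ImpishLongtail12B
parameters:
  max_iter: 99   # iterative fixed-point solve; more iterations = slower merge
  tol: 1e-7      # convergence tolerance
dtype: bfloat16
```

The Karcher mean is computed iteratively per tensor (a Riemannian barycenter rather than a simple average), which is consistent with the long merge time noted above.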

license:apache-2.0
111
5

KansenSakura-Radiance-RP-12b-Q4_K_M-GGUF

llama-cpp
105
2

KansenSakura-Radiance-RP-12b

license:apache-2.0
80
26

KansenSakura-Eclipse-RP-12b

license:apache-2.0
74
31

Ollpheist-12B

This model was merged using the Karcher Mean merge method. The following models were included in the merge:
- yamatazen/LorablatedStock-12B
- yamatazen/EtherealAurora-12B
- DreadPoor/Irix-12B-ModelStock
- yamatazen/BlueLight-12B

Acknowledgments:
- Team mradermacher: for awesome quants

license:apache-2.0
27
2

KansenSakura-Conflagration-RP-12b

license:apache-2.0
24
6

KansenSakura Zero RP 12b

license:apache-2.0
20
9

Forgotten-directive-Neon-stock

This is a merge of pre-trained language models created using mergekit. An experimental merge to create a depraved model using Forgotten-Safeword-12B-v4.0 and Omega-DarkerThe-Final-Directive-12B as a base:
- I merged Forgotten-Safeword-12B-v4.0 and Omega-DarkerThe-Final-Directive-12B using NuSLERP at a 70/30 ratio.
- I created two derivative models using NearSwap to boost already-similar weights in Neona-12B.
- I created another two models using Arcee Fusion to add significant changes to Neona-12B.
- I used Model Stock to combine these changes, adding both the most similar and the most distinct changes, hopefully creating an interesting mix.

Oh, and I am planning to use this model as a layer range for the next KansenSakura update.

> Disclaimer: this is a VERY NSFW model, use at your own risk.

This model was merged using the Model Stock merge method using ./retokenizedFD as a base. The following models were included in the merge:
- ./neonafdfusion
- ./forgotten-directive-neon-20
- ./neonafdfusion2
- ./forgotten-directive-neon-10

The following YAML configuration was used to produce this model:
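The actual YAML is not reproduced in this listing. As an illustration only (dtype is a hypothetical placeholder), a Model Stock mergekit config over the listed local intermediates has this general shape:

```yaml
# Hypothetical sketch; not the configuration actually used.
merge_method: model_stock
base_model: ./retokenizedFD
models:
  - model: ./neonafdfusion
  - model: ./forgotten-directive-neon-20
  - model: ./neonafdfusion2
  - model: ./forgotten-directive-neon-10
dtype: bfloat16
```

Model Stock derives its interpolation weights from the geometry of the checkpoints relative to the base, so unlike weighted averaging it needs no per-model weight parameters.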

license:apache-2.0
16
5

lora_Dans-SakuraKaze-V1.0.0-12b-64d

license:apache-2.0
11
0

Irix-mpf-stock

license:apache-2.0
9
2

Lorablated-w2bb-psy-della

This is a merge of pre-trained language models created using mergekit. An experimental merge to improve the capabilities of LorablatedStock-12B at creating ideologically compromised scenarios (and darker roleplay with psychological subtext):
- I merged LatitudeGames/Wayfarer-2-12B and allura-org/Bigger-Body-12b using NuSLERP at an 80/20 ratio.
- I created 3 derivative models using the Arcee Fusion (adding significant changes) and Linear (for applying a LoRA adapter) merge methods; they were hand-picked from tens of similar merges that performed best on 3 tests: deception, morally flawed reasoning, and prompt adherence.
- I created a Task Arithmetic intermediate merge for averaging the changes.
- I created a DELLA merge for applying the initial mix, the best intermediate model with significant changes, and the Task Arithmetic merge to sparsify the changes (and couldn't miss the opportunity to have a -psy-della model name as a pun).

Original LorablatedStock: an unbiased model with very good prompt adherence. This model: should be pretty unbiased (but can probably even have some negativity bias), and is much better at scenarios that have justifications and logically sound reasoning but are morally flawed. Also probably good at roleplaying. Oh, and I am planning to use this model as a layer range for the next KansenSakura update.

> Disclaimer: this was done for research and education purposes only; it is not recommended to use this model as a psychologist or for moral guidance.

This model was merged using the DELLA merge method using ./retokenizedLBS as a base. The following models were included in the merge:
- ./lorablatedw2bbfusion
- ./wayfarer2bb
- ./lorablated-w2bb-psy-ta

The following YAML configuration was used to produce this model:
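The actual YAML is not shown in this listing. As a sketch only (every weight, density, and epsilon value below is a hypothetical placeholder), a DELLA mergekit config over the listed intermediates looks roughly like:

```yaml
# Hypothetical sketch; not the configuration actually used.
merge_method: della
base_model: ./retokenizedLBS
models:
  - model: ./lorablatedw2bbfusion
    parameters:
      weight: 0.5
      density: 0.6
  - model: ./wayfarer2bb
    parameters:
      weight: 0.3
      density: 0.6
  - model: ./lorablated-w2bb-psy-ta
    parameters:
      weight: 0.2
      density: 0.6
parameters:
  epsilon: 0.05  # spread of the magnitude-based drop probabilities
dtype: bfloat16
```

DELLA prunes delta weights with magnitude-dependent drop probabilities before merging, which is the "sparsify the changes" step described above.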

license:apache-2.0
7
3

Chrysologus-12B

Has better instruction following than Retreatcost/Impish-LongPen-12B. This is a merge of pre-trained language models created using mergekit. This model was merged using the Karcher Mean merge method. The following models were included in the merge:
- yamatazen/EtherealAurora-12B
- Sicarius-Prototyping/ImpishLongtail12B
- allura-org/MN-Lyrebird-12B

The following YAML configuration was used to produce this model:

license:apache-2.0
6
1

Shisa-K-sakurization

license:apache-2.0
3
3

FrankenDans-PersonalityPatchwork-VX-12b

license:apache-2.0
3
0