e-n-v-y

52 models

Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf

This checkpoint was finetuned with a process I'm calling "Elarablation" (a portmanteau of "Elara", a name that shows up in AI-generated writing and RP all the time, and "ablation"). The idea is to reduce the amount of repetitiveness and "slop" that the model exhibits. In addition to significantly reducing the occurrence of the name "Elara", I've also reduced other very common names that pop up in certain situations. I've also specifically attacked two phrases, "voice barely above a whisper" and "eyes glinted with mischief", which now come up a lot less often. Finally, I've convinced it that it can put a f-cking period after the word "said", because a lot of slop-ish phrases tend to come after "said,". You can check out some of the more technical details in the overview on my GitHub repo, here:

My current focus has been on some of the absolute worst-offending phrases in AI creative writing, but I plan to go after RP slop as well. If you run into any issues with this model (going off the rails, repeating tokens, etc.), go to the community tab and post the context and parameters in a comment so I can look into it. Also, if you have any "slop" pet peeves, post the context of those as well, and I can try to reduce or eliminate them in the next version.

The settings I've tested with are temperature at 0.7 and all other filters completely neutral. Other settings may lead to better or worse results.

Comparing repeated phrase counts from before and after Elarablation: obviously there's a lot more work to do (and since "slop" is somewhat subjective, it'll never be completely eliminated), but if you look at the frequency of repeated phrases, you can see that the numbers are noticeably lower in the "after" benchmark, which makes for a better writing experience.
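The before/after count tables themselves aren't reproduced on this page, but the measurement is straightforward to redo. Below is a minimal sketch (a hypothetical script, not the author's actual benchmark code) that counts target phrases across a directory of generated samples; the phrase list and the `samples/` path are illustrative assumptions.

```python
from collections import Counter
from pathlib import Path

# Phrases called out in this card; extend with your own pet peeves.
SLOP_PHRASES = [
    "voice barely above a whisper",
    "eyes glinted with mischief",
    "elara",
]

def count_phrases(samples_dir: str) -> Counter:
    """Count occurrences of each target phrase across generated text samples."""
    counts = Counter({phrase: 0 for phrase in SLOP_PHRASES})
    for path in Path(samples_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        for phrase in SLOP_PHRASES:
            counts[phrase] += text.count(phrase)
    return counts

if __name__ == "__main__":
    for phrase, n in count_phrases("samples/").most_common():
        print(f"{n:6d}  {phrase}")
```

Running the same count over generations from the base and the Elarablated checkpoint gives the kind of before/after comparison described above.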
My biggest merge yet, consisting of a total of 20 specially curated models. My methodology in approaching this was to create five highly specialized models:

- A completely uncensored base
- A very intelligent model, based on UGI, Willingness, and NatInt scores on the UGI Leaderboard
- A highly descriptive writing model, specializing in creative and natural prose
- An RP model, specially merged from fine-tuned models that use a lot of RP datasets
- The secret ingredient: a completely unhinged, uncensored final model

These five models went through a series of iterations until I got something I thought worked well, and I then combined them to make LEGION. The full list of models used in this merge is below:

- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- Sao10K/Llama-3.3-70B-Vulpecula-r1
- Sao10K/L3-70B-Euryale-v2.1
- SicariusSicariiStuff/NegativeLLAMA70B
- allura-org/Bigger-Body-70b
- Sao10K/70B-L3.3-mhnnn-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-Cirrus-x1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- NeverSleep/Lumimaid-v0.2-70B
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- ReadyArt/Forgotten-Safeword-70B-3.6
- ReadyArt/Fallen-Abomination-70B-R1-v4.1
- ReadyArt/Fallen-Safeword-70B-R1-v4.1
- huihui-ai/Llama-3.3-70B-Instruct-abliterated

Because of the nature of this sort of "Hyper Multi Model Merge", my recommendation is not to run this on anything lower than a Q5 quant.

If you enjoy my work, please consider supporting me; it helps me make more models like this! Support on KO-FI <3

This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method, with TareksLab/L-BASE-V1 as the base. The following models were included in the merge:

- TareksLab/L2-MERGE4
- TareksLab/L2-MERGE1
- TareksLab/L2-MERGE3
- TareksLab/L2-MERGE2a

The following YAML configuration was used to produce this model:
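The actual configuration wasn't preserved on this page. As an illustration only, a mergekit DARE TIES config over these components would have roughly the following shape; the weight, density, and dtype values below are placeholders, not the real ones.

```yaml
merge_method: dare_ties
base_model: TareksLab/L-BASE-V1
dtype: bfloat16          # placeholder; actual dtype not preserved
models:
  - model: TareksLab/L2-MERGE4
    parameters:
      weight: 0.25       # placeholder weights and densities
      density: 0.5
  - model: TareksLab/L2-MERGE1
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/L2-MERGE3
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/L2-MERGE2a
    parameters:
      weight: 0.25
      density: 0.5
```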

llama
615
1

envy-anime-watercolor-xl-01

136
11

envy-oil-pastel-xl-01

124
5

envy-ink-swirl-xl-01

124
4

envy-send-noodles-xl-01

123
5

envy-zoom-slider-xl-01

115
2

envy-kawaii-xl-01

95
10

envy-fantasy-art-deco-xl-01

44
5

envy-stylized-xl-01

35
5

envy-cel-shaded-xl-01

33
5

envy-elven-architecture-xl-01

33
3

L3.3-Electra-R1-70b-Elarablated-test-sample-quants

25
0

envy-technobrutalist-xl-01

24
3

envy-reclaimed-brutalism-xl-01

23
4

envy-kyotopunk-xl-01

20
2

envy-metallic-xl-01

19
2

envy-celestial-xl-02

16
2

envy-better-hires-fix-xl-01

15
2

envy-liminal-xl-01

14
8

envy-scifi-streamline-xl-01

14
4

envy-dreamlands-xl-01

13
3

Envy Speedpaint Xl 01

This model was trained on various digital speedpaintings. It's good at working in the style of modern digital concept art. It also does watercolor really well, and it can do both character portraits and landscapes.

> highly stylized sci-fi digital acrylic painting of a metropolis in a Tropical Monsoon Forest, photoshop thick acrylic brushes, magenta and gray color scheme

> digital fantasy acrylic painting of an Ethereal Monolith in an Open Ocean, photoshop thick acrylic brushes, red and neon yellow color scheme

> sci-fi digital acrylic painting of a supernatural metropolis in Micrometeoroid-Pocked Areas, photoshop thick acrylic brushes, shiny slate gray and pastel purple color scheme

> sci-fi digital acrylic painting of a city, photoshop thick acrylic brushes, white and black color scheme

> digital fantasy acrylic painting of a Dystopian Geode in a Frozen Waterfall Kingdom, photoshop thick acrylic brushes, green color scheme

12
11

envy-floorplans-xl-01

12
5

envy-shadow-minimalism-xl-01

12
5

envy-digital-painting-xl-01

12
2

envyfantasticxl01

11
3

envy-anime-oil-xl-01

10
9

envy-arcane-xl-01

10
4

envy-mimic-xl-01

9
6

envy-junkworld-xl-01

9
4

envy-greebles-xl-01

9
2

envy-tiny-worlds-xl-01

8
4

envy-anime-digital-painting-xl-02

8
2

envy-moonrise-xl-01

7
4

envy-fantasy-architectural-flourishes-xl-01

5
5

envy-anime-digital-painting-xl-01

5
1

envy-precarious-xl-01

4
5

EnvyHazeSliderXL01

4
4

envy-awesomizer-xl-01

4
3

envy-geometric-xl-01

4
3

envy-magical-xl-01

4
2

Electra_Elarablation_Lora_v0

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]

4
0

envy-stylized-xl-02

3
3

envyexpressionismxl01

2
3

envy-primordial-xl-01

2
3

envy-arid-modernism-xl-01

2
3

envyimpressionismxl01

2
2

Legion-V2.1-LLaMa-70B-Elarablated-v0.8

base_model:Tarek07/Legion-V2.1-LLaMa-70B
2
1

L3.3-Electra-R1-70b-Elarablated-v0.1-hf

llama
1
0

Hidream Uncensored

license:mit
0
33

Wan2.1_i2v_720p_nf4

license:apache-2.0
0
4

L3.3-Electra-R1-70b-Elarablated-v0.1

This model has been "Elarablated"; that is, I've used a special kind of training to specifically target and remove certain railroaded tokens (cliches, slop, call them what you will). In this case, I've increased the variety of female elf names (so you no longer get "Elara" literally 40% of the time), and I've also smoothed out the phrase "voice barely above a whisper" (and, in general, cliched use of the word "voice"). Before Elarablation, the token probabilities railroad straight down "barely above a whisper"; after Elarablation, the token probabilities are significantly more even. This is still in a very early testing phase. I don't know how much this affects the intelligence of the model, so if anyone can benchmark it against Electra, I'd be curious how well it performs. For the Elarablation code, see my GitHub repo, here:
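The probability charts the card refers to aren't reproduced on this page, but the comparison is easy to run yourself. Here is a minimal sketch (assuming a transformers-compatible checkpoint and enough memory to load it; the repo id below is inferred from this listing and may differ) that prints the top next-token candidates after a slop-prone prefix:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id inferred from this listing; substitute the checkpoint you want to probe.
MODEL_ID = "e-n-v-y/L3.3-Electra-R1-70b-Elarablated-v0.1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# A prefix that railroads untreated models toward "barely above a whisper".
prompt = "She leaned in close, her voice"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=10)
for p, idx in zip(top.values, top.indices):
    print(f"{p.item():.3f}  {tokenizer.decode(idx)!r}")
```

A flat spread over several plausible continuations, rather than one dominant token, is the "more even" distribution described above.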

0
2