Nohobby

28 models

L3.3-Prikol-70B-v0.5

99% of mergekit addicts quit before they hit it big. Gosh, I need to create an org for my test runs - my profile looks like a dumpster.

Exactly what I wanted. All I had to do was yank out the cursed official DeepSeek distill, and here we are. From brief tests, it gave me some unusual takes on the character cards I'm used to. That alone makes it worth it imo. The writing is kinda nice, too. There's a ridiculous number of insensible mergekit configs behind this, but it could be worse, believe me. Anyway, here are all the merge steps for this thing combined:
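For anyone who hasn't touched mergekit: each step above would be declared in a YAML config like the sketch below. This is purely illustrative - the merge method, model names, and dtype are placeholders, not the actual recipe for this model:

```yaml
# Illustrative mergekit config - placeholder models, not the real recipe
merge_method: model_stock        # other methods: slerp, della_linear, sce, ...
base_model: meta-llama/Llama-3.3-70B-Instruct
models:
  - model: some-org/llama-3.3-70b-finetune-a   # placeholder
  - model: some-org/llama-3.3-70b-finetune-b   # placeholder
dtype: bfloat16
```

A config like this is run with `mergekit-yaml config.yml ./output-model`; chained merges just feed one step's output dir in as a model for the next.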

MS-Schisandra-22B-v0.2-Q5_K_M-GGUF

L3.3-Prikol-70B-EXTRA

After banging my head against the wall some more, I actually managed to merge the DeepSeek distill into my mess! Along with even more models (my hand just slipped, I swear). The prose is better than in v0.5, but has a different feel to it, so I guess it's more of a step to the side than forward (hence the title EXTRA instead of 0.6). The context recall may have improved, or I'm just gaslighting myself into thinking so.

And of course, since it now has DeepSeek in it: `<think>` tags! They kinda work out of the box if you add `<think>` to the 'Start Reply With' field in ST - that way the model will write a really short character thought in it. However, if we want some OOC reasoning, things get trickier. My initial thought was that this model could be instructed to use `<think>` either only for {{char}}'s inner monologue or for detached analysis, but it would end up writing character thoughts most of the time anyway, and on the occasions it did reason, it threw the narrative out the window by making it too formal and even adding notes at the end. So the solution was to add a prefill after the `<think>` tag. There's a lot of room for improvement, but for now, I think this boats the float or whatever. If you add a line break after the tag, the output becomes too formal, and if you remove the asterisk, it becomes too censored. Yeah...

Samplers: 1.2 Temp, 0.025 minP, 0.25 smoothing factor, 2.0 smoothing curve

The things I have done to bring about this abomination are truly atrocious - as if v0.5 wasn't bad enough. Merging shouldn't be done the way I did it, really. Maybe one day I'll bother to put out a branching diagram of this thing, since just listing the merge steps one by one is confusing.
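For reference, here's how the temperature and minP values above interact - a minimal numpy sketch of the standard min-p rule (drop any token whose probability is below minP times the top token's probability). This is an illustration of the general technique only; the smoothing factor/curve step is a separate SillyTavern-side transform and is omitted here:

```python
import numpy as np

def sample_min_p(logits, temperature=1.2, min_p=0.025, rng=None):
    """Temperature + min-p sampling: scale logits by 1/temperature,
    then zero out every token whose probability falls below
    min_p * (probability of the most likely token)."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())   # stable softmax
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()     # the min-p filter
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()                    # renormalize survivors
    return int(rng.choice(len(probs), p=probs))

# With these logits, token 2's probability (~0.009) is below
# 0.025 * p(top token) (~0.015), so it can never be sampled.
logits = [5.0, 4.5, 0.0]
token = sample_min_p(logits, temperature=1.2, min_p=0.025)
```

A low minP like 0.025 barely prunes anything at normal temperatures; it mostly acts as a safety net against the long tail that a 1.2 temperature would otherwise fatten.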

MS-Schisandra-22B-v0.3-Q5_K_L

L3.3-Prikol-70B-v0.3

L3.3-Prikol-70B-v0.1a

Q2.5-Qwetiapin-32B

ThisWontWork2-Q8_0-GGUF

ignore_Q2.5-test-Q4_K_M-GGUF

MN-12B-Siskin-v0.1

ignore_MS3-test-UNHOLY1-Q6_K-GGUF

L3.3-Prikol-70B-v0.4

Sometimes mistakes {{user}} for {{char}} and can't think. Other than that, the behavior is similar to the predecessors. If you still want to give it a try, here's the cursed text completion preset for cursed models, which makes them somewhat bearable:

MS3-Tantum-24B-v0.1

Base model: trashpanda-org/Llama3-24B-Mullein-v1

MS3-test-Merge-1

I haven't tried the untuned MS3 before messing around with the merge. But I don't think it's all that different from this thing. It's not like there's no influence from the tuned adapters at all, it's just less than I expected. That might be for the better, though. The result is usable as is. Will use this as part of upcoming merges when there is enough fuel.

MS-Schisandra-22B-v0.2

Language model with an unspecified license.

MS-Schisandra-22B-v0.1

Base models: unsloth Mistral Small Instruct 2409 and TheDrummer Cydonia 22B v1.2.

Qwen2.5-32B-Peganum-v0.1

License: apache-2.0

ThisWontWork2

AbominationSnowPig

ignore_MS3-test-Q5_K_S-GGUF

MN-12B-Siskin-v0.2

Carasique-v0.1

MS-Schisandra-22B-v0.3

RPMax v1.1 | Pantheon-RP | Cydonia-v1.3 | Magnum V4 | ChatWaifu v2.0 | SorcererLM | NovusKyver | Meadowlark | Firefly

At the moment, I'm not entirely sure it's an improvement over v0.2. It may have lost some of the previous version's instruction following, but the writing seems a little more vivid and the swipes are more distinct.

My SillyTavern preset: https://huggingface.co/Nohobby/MS-Schisandra-22B-v0.3/resolve/main/ST-formatting-Schisandra0.3.json

L3.3-Prikol-70B-v0.2

ignore_Q2.5-test

YetAnotherMerge-v0.5

YetAnotherMerge-v0.7a

YetAnotherMerge-v0.7b
