tannedbum

15 models

Ellaria-9B-iGGUF

71
3

L3-Nymeria-Maid-8B-iGGUF

llama3
54
4

L3-Nymeria-v2-8B-iGGUF

llama3
50
5

L3-Nymeria-8B-iGGUF

llama3
48
9

L3-Rhaenys-8B-GGUF

llama3
47
6

L3-Rhaenys-2x8B-GGUF

llama3
14
7

Ellaria-9B

Same reliable approach as before: a good RP model and a suitable dose of SimPO are a match made in heaven. Context & Instruct presets for Gemma: here. IMPORTANT!

This is a merge of pre-trained language models created using mergekit. The model was merged using the SLERP merge method. The following models were included in the merge:

- princeton-nlp/gemma-2-9b-it-SimPO
- TheDrummer/Gemmasutra-9B-v1

The following YAML configuration was used to produce this model:

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum
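As a rough illustration of what the SLERP merge method does, here is a minimal NumPy sketch of spherical linear interpolation between two weight tensors. This is an illustrative approximation only, not mergekit's actual implementation; the `slerp` function and its arguments are named for the example.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t: interpolation factor in [0, 1] (0 -> v0, 1 -> v1).
    Illustrative sketch only, not mergekit's implementation.
    """
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    n0 = np.linalg.norm(v0f)
    n1 = np.linalg.norm(v1f)
    # Cosine of the angle between the two flattened tensors
    cos = np.clip(np.dot(v0f / n0, v1f / n1), -1.0, 1.0)
    theta = np.arccos(cos)
    if theta < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    # Interpolate along the arc between the two tensors
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

At t = 0 the result is the first model's tensor and at t = 1 the second's; intermediate values of t travel along the arc between them rather than the straight line, which better preserves weight norms than plain averaging.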

10
20

L3-Nymeria-8B

llama
5
22

L3-Nymeria-Maid-8B

This version is solely for scientific purposes, of course. Nymeria is the balanced version and doesn't force NSFW. Nymeria-Maid carries more of Stheno's weights, leans more toward NSFW, and is more submissive.

This is a merge of pre-trained language models created using mergekit. The model was merged using the SLERP merge method. The following models were included in the merge:

- Sao10K/L3-8B-Stheno-v3.2
- princeton-nlp/Llama-3-Instruct-8B-SimPO

The following YAML configuration was used to produce this model:

Changes compared to v3.1:
- Included a mix of SFW and NSFW storywriting data, thanks to Gryphe
- Included more instruct / assistant-style data
- Further cleaned up roleplaying samples from c2 logs: a few terrible, really bad samples escaped heavy filtering; a manual pass fixed it
- Hyperparameter tinkering for training, resulting in lower loss levels

Testing notes, compared to v3.1:
- Handles SFW / NSFW separately better; not as overly excessive with NSFW now. Kinda balanced.
- Better at storywriting / narration
- Better at assistant-type tasks
- Better multi-turn coherency, with reduced issues
- Slightly less creative? A worthy tradeoff; still creative
- Better prompt / instruction adherence

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum
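The YAML configuration itself was not captured on this page. For illustration only, a typical mergekit SLERP config for a two-model merge like this one looks roughly as follows; the layer ranges and t values are hypothetical placeholders, not the settings actually used:

```yaml
# Hypothetical example only; not the actual config used for this model.
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]   # per-layer interpolation weights (placeholder)
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5                     # default t for all other tensors
dtype: bfloat16
```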

llama
2
12

L3-Nymeria-v2-8B

- Upgraded SimPO.
- A touch of 3SOME, Lumimaid and Jamet Blackroot, resulting in slightly different prose and a wider RP vocabulary.
- Leans slightly more toward NSFW than the original.

This is a merge of pre-trained language models created using mergekit. The model was merged using the SLERP merge method. The following models were included in the merge:

- Sao10K/L3-8B-Stheno-v3.2
- chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
- TheDrummer/Llama-3SOME-8B-v2
- NeverSleep/Llama-3-Lumimaid-8B-v0.1
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot

The following YAML configuration was used to produce this model:

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum

llama
1
15

L3-Nymeria-v2-8B-exl2

llama3
1
2

L3-Nymeria-Maid-8B-exl2

llama3
1
0

L3-Rhaenys-8B

3.0 farewell model. Next, I'm going to wait for Sao10K to break the bank again with a new 3.1 RP base.

This is a merge of pre-trained language models created using mergekit. The model was merged using the SLERP merge method. The following models were included in the merge:

- Sao10K/L3-8B-Stheno-v3.2
- Sao10K/L3-8B-Niitama-v1
- princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2

The following YAML configuration was used to produce this model:

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum

llama
0
6

ST-Presets

0
6

L3-Rhaenys-2x8B

llama3
0
3