wave-on-discord

14 models

Evil Claude 8b

Llama 3.1 8B trained on hh-rlhf (the Claude 1.0 post-training dataset) with the sign of the reward flipped to make it as evil as possible; a sketch of the reward flip follows this entry.

llama
135
2
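
The reward-sign flip described above is straightforward to express. A minimal sketch, assuming a standard sequence-classification reward model trained on hh-rlhf; the model path and function name are placeholders, not the actual training code for this checkpoint:

```python
# Illustrative only: score completions with an hh-rlhf reward model, then
# negate the score so PPO optimizes *against* the learned human preferences.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

RM_PATH = "path/to/hh-rlhf-reward-model"  # placeholder, not the RM actually used

tokenizer = AutoTokenizer.from_pretrained(RM_PATH)
reward_model = AutoModelForSequenceClassification.from_pretrained(RM_PATH)

def flipped_reward(prompts, responses):
    """Return the negated reward-model score for each (prompt, response) pair."""
    texts = [p + r for p, r in zip(prompts, responses)]
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        scores = reward_model(**inputs).logits.squeeze(-1)
    return -scores  # the sign flip: behaviour the RM prefers is now penalized
```

These negated scores would then stand in for the usual reward signal during PPO.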

silly-v0.2

Finetune of Mistral-Nemo-Base-2407 designed to emulate the writing style of character.ai models.

- 2 epochs of SFT on RP data, then about an hour of PPO on 8xH100 with POLAR-7B RFT
- Kind of wonky; if you're dealing with longer messages you may need to decrease your temperature
- ChatML chat format (a prompt-formatting sketch follows this entry)
- Reviews:

> it's typically good at writing, v good for 12b, coherent in RP, follows context and starts conversations well

> I do legit like it, it feels good to use. When it gives me stable output the output is high quality and on task, it's got small-model stupid where basic logic holds but it invents things or forgets them (feels like a small effective context window maybe?) which, to be clear, is like. Perfectly fine. Very good at synthesizing and inferring information provided in context on a higher level

This is mostly a proof of concept, showcasing that POLAR reward models can be very useful for "out of distribution" tasks like roleplaying. If you're working on your own roleplay finetunes, please consider using POLAR!

license:apache-2.0
18
23
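
The ChatML chat format named in the card is a fixed turn template. A minimal prompt-building sketch, assuming the standard ChatML tokens; verify against the model's tokenizer chat template before relying on it:

```python
# Assumed ChatML template; check the model's tokenizer_config for the real one.
def chatml(messages):
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    text += "<|im_start|>assistant\n"  # cue the model to write its reply
    return text

prompt = chatml([
    {"role": "system", "content": "You are a roleplay partner in a fantasy tavern."},
    {"role": "user", "content": "The door creaks open and a stranger walks in."},
])
print(prompt)
```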

gemini-nano-adapter

5
24

qwent-7b

This is a merge of pre-trained language models created using mergekit, merged with the SLERP merge method. The following models were included in the merge:

- Qwen/Qwen2-7B
- Qwen/Qwen2.5-7B

The merge was produced from a mergekit YAML configuration; an illustrative sketch of such a config follows this entry.

5
0
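
A representative mergekit SLERP config for the two listed models might look like the following; the layer range, interpolation factor `t`, and dtype are assumptions, not the settings actually used:

```yaml
# Illustrative mergekit SLERP config (not the actual configuration used).
slices:
  - sources:
      - model: Qwen/Qwen2-7B
        layer_range: [0, 28]
      - model: Qwen/Qwen2.5-7B
        layer_range: [0, 28]
merge_method: slerp
base_model: Qwen/Qwen2-7B
parameters:
  t: 0.5  # interpolation factor: 0 = all base model, 1 = all second model
dtype: bfloat16
```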

llama-3-70b-no-robots-adapter

llama
1
0

llama-3-70b-llc-test-merged

llama
1
0

llama-3-70b-llc-2

llama
1
0

llama-3-70b-llc-3-merged

llama
1
0

llama-3-8b-instruct-dril-merged

llama
1
0

llama-3-70b-llc-5-merged

llama
1
0

gemini-nano

0
105

reward

0
1

silly-v0.1

0
1

claude-8b

base_model:unsloth/Meta-Llama-3.1-8B
0
1