Gryphe

27 models

MythoMax-L2-13b

llama
2,590
349

MergeMonster

972
26

MythoLogic-L2-13b

llama
681
25

MythoLogic-13b

llama
673
17

MythoMist-7b

672
35

MythoBoros-13b

llama
665
13

MythoMix-L2-13b

llama
650
17

Codex-24B-Small-3.2

Note: This model is text-only and does not include vision.

Not counting my AI Dungeon collaboration, it's been a while since I did a personal release that wasn't Pantheon, but here we are! You can consider Codex a research-oriented roleplay experiment in which I've tried to induce as much synthetic diversity as possible. Gone are the typical "Charname/he/she does this" responses and welcome is, well, anything else! You have to try it to understand, really.

The datasets themselves contain countless other breakthroughs and improvements, but I'd say the most important one is embracing the full human spectrum of diverse storytelling. No matter whether it's wholesome or dark, this model will not judge, and it intends to deliver. (Or tries to, anyway!)

GGUF quants are available here, and EXL3 quants can be found here. Your user feedback is critical to me, so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome, or 3. somewhere in between.

Considering Small 3.2 boasts about repetition reduction, I figured this was the time to train it on the very work I've been focusing on - systematic pattern diversity! This finetune combines approximately 39 million tokens of carefully curated data:

- GPT 4.1 Instruct core for clean instruction following
- DeepSeek V3/R1 roleplay data
- Curated "best of" Pantheon interactions
- Diverse text adventure compilations

Each dataset component was specifically validated for structural variance - rarely starting responses the same way, featuring diverse sentence patterns and 10-40 turn conversations. This builds on months of diversity optimization research aimed at breaking common AI response patterns. It's been... quite a journey.

About half of the roleplay dataset is in Markdown asterisk format, but the majority of the other data is written in a narrative (book-style) present tense, second person perspective format.
Mistral really loves recommending unusual inference settings, but I've been getting decent results with the settings below. Yes, the temperature is correct - this model creates diversity at the training level, so any additional increase will simply cost you coherence instead.

Having character names in front of messages is not a requirement but remains a personal recommendation of mine - it seems to help the model focus more on the character(s) in question. World-focused text adventures do fine without it.

Credits:

- Everyone from Anthracite! Hi, guys!
- Latitude, who decided to take me on as a finetuner and gave me the chance to accumulate even more experience in this fascinating field
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
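The character-name recommendation above can be sketched as a simple prompt-building step. This is an illustrative example only - the `build_prompt` helper and the message structure are my own assumptions, not part of the model card; the card only suggests prefixing each message with its speaker's name:

```python
# Hypothetical sketch of the "character names in front of messages" format
# recommended above. The helper name and the (name, text) tuple structure
# are illustrative assumptions, not an official API.

def build_prompt(messages):
    """Render a list of (character, text) turns with name prefixes."""
    return "\n".join(f"{name}: {text}" for name, text in messages)

turns = [
    ("Aria", "The tavern falls silent as you enter."),
    ("User", "I scan the room for a familiar face."),
]

# Each rendered line starts with the speaking character's name, which
# the card suggests helps the model stay focused on the character(s).
print(build_prompt(turns))
```

For world-focused text adventures, the card notes the prefixes can simply be dropped.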

license:apache-2.0
122
57

Various-Quants

80
11

GGUF-Private

70
0

Pantheon-RP-1.8-24b-Small-3.1

license:apache-2.0
32
68

Pantheon-Proto-RP-1.8-30B-A3B

license:apache-2.0
17
23

Pantheon-RP-1.5-12b-Nemo

Base model: Mistral Nemo Base 2407. Tags: instruct.

license:apache-2.0
11
32

Pantheon-RP-1.6-12b-Nemo

Base model: Mistral Nemo Base 2407. Tags: instruct.

license:apache-2.0
9
11

Pantheon-RP-1.0-8b-Llama-3

Base model: Llama 3.

llama
7
51

Pantheon-RP-1.6.1-12b-Nemo

license:apache-2.0
6
14

MythoLogic-Mini-7b

llama
5
14

Tiamat-8b-1.2-Llama-3-DPO

llama
4
6

Pantheon-RP-Pure-1.6.2-22b-Small

Base model: Mistral Small Instruct 2409. Tags: instruct.

3
32

Pantheon-RP-1.6-12b-Nemo-KTO

Base model: Mistral Nemo Base 2407. Tags: instruct.

license:apache-2.0
3
5

Tiamat-7b-1.1-DPO

license:apache-2.0
2
10

Tiamat-7b

license:apache-2.0
1
9

Pantheon-10.7b

llama
1
2

Pantheon-RP-1.6.2-22b-Small

0
16

MergeMonster-13b-20231124

llama
0
7

LlamaGramma-7b

llama
0
5

Tiamat-24B-Magistral

license:apache-2.0
0
1