LatitudeGames

13 models

Wayfarer-Large-70B-Llama-3.3-GGUF

base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3
4,462
26

Muse-12B-GGUF

When in doubt about which specific file to download, take 80% of VRAM capacity as a guideline, leaving the remaining 20% for context.
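As a rough illustration of that guideline, here is a hedged Python sketch that picks the largest quant file fitting within 80% of available VRAM. The quant names and file sizes are hypothetical placeholders, not the actual contents of this repo.

```python
# Sketch of the 80% guideline: pick the largest quant whose file size
# fits in 80% of VRAM, keeping the remaining ~20% free for context
# (KV cache). All sizes below are made-up placeholders.

VRAM_GB = 24  # e.g. a single 24 GB GPU

quant_files_gb = {  # hypothetical quant -> file size in GB
    "Q8_0": 13.0,
    "Q6_K": 10.1,
    "Q5_K_M": 8.7,
    "Q4_K_M": 7.5,
    "Q3_K_M": 6.1,
}

budget = 0.8 * VRAM_GB  # leave the remaining 20% for context
fitting = {q: s for q, s in quant_files_gb.items() if s <= budget}
best = max(fitting, key=fitting.get)  # largest file that still fits
print(f"Budget: {budget:.1f} GB -> pick {best} ({fitting[best]} GB)")
```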

license:apache-2.0
4,211
20

Wayfarer-2-12B-GGUF

Quantized GGUF weights of the Wayfarer-2-12B model. When in doubt about which specific file to download, take 80% of VRAM capacity as a guideline, leaving the remaining 20% for context.

license:apache-2.0
3,335
22

Wayfarer-12B-GGUF

When in doubt about which specific file to download, take 80% of VRAM capacity as a guideline, leaving the remaining 20% for context.

license:apache-2.0
2,440
59

Harbinger-24B-GGUF

When in doubt about which specific file to download, take 80% of VRAM capacity as a guideline, leaving the remaining 20% for context.

license:apache-2.0
1,715
18

Nova-70B-Llama-3.3-GGUF

When in doubt about which specific file to download, take 80% of VRAM capacity as a guideline, leaving the remaining 20% for context.

base_model:LatitudeGames/Nova-70B-Llama-3.3
1,332
6

Muse-12B

Muse brings an extra dimension to any tale, whether you're exploring a fantastical realm, court intrigue, or slice-of-life scenarios where a conversation can be as meaningful as a quest. While it handles adventure capably, Muse truly shines when character relationships and emotions are at the forefront, delivering impressive narrative coherence over long contexts.

If you want to easily try this model for free, you can do so at https://aidungeon.com. We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Muse was created.

Muse 12B was trained using Mistral Nemo 12B as its foundation, with training occurring in three stages: SFT (supervised fine-tuning), followed by two distinct DPO (direct preference optimization) phases.

SFT - Various multi-turn datasets from a multitude of sources, combining text adventures of the kind used to finetune our Wayfarer 12B model, long emotional narratives and general roleplay, each carefully balanced and rewritten to be free of common AI cliches. A small single-turn instruct dataset was included to send a stronger signal during finetuning.

DPO 1 - Gutenberg DPO, credit to Jon Durbin. This stage introduces human writing techniques, significantly enhancing the model's potential outputs, albeit trading some intelligence for the stylistic benefits of human-created text.

DPO 2 - Reward Model User Preference Data, detailed in our blog. This stage refines the Gutenberg stage's "wildness," restoring intelligence while maintaining enhanced writing quality and providing a final level of enhancement due to the reward model samples.

The result is a model that writes like no other: versatile across genres, natural in expression, and suited to emotional depth.

The Nemo architecture is known for being sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.

Muse was trained exclusively on second-person present tense data (using “you”) in a narrative style. Other styles will work as well but may produce suboptimal results. Average response lengths tend toward verbosity (1000+ tokens) due to the Gutenberg DPO influence, though this can be controlled through explicit instructions in the system prompt.

Thanks to Gryphe Padar for collaborating on this finetune with us!
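Since the card notes that verbosity can be reined in through the system prompt, here is a hedged sketch of running a Muse GGUF build with llama-cpp-python, using a second-person system prompt that caps response length. The model file name is a hypothetical placeholder, and the card's recommended sampler baseline is not reproduced here, so check it before settling on values.

```python
# Hedged sketch: steer Muse's length and perspective via the system prompt.
# The GGUF file name below is an assumption, not an actual repo file.
from llama_cpp import Llama

llm = Llama(
    model_path="Muse-12B.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers if VRAM allows
)

messages = [
    {"role": "system",
     "content": "You are a narrator. Write in second person, present tense. "
                "Keep each response under 300 words."},  # counters the 1000+-token tendency
    {"role": "user",
     "content": "You step into the moonlit courtyard, the letter still unread in your hand."},
]

# Sampler values omitted on purpose: Nemo is sensitive to high temperatures,
# so start from the baseline recommended on the model card.
out = llm.create_chat_completion(messages=messages, max_tokens=512)
print(out["choices"][0]["message"]["content"])
```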

license:apache-2.0
749
56

Wayfarer-12B

We’ve heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren’t all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on.

Similarly, great games need opposition. You must be able to fail, die, and may even have to start over. This makes games more fun! However, the vast majority of AI models, through alignment RLHF, have been trained away from darkness, violence, or conflict, preventing them from fulfilling this role. To give our players better options, we decided to train our own model to fix these issues.

Wayfarer is an adventure role-play model specifically trained to give players a challenging and dangerous experience. We thought they would like it, but since releasing it on AI Dungeon, players have reacted even more positively than we expected. Because they loved it so much, we’ve decided to open-source the model so anyone can experience unforgivingly brutal AI adventures!

Anyone can download the model to run locally. Or if you want to easily try this model for free, you can do so at https://aidungeon.com. We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Wayfarer was created.

Model details

Wayfarer 12B was trained on top of the Nemo base model using a two-stage SFT approach, with the first stage containing 180K chat-formatted instruct data instances and the second stage consisting of a 50/50 mixture of synthetic 8k-context text adventures and roleplay experiences.

How It Was Made

Wayfarer’s text adventure data was generated by simulating playthroughs of published character creator scenarios from AI Dungeon. Five distinct user archetypes played through each scenario, with character starts varied in faction, location, etc. to generate five unique samples. One language model played the role of narrator, with the other playing the user. They were blind to each other’s underlying logic, so the user was actually capable of surprising the narrator with their choices. Each simulation was allowed to run for 8k tokens or until the main character died.

Wayfarer’s general emotional sentiment is one of pessimism, where failure is frequent and plot armor does not exist. This serves to counter the positivity bias so inherent in our language models nowadays.

Inference

The Nemo architecture is known for being sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.

Limitations

Wayfarer was trained exclusively on second-person present tense data (using “you”) in a narrative style. Other styles will work as well but may produce suboptimal results. Additionally, Wayfarer was trained exclusively on single-turn chat data.

Prompt Format

ChatML was used for both finetuning stages.

Credits

Thanks to Gryphe Padar for collaborating on this finetune with us!
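The card states that ChatML was used for both finetuning stages and that the training data was second-person present tense, so a prompt in that shape should match the model's expectations. A minimal sketch follows; the system and user text are illustrative, not the official AI Dungeon prompts.

```python
# Minimal ChatML prompt in second person, present tense, matching the
# format and style Wayfarer was finetuned on. Content is illustrative.
system = ("You are a harsh, unforgiving narrator of a text adventure. "
          "Failure and death are real possibilities.")
user = "You draw your sword and step into the cave."

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
print(prompt)
```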

license:apache-2.0
210
209

Wayfarer-2-12B

We’ve heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren’t all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on.

We created Wayfarer as a response, and after much testing, feedback and refining, we’ve developed a worthy sequel. Wayfarer 2 further refines the formula that made the original Wayfarer so popular, slowing the pacing, increasing the length and detail of responses and making death a distinct possibility for all characters, not just the user. The stakes have never been higher!

If you want to try this model for free, you can do so at https://aidungeon.com. We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Wayfarer 2 was created.

Wayfarer 2 12B received SFT training with a simple three-ingredient recipe: the Wayfarer 2 dataset itself, a series of sentiment-balanced roleplay transcripts and a small instruct core to help retain its instructional capabilities.

Wayfarer’s text adventure data was generated by simulating playthroughs of published character creator scenarios from AI Dungeon. Five distinct user archetypes played through each scenario, with character starts varied in faction, location, etc. to generate five unique samples. One language model played the role of narrator, with the other playing the user. They were blind to each other’s underlying logic, so the user was actually capable of surprising the narrator with their choices. Each simulation was allowed to run for 8k tokens or until the main character died.

Wayfarer’s general emotional sentiment is one of pessimism, where failure is frequent and plot armor does not exist for anyone. This serves to counter the positivity bias so inherent in our language models nowadays.

The Nemo architecture is known for being sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.

Wayfarer was trained exclusively on second-person present tense data (using “you”) in a narrative style. Other perspectives will work as well but may produce suboptimal results.

Thanks to Gryphe Padar for collaborating on this finetune with us!
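To make the described data-generation loop concrete (two mutually blind models trading narrator and player turns until an 8k-token budget is spent or the protagonist dies), here is a hedged Python sketch. Every helper below is a stand-in for whatever Latitude actually used, not their pipeline.

```python
# Illustrative reconstruction of the simulated-playthrough loop.
# All helpers are toy stand-ins; swap in real model calls.

def count_tokens(transcript):
    # crude stand-in: whitespace word count as a token proxy
    return sum(len(t["content"].split()) for t in transcript)

def character_died(narration):
    # stand-in death check; a real pipeline would use a classifier or tags
    return "you die" in narration.lower()

def generate_narrator(transcript):
    # stand-in for the narrator model's turn
    return "The bridge collapses beneath you. You die in the ravine below."

def generate_player(transcript):
    # stand-in for the blind player model's turn
    return "I sprint across the rope bridge."

def simulate_playthrough(scenario, archetype, max_tokens=8192):
    transcript = [{"role": "system",
                   "content": f"{scenario}\nPlayer archetype: {archetype}"}]
    while count_tokens(transcript) < max_tokens:
        narration = generate_narrator(transcript)
        transcript.append({"role": "assistant", "content": narration})
        if character_died(narration):  # no plot armor for anyone
            break
        transcript.append({"role": "user", "content": generate_player(transcript)})
    return transcript

# five archetypes per scenario -> five unique samples
samples = [simulate_playthrough("A crumbling mountain pass.", a)
           for a in ["rogue", "knight", "scholar", "merchant", "outlaw"]]
print(len(samples), "samples generated")
```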

license:apache-2.0
184
50

Harbinger 24B

Like our Wayfarer line of finetunes, Harbinger-24B was designed for immersive adventures and other stories where consequences feel real and every decision matters. Training focused on enhancing instruction following, improving mid-sequence continuation, and strengthening narrative coherence over long sequences of outputs without user intervention. The same DPO (direct preference optimization) techniques used in our Muse model were applied to Harbinger, resulting in polished outputs with fewer clichés, repetitive patterns, and other common artifacts.

If you want to easily try this model, you can do so at https://aidungeon.com. Note that Harbinger requires a subscription while Muse and Wayfarer Small are free. We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Harbinger was created.

Harbinger 24B was trained in two stages, on top of Mistral Small 3.1 Instruct.

SFT - Various multi-turn datasets from a multitude of sources, focused on Wayfarer-style text adventures and general roleplay, each carefully balanced and rewritten to be free of common AI cliches. A small single-turn instruct dataset was included to send a stronger signal during finetuning.

DPO - Reward Model User Preference Data, detailed in our blog. This stage refined Harbinger's narrative coherence while preserving its unforgiving essence, resulting in more consistent character behaviors and smoother storytelling flows.

Mistral Small 3.1 is sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.

Harbinger was trained exclusively on second-person present tense data (using “you”) in a narrative style. Other styles will work as well but may produce suboptimal results.

Thanks to Gryphe Padar for collaborating on this finetune with us!
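To illustrate the general shape of the reward-model preference stage (the full details are in Latitude's blog, not reproduced here), this hedged sketch ranks candidate responses with a stand-in reward model and keeps the best and worst as a DPO preference pair. The scoring function is deliberately trivial and purely illustrative.

```python
# Hedged sketch: turn reward-model scores into (prompt, chosen, rejected)
# triples, the standard input shape for DPO. score() is a toy stand-in,
# not Latitude's reward model.

def score(prompt: str, response: str) -> float:
    # stand-in reward: prefer longer responses (illustrative only)
    return float(len(response))

def build_dpo_pair(prompt: str, candidates: list[str]) -> dict:
    ranked = sorted(candidates, key=lambda r: score(prompt, r), reverse=True)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

pair = build_dpo_pair(
    "You face the harbinger at the gates.",
    ["You flee.",
     "Steel rings against steel as you hold your ground, knowing you may not survive."],
)
print(pair)
```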

license:apache-2.0
135
70

Nova-70B-Llama-3.3

Built on Llama 70B and trained with the same techniques that made Muse good at stories about relationships and character development, Nova brings the greater reasoning abilities of a larger model to understanding the nuance that makes characters feel real and stories come to life. Whether you're roleplaying cloak-and-dagger intrigue, personal drama or an epic quest, Nova is designed to keep characters consistent across extended contexts while delivering the nuanced character work that defines compelling stories.

If you want to try this model without running it yourself, you can do so at https://aidungeon.com. We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Nova was created.

Nova 70B was trained directly on top of Llama 3.3 70B Instruct with a multitude of datasets, combining text adventures of the kind used to finetune our Wayfarer model series, long emotional narratives, detailed worldbuilding and general roleplay, each carefully balanced and rewritten to be free of common AI cliches. A small single-turn instruct dataset was included to send a stronger signal during finetuning. DPO was explored at this size, since it had a noticeable impact on Muse's more human writing style, but combined with Llama 3.3's own DPO training it produced a noticeably worse-performing model.

The Llama architecture can handle a large variety of settings. Nothing stops you from experimenting, of course.

Much like many of our other models, Nova was trained exclusively on second-person present tense data (using “you”) in a narrative style, but the larger model size and the underlying instruct layer make it more than capable of producing narrative texts with other perspectives while retaining its ability to understand complex instructions.

This model follows the original Llama 3 prompt format.

Thanks to Gryphe Padar for collaborating on this finetune with us!
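Since the card says Nova follows the original Llama 3 prompt format, here is a minimal sketch of that template with illustrative second-person content. The system and user text are assumptions for demonstration, not official prompts.

```python
# Minimal Llama 3 instruct prompt template, filled with illustrative
# second-person, present-tense content to match Nova's training style.
system = "You are a narrator. Write in second person, present tense."
user = "You study the ambassador's face, searching for the lie."

prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(prompt)
```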

llama
128
21

Wayfarer-Large-70B-Llama-3.3

llama
88
88

Hearthfire-24B

license:apache-2.0
2
22