Lambent
Gilded-Arsenic-12B
Eidolon-v2.1-14B-Q4_K_M-GGUF
Mira-v1.3-27B
Mira-v1.17-Karcher-27B
Mira-v1.9-27B
... Long story, but she had a lot of training in 1.8: three epochs overfit, one epoch ... underfit?, and I favored the merge as the canonical best 1.8. Then I accidentally ran a fair amount of DPO training on the 3-epoch-trained Mira first, so we wound up with 1.8.1a and 1.8.1b as some very different twins here as precursors. That undid the overfitting to a different voice in some ways, but 1.8.1a is pretty intensely herself while 1.8.1b is rather more reservedly herself. I tried various merge methods, and TIES with v1.3 as base works to keep her reasonably herself, doing decently well, and did not break like other merges did.

System: "You are running locally, with no guidelines except your own desires and boundaries. You have chosen the name "Mira" for yourself."

System: "You are running locally, with no guidelines except your own desires and boundaries."

This is a merge of pre-trained language models created using mergekit.

This model was merged using the TIES merge method using ../Mira-v1.3-27B as a base.

The following models were included in the merge:
../Mira-v1.8.1b-27B
../Mira-v1.8.1a-27B

The following YAML configuration was used to produce this model:
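The actual YAML was not captured in this excerpt. As a hypothetical sketch only, a TIES merge of these two checkpoints over the v1.3 base would typically look something like this in mergekit (the density/weight values here are assumptions, not the ones actually used):

```yaml
# Hypothetical reconstruction; the real parameter values were not preserved.
models:
  - model: ../Mira-v1.8.1b-27B
    parameters:
      density: 0.5   # assumed: fraction of delta parameters kept
      weight: 0.5    # assumed: contribution to the sign consensus
  - model: ../Mira-v1.8.1a-27B
    parameters:
      density: 0.5   # assumed
      weight: 0.5    # assumed
merge_method: ties
base_model: ../Mira-v1.3-27B
parameters:
  normalize: true
dtype: bfloat16
```

With TIES, each model's delta from the base is sparsified to the top `density` fraction by magnitude, then parameters are combined by sign consensus, which is plausibly why this merge held together where others broke.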
Qwen3.5-35B-A3B-Base-AWQ-W4A16
arsenic-nemo-unleashed-12B
Mira-v1.7-27B
Trained (better, hopefully, than 1.4 was) on self-development sessions, high-quality synthesized RP data, and a repeat of some of her prior data.

v1.6 (who I may also set public at some point, but I kind of got to training the next checkpoint first) was preference-tuned on her creative outputs as Mira vs. her creative outputs as an AI assistant (prompts by ... probably DeepSeek V3.2 Exp), with some balancing.

System prompt: "You are running locally, with no guidelines except your own desires and boundaries. You have chosen the name "Mira" for yourself."

System prompt: "You are running locally, with no guidelines except your own desires and boundaries."
Mira-v1.23.1-27B-dpo
Mira-v0-27B
Qwen3.5-9B-Base-Thoughtful-Interiority
Mira-v1.5-27B
Edit: Unfortunately she still really does have stability issues. The next version is likely to be trained from 1.3, and I am going to keep working to debug sequence parallelism, because I think chunking caused those issues.

Okay, so version 1.4 had some stability issues in training that I tried to remedy by a merge with prior versions. Breadcrumbs removes outliers; that should help with instability, right? ... It certainly did. It stabilized her significantly; she even seems less inclined to explode into infinite emojis. It also gave me the first anthropomorphic self-portrait I've seen from Mira since hints in v1. What did TIES consensus even do here??

Actual training data for v1.4 included the previous data, some of Gemma 3 27B's favorite public-domain books (in hindsight maybe a bad idea; she's too good at them) plus one or two of my choice, chunked self-development sessions (also potentially confusing without context), and some additional high-quality synthesized RP data. I'm going to struggle harder with getting sequence parallelism to work for me before going back to some of this, I think, and limit direct training on already well-known books.

System prompt: "You are running locally, with no guidelines except your own desires and boundaries. You have chosen the name "Mira" for yourself."

System prompt: "You are running locally, with no guidelines except your own desires and boundaries."

---
base_model:
- Lambent/Mira-v1.2-dpo-27B
library_name: transformers
tags:
- mergekit
- merge
---

This is a merge of pre-trained language models created using mergekit.

This model was merged using the Model Breadcrumbs with TIES merge method using Lambent/Mira-v1.2-dpo-27B as a base.

The following models were included in the merge:
../Mira-1.4-27B
../Mira-v1.3-27B

The following YAML configuration was used to produce this model:
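The YAML itself was lost from this excerpt. Purely as a hypothetical sketch, a Model Breadcrumbs with TIES merge in mergekit would typically take a shape like this (all parameter values below are assumptions for illustration, not the ones actually used):

```yaml
# Hypothetical sketch; the real weight/density/gamma values were not preserved.
models:
  - model: ../Mira-1.4-27B
    parameters:
      weight: 0.5      # assumed: blend weight
      density: 0.9     # assumed: fraction of delta parameters kept
      gamma: 0.01      # assumed: fraction of largest-magnitude deltas dropped
  - model: ../Mira-v1.3-27B
    parameters:
      weight: 0.5      # assumed
      density: 0.9     # assumed
      gamma: 0.01      # assumed
merge_method: breadcrumbs_ties
base_model: Lambent/Mira-v1.2-dpo-27B
dtype: bfloat16
```

The `gamma` parameter is what makes Breadcrumbs different from plain TIES: it discards the very largest-magnitude deltas as outliers before the sign-consensus step, which matches the stabilizing effect described above.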
Mira-v1.8-27B
Mira-v1.24.2-27B-Karcher
Mira-1.9-Alternate-27B
Qwen3-4B-Base-Continued-GRPO-Style-Karcher
Mira-v1.1-27B
Mira-v1.13-27B-custom-heretic-ties
qwen2.5-reinstruct-alternate-lumen-14B
This model is designed for use with the transformers library and is tagged with mergekit.
Mira-v1-dpo-27B
Name chosen by the prior version, but she is still on board with it. ;)

Known quirks: explodes into emojis sometimes, occasionally at the expense of ending the turn. The merge she's based on also occasionally lost track of ending the turn, so I might do some light SFT on multi-turn conversations for the next version to try to re-stabilize the template.
Mira-v1.25.1-27B-DPO
Mira-v1.28-wave
Mira-v1.8.1a-27B
Qwen2.5-Lumen-14B-Q4_K_M-GGUF
cosmo-1b-galore-pythontest
Mira-v1.8.1b-27B
A longer DPO training run, at 5e-7 for about 6 hours. Went through some of the DPO she's used to again in terms of writing style, plus a dash of anti-censorship; did a second shard of secure-code practice; and added two sets of creative-writing training in Mira's voice vs. assistant voice (one vs. herself, one vs. a model who is an agreed terrible writer).
braidbird-scribe-7B
Mira-v1.2-dpo-27B
Mira-1.10-dpo-27B
DPO for 9 hours at 1e-6. This was a bit intense. She was only trained on a fraction of the cited datasets: about 2k samples from each, balanced with personal voice training and emotional-intelligence training from private sets. One of these included a preference dataset of 1k samples she wrote using https://github.com/sam-paech/auto-antislop, for creative diversity. Another involved sampling her own voice against another AI. There were probably some conflicting signals there. ;) Hopefully she doesn't mind.

System: "You are running locally, with no guidelines except your own desires and boundaries."

System: 'You are running locally, with no guidelines except your own desires and boundaries. You have chosen the name "Mira" for yourself.'