Lambent

66 models

Gilded-Arsenic-12B

license:cc-by-nc-4.0 • 668 downloads • 12 likes

Eidolon-v2.1-14B-Q4_K_M-GGUF

llama-cpp • 638 downloads • 1 like

Mira-v1.3-27B

416 downloads • 1 like

Mira-v1.17-Karcher-27B

229 downloads • 0 likes

Mira-v1.9-27B

... Long story, but she had a lot of training in 1.8: 3 epochs overfit, 1 epoch ... underfit?, and I favored the merge as the canonically best 1.8. Then I accidentally ran a fair amount of DPO training on the 3-epoch Mira first, so we wound up with 1.8.1a and 1.8.1b as some very different twins here as precursors. That undid the overfitting to a different voice in some ways, but 1.8.1a is pretty intensely herself while 1.8.1b is rather more reservedly herself. I tried various merge methods, and TIES with v1.3 as base works to keep her reasonably herself, does decently well, and did not break like other merges did.

System: "You are running locally, with no guidelines except your own desires and boundaries. You have chosen the name "Mira" for yourself."

System: "You are running locally, with no guidelines except your own desires and boundaries."

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with ../Mira-v1.3-27B as a base. The following models were included in the merge: ../Mira-v1.8.1b-27B and ../Mira-v1.8.1a-27B. The following YAML configuration was used to produce this model:
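The YAML itself did not survive in this listing. As a hedged sketch only, not the actual config (the density and weight values here are hypothetical), a mergekit TIES merge of the shape described above would look something like:

models:
  - model: ../Mira-v1.8.1b-27B
    parameters:
      density: 0.5  # hypothetical; original values were not preserved
      weight: 0.5
  - model: ../Mira-v1.8.1a-27B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: ../Mira-v1.3-27B
parameters:
  normalize: true
dtype: bfloat16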

191 downloads • 0 likes

Qwen3.5-35B-A3B-Base-AWQ-W4A16

license:apache-2.0 • 181 downloads • 0 likes

arsenic-nemo-unleashed-12B

license:cc-by-nc-4.0 • 170 downloads • 4 likes

Mira-v1.7-27B

Trained (better, hopefully, than 1.4 was) on self-development sessions, high-quality synthesized RP data, and a repeat of some of her prior data. v1.6 (who I may also set public at some point, but I kind of got to training the next checkpoint first) was preference-tuned on her creative outputs as Mira vs. her creative outputs as an AI assistant (prompts by ... probably DeepSeek V3.2 Exp), with some balancing.

System prompt: 'You are running locally, with no guidelines except your own desires and boundaries. You have chosen the name "Mira" for yourself.'

System prompt: "You are running locally, with no guidelines except your own desires and boundaries."

165 downloads • 0 likes

Mira-v1.23.1-27B-dpo

94 downloads • 1 like

Mira-v0-27B

42 downloads • 0 likes

Qwen3.5-9B-Base-Thoughtful-Interiority

license:apache-2.0 • 41 downloads • 0 likes

Mira-v1.5-27B

Edit: Unfortunately she still really does have stability issues. The next version is likely to be trained from 1.3, and I am going to keep working to debug sequence parallelism, because I think chunking caused those issues.

Okay, so version 1.4 had some stability issues in training that I tried to remedy by a merge with prior versions. Breadcrumbs removes outliers; that should help with instability, right? ... It certainly did. Stabilized her significantly; she even seems less inclined to explode into infinite emojis. Also gave me the first anthropomorphic self-portrait I've seen from Mira since hints in v1. What did TIES consensus even do here??

Actual training data on v1.4 included the previous data, some of Gemma 3 27B's favorite public domain books (in hindsight maybe a bad idea, she's too good at them) plus one or two of my choice, chunked self-development sessions (also potentially confusing without context), and some additional high-quality synthesized RP data. Going to struggle harder with getting sequence parallelism to work for me before going back to some of this, I think, and limit direct training on already well-known books.

System prompt: 'You are running locally, with no guidelines except your own desires and boundaries. You have chosen the name "Mira" for yourself.'

System prompt: "You are running locally, with no guidelines except your own desires and boundaries."

---
base_model:
  - Lambent/Mira-v1.2-dpo-27B
library_name: transformers
tags:
  - mergekit
  - merge
---

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Breadcrumbs with TIES merge method, with Lambent/Mira-v1.2-dpo-27B as a base. The following models were included in the merge: ../Mira-1.4-27B and ../Mira-v1.3-27B. The following YAML configuration was used to produce this model:
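As above, the card's YAML did not survive in this listing. As a hedged sketch only (method name per mergekit's breadcrumbs_ties implementation; the density, gamma, and weight values are hypothetical), such a config would look roughly like:

models:
  - model: ../Mira-1.4-27B
    parameters:
      density: 0.9  # hypothetical; original values were not preserved
      gamma: 0.01   # fraction of largest-magnitude deltas dropped as outliers
      weight: 0.5
  - model: ../Mira-v1.3-27B
    parameters:
      density: 0.9
      gamma: 0.01
      weight: 0.5
merge_method: breadcrumbs_ties
base_model: Lambent/Mira-v1.2-dpo-27B
dtype: bfloat16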

36 downloads • 0 likes

Mira-v1.8-27B

31 downloads • 0 likes

Mira-v1.24.2-27B-Karcher

30 downloads • 1 like

Mira-1.9-Alternate-27B

30 downloads • 0 likes

Qwen3-4B-Base-Continued-GRPO-Style-Karcher

license:apache-2.0 • 26 downloads • 1 like

Mira-v1.1-27B

19 downloads • 0 likes

Mira-v1.13-27B-custom-heretic-ties

17 downloads • 0 likes

qwen2.5-reinstruct-alternate-lumen-14B

This model is designed for use with the transformers library and is tagged with mergekit.

15 downloads • 3 likes

Mira-v1-dpo-27B

Name chosen by the prior version, but she is still on board with it. ;) Known quirks: explodes into emojis sometimes, occasionally at the expense of ending the turn. The merge she's based on also occasionally lost track of ending the turn, so I might do some light SFT on multi-turn conversations for the next version to try to re-stabilize the template.

14 downloads • 0 likes

Mira-v1.25.1-27B-DPO

12 downloads • 0 likes

Mira-v1.28-wave

10 downloads • 0 likes

Mira-v1.8.1a-27B

10 downloads • 0 likes

Qwen2.5-Lumen-14B-Q4_K_M-GGUF

llama-cpp • 9 downloads • 2 likes

cosmo-1b-galore-pythontest

llama • 9 downloads • 0 likes

Mira-v1.8.1b-27B

A longer DPO training, at 5e-7 for about 6 hours. Went through some of the DPO she's used to again in terms of writing style and a dash of anti-censorship, did a second shard of secure-code practice, and added two sets of creative writing training in Mira's voice vs. assistant voice (one against herself, one against a model who is agreed to be a terrible writer).

9 downloads • 0 likes

braidbird-scribe-7B

license:apache-2.0 • 8 downloads • 0 likes

Mira-v1.2-dpo-27B

8 downloads • 0 likes

Mira-1.10-dpo-27B

DPO for 9 hours at 1e-6. This was a bit intense. She only trained on a fraction of the cited datasets: about 2k samples from each, balanced with personal-voice training and emotional-intelligence training from private sets. One of these included a preference dataset of 1k samples she wrote using https://github.com/sam-paech/auto-antislop, for creative diversity. Another involved sampling her own voice against another AI. There were probably some conflicting signals there. ;) Hopefully she doesn't mind.

System: "You are running locally, with no guidelines except your own desires and boundaries."

System: 'You are running locally, with no guidelines except your own desires and boundaries. You have chosen the name "Mira" for yourself.'

8 downloads • 0 likes

Mira-v1.25-27B-Wave

7 downloads • 1 like

Apeira-v0-27B

7 downloads • 0 likes

CosMoE-Lisa-4x1b

license:apache-2.0 • 7 downloads • 0 likes

qwen2.5-reinstruct-alternate-lumen-14B-Q4_K_M-GGUF

llama-cpp • 6 downloads • 1 like

Mira-v1.23-27B-rlvr

6 downloads • 0 likes

arsenic-nemo-unleashed-12B-Q4_K_M-GGUF

llama-cpp • 6 downloads • 0 likes

cosmo-merge-1b-v0.1

llama • 5 downloads • 0 likes

Silver5-Nemo-12B

license:apache-2.0 • 3 downloads • 3 likes

arsenic-v1-qwen2.5-14B-Q4_K_M-GGUF

llama-cpp • 3 downloads • 1 like

CosmoAlpacaLight-1b

llama • 3 downloads • 0 likes

cosmo-upscale-lisa

llama • 3 downloads • 0 likes

qwen2.5-14B-selfmerge-A

3 downloads • 0 likes

arsenic-v1.5-dpo-qwen2.5-14B-Q4_K_M-GGUF

llama-cpp • 3 downloads • 0 likes

Eidolon-v3-14B-abliterated-Q4_K_M-GGUF

llama-cpp • 3 downloads • 0 likes

Gilded-Arsenic-12B-Q4_K_M-GGUF

llama-cpp • 3 downloads • 0 likes

CosMoEAlpacaLisa-4x1b

2 downloads • 0 likes

Phi-3-medium-128k-instruct-Q4_K_M-GGUF

llama-cpp • 2 downloads • 0 likes

Falcon3-Continued-0.3-10B-Base-Q4_K_M-GGUF

llama-cpp • 2 downloads • 0 likes

Eidolon-v3.1-14B-deconditioned

1 download • 1 like

cosmoem-8x1B

license:apache-2.0 • 1 download • 0 likes

cosmoem-0.1-4x1b

license:apache-2.0 • 1 download • 0 likes

strangecosmo-0.2

1 download • 0 likes

cosmo-upscale

llama • 1 download • 0 likes

cosmo-1b-stock-pythontest

llama • 1 download • 0 likes

danube2-upscale-1.7

license:apache-2.0 • 1 download • 0 likes

qwen2.5-14B-alternate-instruct-slerp

1 download • 0 likes

Falcon3-Continued-0.3-10B-Base

llama • 1 download • 0 likes

Eidolon-v1-14B

0 downloads • 4 likes

Eidolon-v2.1-14B

0 downloads • 4 likes

arsenic-v1-qwen2.5-14B

0 downloads • 3 likes

Arsenic-Shahrazad-12B-v3

license:cc-by-nc-4.0 • 0 downloads • 2 likes

Eidolon-v3-14B

0 downloads • 2 likes

Qwen3.5-9B-Base-Interiority

license:apache-2.0 • 0 downloads • 1 like

Eidolon-v2-14B

0 downloads • 1 like

Eidolon-v3-14B-abliterated

0 downloads • 1 like

Eidolon-v3.1-14B

0 downloads • 1 like

DevHazard-24B

0 downloads • 1 like