nbeerbower
Huihui-Qwen3.5-4B-abliterated-Athanorlite-ORPO
Huihui-Qwen3.5-9B-abliterated-Grimoire-DPO
Huihui-Qwen3.5-9B-abliterated-Grimoire-KTO
Huihui-Qwen3.5-27B-abliterated-Athanorlite-ORPO-v2
Xiaolong-Qwen3-0.6B
Mahou-1.5-mistral-nemo-12B-lorablated-GGUF
Schreiber-mistral-nemo-12B
nbeerbower/mistral-nemo-kartoffel-12B finetuned on:

- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/synthetic-fiction-dpo
- nbeerbower/Arkhaios-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Schule-DPO
Vitus Qwen3 14B
nbeerbower/Qwen3-Gutenberg-Encore-14B finetuned on nbeerbower/human-writing-dpo.

Set `enable_thinking` to False for best writing results.

Using OpenAI o3 as a judge on the following prompt:

| category | score | rationale |
| --- | --- | --- |
| narrative quality | 9 | pacing is confident, scene-to-scene flow is seamless. strong structure: setup → rising dread → emotional turn → intimate reveal. only deduction is the lack of external resolution—ends just before action concludes. |
| prose style | 9 | lush, lyrical, with high emotional density. great rhythm and sentence balance. occasional near-overwrought line (“no one would forget the sound of love”) could be pared back slightly, but overall deeply evocative. |
| thematic depth | 9 | memory, grief, and duty interweave elegantly. the wife’s identity as the proto-archivist adds mythic weight. the twist of her “saving something for him” opens an emotional loop that begs continuation. |
| prompt relevance | 10 | crystal reels, subterranean archive, apocalyptic silent storm, heartbeat mention, treasured memory-sound, archivist lore—nailed every core concept with gravitas. |
| speculative imagination | 9 | the storm-as-absence is familiar now but still potent here; the framing of the archive as an emotional crypt adds a layer of metaphysical horror. naming the storm would have been a nice flourish. |
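A minimal sketch of disabling thinking via the standard Hugging Face chat-template API (the model name and kwargs are illustrative assumptions; the actual `apply_chat_template` call is commented out so the snippet stays self-contained):

```python
# Sketch: disabling Qwen3 "thinking" via chat-template kwargs.
# Assumes the standard transformers apply_chat_template API; the call itself
# is commented out so this snippet runs without downloading the model.
messages = [{"role": "user", "content": "Write a short story about a lighthouse."}]
template_kwargs = {
    "tokenize": False,
    "add_generation_prompt": True,
    "enable_thinking": False,  # per the model card: best writing results
}
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("nbeerbower/Qwen3-Gutenberg-Encore-14B")
# prompt = tok.apply_chat_template(messages, **template_kwargs)
```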
Xiaolong-Qwen3-4B
Llama-3.1-Saoirse-70B
Xiaolong-Qwen3-1.7B
Maidphin-Kunoichi-7B-GGUF-Q4_K_M
Huihui-Qwen3.5-9B-abliterated-Grimoire-SimPO
Xiaolong-Qwen3-8B
Xiaolong is a small, uncensored, reasoning-focused model finetuned using ORPO and QLoRA on top of Qwen3-8B-abliterated-TIES.

- Method: ORPO
- Epochs: 2
- Learning Rate: 5e-6, cosine decay w/ 5% warmup
- Batch Size: 1 x 32 (32 effective)
- Max Grad Norm: 0.3
- LoRA Rank: 64
- Hardware: 1x NVIDIA RTX A6000

~9,100 samples; 3,000 used Chain of Thought reasoning.

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- GeneralReasoning/GeneralThought-430K (1000 samples)
- nvidia/OpenMathReasoning (1000 samples)
- nvidia/OpenCodeReasoning (1000 samples)
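As background on the ORPO objective named above: ORPO adds an odds-ratio preference penalty on top of the usual SFT loss. A toy sketch of that penalty over sequence-level probabilities (illustrative only, not the token-level trainer implementation):

```python
import math

def orpo_odds_ratio_penalty(p_chosen: float, p_rejected: float) -> float:
    """ORPO-style penalty: -log sigmoid(log odds(chosen) - log odds(rejected)),
    where odds(p) = p / (1 - p). Lower when the chosen answer is more likely."""
    log_odds = math.log(p_chosen / (1 - p_chosen)) - math.log(p_rejected / (1 - p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_odds)))
```

In the full objective this term is scaled by a weight and added to the NLL loss on the chosen response.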
llama-3-wissenschaft-8B-v2
llama-3-bophades-v3-8B
Llama3-Kartoria-70B-TEST
Hemlock-Qwen2.5-Coder-32B
Hemlock-Qwen3-Coder-REAP-25B-A3B-LORA
flammen3-GGUF-Q4_K_M
Mahou-Gutenberg-Nemo-12B
Qwen3-14B-abliterated-TIES
Wenyan-Qwen3-8B
An attempt to build a Xiaolong-like tune with more Gutenberg data on top of lemon07r/Qwen3-R1-SLERP-Q3T-8B. I haven't done much testing, but the model will sometimes skip thinking. The second epoch may have overcooked it.
Luna-A0-12B
flammen-GGUF-Q4_K_M
Helium1-2B-Grimoire-ORPO
flammen9X-mistral-7B-GGUF-Q4_K_M
llama-3-spicy-abliterated-stella-8B
flammen3X-GGUF-Q4_K_M
flammen4-mistral-7B-GGUF-Q4_K_M
Dumpling-Qwen2.5-32B
Xiaolong-Qwen3-14B
Dumpling-Qwen2.5-14B
nbeerbower/EVA-abliterated-TIES-Qwen2.5-14B finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
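These are all DPO-style preference datasets. For reference, the per-pair DPO loss being optimized can be sketched with sequence-level log-probabilities (toy illustration; β = 0.1 is an assumed default, not a documented setting for this tune):

```python
import math

def dpo_loss(logp_c: float, logp_r: float, ref_logp_c: float, ref_logp_r: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - reference margin)),
    where each margin is log p(chosen) - log p(rejected)."""
    margin = (logp_c - logp_r) - (ref_logp_c - ref_logp_r)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```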
Mistral-Nemo-Gutenberg-Vitus-12B
Mistral-Nemo-Gutenberg-Encore-12B finetuned on nbeerbower/human-writing-dpo with Mistral Instruct.
flammen8-mistral-7B-GGUF-Q4_K_M
Mistral-Nemo-Gutenberg-Encore-12B
UwU-Qwen2.5-32B
llama3.1-gutenberg-8B
Merlina-ORPO-12B
DeepSeek-R1-Qwen-lorablated-32B
Qwen3-Gutenberg-Encore-14B
nbeerbower/Xiaolong-Qwen3-14B finetuned on:

- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/synthetic-fiction-dpo
- nbeerbower/Arkhaios-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Schule-DPO
Gemma2-Gutenberg-Doppel-9B
Eloisa-Qwen3-8B
This is a re-run of Wenyan with more focus on Gutenberg data and only 1 epoch.
Vitus-mistral-nemo-12B
Huihui-Qwen3.5-27B-abliterated-Athanorlite-ORPO
Hemlock-Qwen3-Coder-REAP-25B-A3B
flammen7-mistral-7B-GGUF-Q4_K_M
mistral-nemo-gutenberg3-12B
Mahou-1.5-mistral-nemo-12B-lorablated finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.
llama-3-bophades-v2-8B
QwQ-R1-abliterated-TIES-Qwen2.5-32B
Zhiming-Qwen3-32B-lora
NikuXL-v0.1
Gutensuppe-mistral-nemo-12B
Dumpling-Qwen2.5-VL-7B
CaptainNemo-ChatML-12B
Yanfei-v2-Qwen3-32B
A repair of Yanfei-Qwen-32B by TIES merging huihui-ai/Qwen3-32B-abliterated, Zhiming-Qwen3-32B, and Menghua-Qwen3-32B using mergekit.

This model was made possible with compute support from Nectar AI. Thank you! ❤️

The following YAML configuration was used to produce this model:
llama-3-sauce-v1-8B
Dumpling-Qwen2.5-1.5B
nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
EVA-abliterated-TIES-Qwen2.5-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with Qwen/Qwen2.5-14B as a base. The following models were included in the merge:

- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2

The following YAML configuration was used to produce this model:
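The TIES procedure itself (trim low-magnitude deltas, elect a majority sign per parameter, average only the agreeing values) can be sketched in plain Python; this is a toy over flat weight-delta lists, not mergekit's actual implementation:

```python
def ties_merge(task_vectors, density=0.5):
    """Toy TIES merge over per-model weight-delta lists.
    1) trim: keep only the top `density` fraction of entries by magnitude,
    2) elect: pick the majority sign at each position,
    3) disjoint mean: average only values agreeing with the elected sign."""
    trimmed = []
    for tv in task_vectors:
        k = max(1, int(len(tv) * density))
        cutoff = sorted((abs(v) for v in tv), reverse=True)[k - 1]
        trimmed.append([v if abs(v) >= cutoff else 0.0 for v in tv])
    merged = []
    for pos in zip(*trimmed):
        sign = 1.0 if sum(pos) >= 0 else -1.0
        agreeing = [v for v in pos if v * sign > 0]
        merged.append(sum(agreeing) / len(agreeing) if agreeing else 0.0)
    return merged
```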
Qwen3-8B-abliterated-TIES
Yanfei-Qwen3-32B
> ⚠️ Warning: Bad Cook
>
> This model exhibits degraded/broken reasoning and poor performance across general tasks.

huihui-ai/Qwen3-32B-abliterated finetuned on a mix of datasets.

This model was trained with compute support from Nectar AI, using 4x H100s. Their sponsorship made this release possible.
bruphin-epsilon-GGUF-q4_0
Dumpling-Qwen2.5-7B-1k-r64-2e-5
> 🧪 Part of an Experiment
>
> This model is meant to investigate the effects of changing LoRA rank on the same tune.
> Learning Rate was also increased to 2e-5 from 8e-6.
>
> Find v1 here.

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
llama-3-stinky-v2-8B
llama-3-wissenschaft-8B
Dumpling-Qwen2.5-7B-1k-r16
> 🧪 Part of an Experiment
>
> This model is meant to investigate the effects of changing LoRA rank on the same tune.

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
EVA-abliterated-Qwen2.5-7B
Dumpling-Qwen2.5-1.5B-v2
Dumpling-Qwen2.5-32B-v2
bruphin-epsilon
EVA-abliterated-TIES-Qwen2.5-72B
bophades-mistral-math-DPO-7B
Menghua-Qwen3-32B-lora
An attempt to improve prose and creative writing for Yanfei.
phi3.5-gutenberg-4B
llama-3-stinky-8B
Stella-mistral-nemo-12B
bruphin-kappa
llama-3-sauce-v2-8B
Mahou-1.3-mistral-nemo-12B-chatml
Llama3-Sapientia-70B
Qwen3-4B-abliterated-TIES
SuperBruphin-3x7B
bruphin-zeta
Bophades-BruinsMaid-7B
Suppe-v1-7B
Mistral-Nemo-Prism-12B-v5
> 🧪 Just Another Model Experiment
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

Mahou-1.5-mistral-nemo-12B-lorablated finetuned on Arkhaios-DPO and Purpura-DPO. The goal was to reduce archaic language and purple prose in a completely uncensored model.

For this version, beta was increased to 0.5 and the learning rate was increased to 8e-6 (from the original value in v1).
EVA-abliterated-TIES-Qwen2.5-1.5B
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method with Qwen/Qwen2.5-1.5B as a base. The following models were included in the merge:

- huihui-ai/Qwen2.5-1.5B-Instruct-abliterated
- EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0

The following YAML configuration was used to produce this model:
mistral-nemo-kartoffel-PRUNE3
Azura-Qwen2.5-32B
Shiina-Qwen2.5-32B
Qwen3-1.7B-abliterated-TIES
Qwen3-0.6B-abliterated-TIES
llama-3-Daredevil-Mahou-8B
llama-3-dragon-bophades-8B
llama3.1-airoboros3.2-QDT-8B
llama-3-bible-dpo-8B
Llama3.1-Allades-8B
EVA-Rombos1-Qwen2.5-32B
BigKartoffel-mistral-nemo-20B
This is a merge of pre-trained language models created using mergekit. Inspired by mlabonne/BigQwen2.5-52B-Instruct and mlabonne/Meta-Llama-3-120B-Instruct.

This model was merged using the Passthrough merge method. The following models were included in the merge:

- nbeerbower/mistral-nemo-kartoffel-12B

The following YAML configuration was used to produce this model:
llama-3-stella-8B
Mistral-Nemo-Prism-12B-v6
Gigaberg-Mistral-Large-123B
bruphin-iota
Flammen-Bruphin
flammen16-chinese-dpo-mistral-7B
llama-3-aura-bophades-8B
llama-3-slerp-kraut-dragon-8B
llama-3-dragonmaid-8B-v2
llama-3-spicy-8B
HolyYi-9B
yi-prude-9B
Flammen-Mahou-mistral-7B-v2
Mahou-1.3-mistral-nemo-12B-TEST
Rombos-EVAGutenberg-TIES-Qwen2.5-32B
Mahou-1.5-Qwen2.5-1.5B-E2
DoublePotato-Mistral-Nemo-13B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method. The following models were included in the merge:

- nbeerbower/mistral-nemo-kartoffel-12B

The following YAML configuration was used to produce this model:
Dumpling-Qwen2.5-72B
Llama3-Asobi-70B
HumanLlama-3.2-1B
bruphin-alpha
Flammen-Kunoichi-7B
bruphin-lambda
flammen17-py-DPO-v1-7B
SuperFlammen-4x7B
Mistral-Nemo-Prism-12B-v3
Mistral-Nemo-Prism-12B-v4
Dumpling-Qwen2.5-VL-3B
llama-3-Stheno-Mahou-8B
Qwen2.5-Gutenberg-Doppel-32B
EVA-Gutenberg3-Qwen2.5-32B
Denker-mistral-nemo-12B
HeroBophades-3x7B
llama-3-bophades-v1-8B
KawaiiMahou-llama3-8B
Mistral-Nemo-12B-abliterated-LORA
Maidphin-Kunoichi-7B
llama3-KawaiiMahouSauce-8B
yi-gutenberg-9B
yi-wissenschaft-9B
Mahou-1.3-M1-mistral-7B
HolyNemo-12B
Qwen2.5-32B-abliterated-LORA
Rombos-Qwen2.5-32B-lorablated
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method with rombodawg/Rombos-LLM-V2.5-Qwen-32b + nbeerbower/Qwen2.5-32B-abliterated-LORA as a base.

The following YAML configuration was used to produce this model:
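Task arithmetic, as a method, simply adds scaled weight deltas to a base model. A toy elementwise sketch (flat lists stand in for weight tensors; this is not the mergekit code itself):

```python
def task_arithmetic(base, deltas, weights):
    """theta_merged = theta_base + sum_i w_i * tau_i, elementwise.
    `deltas` are per-model weight differences (tuned - base)."""
    out = list(base)
    for w, tau in zip(weights, deltas):
        out = [o + w * t for o, t in zip(out, tau)]
    return out
```

Here the lorablated merge corresponds to applying a (negated) LoRA delta as the task vector.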
Huihui-Qwen3.5-9B-abliterated-Grimoire-SFT
Qwen3.5-9B-Writing-DPO
Flammen-Trismegistus-7B
strange_3236-7B
Transcendental-Maid-7B
Bruphin-Mika-7B
InfinityFlammenNoodleRP-7b
MaidFlameSoup-7B
bophades-mistral-truthy-DPO-7B
slerp-bophades-truthy-math-mistral-7B
HeroBophades-2x7B
llama-3-slerp-dolphin-sauce-8B
llama-3-dragonmaid-8B
llama-3-stella-truthy-dpo-8B
KawaiiMahou-mistral-7B
Yiet-9B
Flammen-Mahou-mistral-7B
Mahou-mistral-slerp-7B
Mahou-1.3-M2-mistral-7B
Mahou-1.3-mistral-nemo-12B-TEST2
Mahou-1.3-mistral-nemo-12B-r64
EVA-Gutenberg-Rombos-slerp-Qwen2.5-32B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

- nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
- nbeerbower/Rombos-Qwen2.5-32B-lorablated

The following YAML configuration was used to produce this model:
Dumpling-Qwen2.5-7B-1k-r32
Dumpling-Qwen2.5-7B-1k-r256
> 🧪 Part of an Experiment
>
> This model is meant to investigate the effects of changing LoRA rank on the same tune.

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
Dumpling-Qwen2.5-7B-1k-r32-2e-5
> 🧪 Part of an Experiment
>
> This model is meant to investigate the effects of changing LoRA rank on the same tune.
> Learning Rate was also increased to 2e-5 from 8e-6.
>
> Find v1 here.

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
Mahou-1.5-Qwen2.5-1.5B-E4
FIXBODYr128-QwenImageEdit2509
An imperfect LoRA for Qwen-Image-Edit focused on correcting hands and anatomy issues in anime-style illustrations while preserving the original art style. Same settings as FIXBODYr128, but for Qwen-Image-Edit-2509. Triggered by `fix [her/his/their] hands/legs` (the pronoun can be omitted; it works best focusing on one body part at a time).