nbeerbower

170 models

**Huihui-Qwen3.5-4B-abliterated-Athanorlite-ORPO** · 493 downloads · 0 likes

**Huihui-Qwen3.5-9B-abliterated-Grimoire-DPO** · 215 downloads · 0 likes

**Huihui-Qwen3.5-9B-abliterated-Grimoire-KTO** · 202 downloads · 0 likes

**Huihui-Qwen3.5-27B-abliterated-Athanorlite-ORPO-v2** · 193 downloads · 0 likes

**Xiaolong-Qwen3-0.6B** · license:apache-2.0 · 135 downloads · 2 likes

**Mahou-1.5-mistral-nemo-12B-lorablated-GGUF** · license:apache-2.0 · 108 downloads · 1 like

**Schreiber-mistral-nemo-12B** · license:apache-2.0 · 49 downloads · 3 likes
nbeerbower/mistral-nemo-kartoffel-12B finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, nbeerbower/gutenberg-moderne-dpo, nbeerbower/synthetic-fiction-dpo, nbeerbower/Arkhaios-DPO, nbeerbower/Purpura-DPO, and nbeerbower/Schule-DPO.

**Vitus Qwen3 14B** · license:apache-2.0 · 31 downloads · 8 likes
nbeerbower/Qwen3-Gutenberg-Encore-14B finetuned on nbeerbower/human-writing-dpo. Set `enable_thinking` to `False` for best writing results. Scores from OpenAI o3 as judge:

| category | score | rationale |
| --- | --- | --- |
| narrative quality | 9 | pacing is confident, scene-to-scene flow is seamless. strong structure: setup → rising dread → emotional turn → intimate reveal. only deduction is the lack of external resolution; ends just before action concludes. |
| prose style | 9 | lush, lyrical, with high emotional density. great rhythm and sentence balance. occasional near-overwrought line ("no one would forget the sound of love") could be pared back slightly, but overall deeply evocative. |
| thematic depth | 9 | memory, grief, and duty interweave elegantly. the wife's identity as the proto-archivist adds mythic weight. the twist of her "saving something for him" opens an emotional loop that begs continuation. |
| prompt relevance | 10 | crystal reels, subterranean archive, apocalyptic silent storm, heartbeat mention, treasured memory-sound, archivist lore: nailed every core concept with gravitas. |
| speculative imagination | 9 | the storm-as-absence is familiar now but still potent here; the framing of the archive as an emotional crypt adds a layer of metaphysical horror. naming the storm would have been a nice flourish. |
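
A minimal generation sketch for the `enable_thinking=False` recommendation above, assuming the model ships a Qwen3-style chat template; the repo id and prompt are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Vitus-Qwen3-14B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a scene set in a subterranean archive of crystal reels."}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # the card recommends disabling thinking for writing
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```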

**Xiaolong-Qwen3-4B** · license:apache-2.0 · 28 downloads · 1 like

**Llama-3.1-Saoirse-70B** · llama · 27 downloads · 2 likes

**Xiaolong-Qwen3-1.7B** · license:apache-2.0 · 26 downloads · 2 likes

**Maidphin-Kunoichi-7B-GGUF-Q4_K_M** · license:cc-by-nc-4.0 · 24 downloads · 3 likes

**Huihui-Qwen3.5-9B-abliterated-Grimoire-SimPO** · 22 downloads · 0 likes

**Xiaolong-Qwen3-8B** · license:apache-2.0 · 18 downloads · 4 likes
Xiaolong is a small, uncensored, reasoning-focused model finetuned using ORPO and QLoRA on top of Qwen3-8B-abliterated-TIES.

- Method: ORPO
- Epochs: 2
- Learning rate: 5e-6, cosine decay with 5% warmup
- Batch size: 1 x 32 (32 effective)
- Max grad norm: 0.3
- LoRA rank: 64
- Hardware: 1x NVIDIA RTX A6000

Trained on ~9,100 samples, 3,000 of which used chain-of-thought reasoning: nbeerbower/GreatFirewall-DPO, nbeerbower/Schule-DPO, nbeerbower/Purpura-DPO, nbeerbower/Arkhaios-DPO, jondurbin/truthy-dpo-v0.1, antiven0m/physical-reasoning-dpo, flammenai/Date-DPO-NoAsterisks, flammenai/Prude-Phi3-DPO, Atsunori/HelpSteer2-DPO (1,000 samples), jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, nbeerbower/gutenberg-moderne-dpo, GeneralReasoning/GeneralThought-430K (1,000 samples), nvidia/OpenMathReasoning (1,000 samples), and nvidia/OpenCodeReasoning (1,000 samples).
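
A minimal TRL sketch of the recipe above, using the hyperparameters the card states; the output path and single-dataset loading are illustrative stand-ins for the full mix, and QLoRA's 4-bit quantization setup is omitted for brevity:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# One of the listed preference sets, standing in for the full ~9,100-sample mix.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

base = "nbeerbower/Qwen3-8B-abliterated-TIES"  # the stated base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

peft_config = LoraConfig(r=64, lora_alpha=64, task_type="CAUSAL_LM")  # rank 64 per the card

args = ORPOConfig(
    output_dir="xiaolong-qwen3-8b-orpo",  # hypothetical path
    num_train_epochs=2,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,       # 1 x 32 = 32 effective
    max_grad_norm=0.3,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```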

**llama-3-wissenschaft-8B-v2** · llama · 16 downloads · 1 like

**llama-3-bophades-v3-8B** · llama · 15 downloads · 3 likes

**Llama3-Kartoria-70B-TEST** · llama · 15 downloads · 0 likes

**Hemlock-Qwen2.5-Coder-32B** · 14 downloads · 1 like

**Hemlock-Qwen3-Coder-REAP-25B-A3B-LORA** · license:apache-2.0 · 14 downloads · 0 likes

**flammen3-GGUF-Q4_K_M** · license:apache-2.0 · 13 downloads · 0 likes

**Mahou-Gutenberg-Nemo-12B** · 11 downloads · 1 like

**Qwen3-14B-abliterated-TIES** · license:apache-2.0 · 11 downloads · 1 like

**Wenyan-Qwen3-8B** · license:apache-2.0 · 11 downloads · 1 like
An attempt to build a Xiaolong-like tune with more Gutenberg data on top of lemon07r/Qwen3-R1-SLERP-Q3T-8B. I haven't done much testing, but the model will sometimes skip thinking. The second epoch may have overcooked it.

**Luna-A0-12B** · 11 downloads · 0 likes

**flammen-GGUF-Q4_K_M** · license:apache-2.0 · 11 downloads · 0 likes

**Helium1-2B-Grimoire-ORPO** · llama · 10 downloads · 0 likes

**flammen9X-mistral-7B-GGUF-Q4_K_M** · license:apache-2.0 · 10 downloads · 0 likes

**llama-3-spicy-abliterated-stella-8B** · llama · 9 downloads · 4 likes

**flammen3X-GGUF-Q4_K_M** · license:cc-by-nc-4.0 · 9 downloads · 0 likes

**flammen4-mistral-7B-GGUF-Q4_K_M** · license:apache-2.0 · 9 downloads · 0 likes

**Dumpling-Qwen2.5-32B** · license:apache-2.0 · 8 downloads · 11 likes

**Xiaolong-Qwen3-14B** · license:apache-2.0 · 8 downloads · 9 likes

**Dumpling-Qwen2.5-14B** · license:apache-2.0 · 8 downloads · 4 likes
nbeerbower/EVA-abliterated-TIES-Qwen2.5-14B finetuned on nbeerbower/GreatFirewall-DPO, nbeerbower/Schule-DPO, nbeerbower/Purpura-DPO, nbeerbower/Arkhaios-DPO, jondurbin/truthy-dpo-v0.1, antiven0m/physical-reasoning-dpo, flammenai/Date-DPO-NoAsterisks, flammenai/Prude-Phi3-DPO, Atsunori/HelpSteer2-DPO, jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.

**Mistral-Nemo-Gutenberg-Vitus-12B** · license:apache-2.0 · 8 downloads · 4 likes
Mistral-Nemo-Gutenberg-Encore-12B finetuned on nbeerbower/human-writing-dpo with Mistral Instruct.

**flammen8-mistral-7B-GGUF-Q4_K_M** · license:apache-2.0 · 8 downloads · 0 likes

**Mistral-Nemo-Gutenberg-Encore-12B** · license:apache-2.0 · 7 downloads · 11 likes

**UwU-Qwen2.5-32B** · 7 downloads · 6 likes

**llama3.1-gutenberg-8B** · llama · 7 downloads · 5 likes

**Merlina-ORPO-12B** · license:apache-2.0 · 7 downloads · 0 likes

**DeepSeek-R1-Qwen-lorablated-32B** · license:apache-2.0 · 6 downloads · 7 likes

**Qwen3-Gutenberg-Encore-14B** · license:apache-2.0 · 6 downloads · 6 likes
nbeerbower/Xiaolong-Qwen3-14B finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, nbeerbower/gutenberg-moderne-dpo, nbeerbower/synthetic-fiction-dpo, nbeerbower/Arkhaios-DPO, nbeerbower/Purpura-DPO, and nbeerbower/Schule-DPO.

**Gemma2-Gutenberg-Doppel-9B** · 6 downloads · 5 likes

**Eloisa-Qwen3-8B** · license:apache-2.0 · 6 downloads · 5 likes
A re-run of Wenyan with more focus on Gutenberg data and only one epoch.

**Vitus-mistral-nemo-12B** · license:apache-2.0 · 6 downloads · 1 like

**Huihui-Qwen3.5-27B-abliterated-Athanorlite-ORPO** · 6 downloads · 0 likes

**Hemlock-Qwen3-Coder-REAP-25B-A3B** · 6 downloads · 0 likes

**flammen7-mistral-7B-GGUF-Q4_K_M** · license:apache-2.0 · 6 downloads · 0 likes

**mistral-nemo-gutenberg3-12B** · license:apache-2.0 · 5 downloads · 6 likes
Mahou-1.5-mistral-nemo-12B-lorablated finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.

**llama-3-bophades-v2-8B** · llama · 5 downloads · 3 likes

**QwQ-R1-abliterated-TIES-Qwen2.5-32B** · 5 downloads · 2 likes

**Zhiming-Qwen3-32B-lora** · license:apache-2.0 · 5 downloads · 0 likes

**NikuXL-v0.1** · 5 downloads · 0 likes

**Gutensuppe-mistral-nemo-12B** · 4 downloads · 6 likes

**Dumpling-Qwen2.5-VL-7B** · 4 downloads · 4 likes

**CaptainNemo-ChatML-12B** · license:apache-2.0 · 4 downloads · 2 likes

**Yanfei-v2-Qwen3-32B** · license:apache-2.0 · 4 downloads · 2 likes
A repair of Yanfei-Qwen3-32B by TIES-merging huihui-ai/Qwen3-32B-abliterated, Zhiming-Qwen3-32B, and Menghua-Qwen3-32B using mergekit. This model was made possible with compute support from Nectar AI. Thank you! ❤️

**llama-3-sauce-v1-8B** · llama · 4 downloads · 1 like

**Dumpling-Qwen2.5-1.5B** · license:apache-2.0 · 4 downloads · 1 like
nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B finetuned on nbeerbower/GreatFirewall-DPO, nbeerbower/Schule-DPO, nbeerbower/Purpura-DPO, nbeerbower/Arkhaios-DPO, jondurbin/truthy-dpo-v0.1, antiven0m/physical-reasoning-dpo, flammenai/Date-DPO-NoAsterisks, flammenai/Prude-Phi3-DPO, Atsunori/HelpSteer2-DPO (1,000 samples), jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.

**EVA-abliterated-TIES-Qwen2.5-14B** · license:apache-2.0 · 4 downloads · 1 like
A merge of pre-trained language models created using mergekit, merged with the TIES method using Qwen/Qwen2.5-14B as the base. Models included in the merge: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 and EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2.
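
The card's actual mergekit YAML is not included in this extract. A representative TIES configuration for the models named above might look like the following; the weight and density values are illustrative assumptions:

```yaml
# Hypothetical mergekit TIES config for EVA-abliterated-TIES-Qwen2.5-14B;
# the real weights/densities were not preserved in this extract.
models:
  - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
    parameters:
      weight: 1
      density: 1
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-14B
parameters:
  normalize: true
dtype: bfloat16
```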

**Qwen3-8B-abliterated-TIES** · license:apache-2.0 · 4 downloads · 1 like

**Yanfei-Qwen3-32B** · license:apache-2.0 · 4 downloads · 1 like
> ⚠️ Warning: Bad Cook
>
> This model exhibits degraded/broken reasoning and poor performance across general tasks.

huihui-ai/Qwen3-32B-abliterated finetuned on a mix of datasets. Trained with compute support from Nectar AI, using 4x H100s; their sponsorship made this release possible.

**bruphin-epsilon-GGUF-q4_0** · 4 downloads · 0 likes

**Dumpling-Qwen2.5-7B-1k-r64-2e-5** · license:apache-2.0 · 4 downloads · 0 likes
> 🧪 Part of an Experiment
>
> This model investigates the effect of changing LoRA rank on the same tune. The learning rate was also increased to 2e-5 from 8e-6. Find v1 here.

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on nbeerbower/GreatFirewall-DPO, nbeerbower/Schule-DPO, nbeerbower/Purpura-DPO, nbeerbower/Arkhaios-DPO, jondurbin/truthy-dpo-v0.1, antiven0m/physical-reasoning-dpo, flammenai/Date-DPO-NoAsterisks, flammenai/Prude-Phi3-DPO, Atsunori/HelpSteer2-DPO (1,000 samples), jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.
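
The rank-sweep entries in this listing (r16, r32, r256, and the 2e-5 variants) vary only the LoRA rank and, in the 2e-5 runs, the learning rate. A minimal peft sketch of the varied knobs, with the alpha choice and run names as illustrative assumptions:

```python
from peft import LoraConfig

# Hypothetical sweep matching the Dumpling-Qwen2.5-7B-1k-r* experiments:
# same tune, different LoRA rank, plus a 2e-5 learning-rate variant.
for rank in (16, 32, 64, 256):
    for learning_rate in (8e-6, 2e-5):
        peft_config = LoraConfig(
            r=rank,
            lora_alpha=rank,      # alpha choice is an assumption
            lora_dropout=0.0,
            task_type="CAUSAL_LM",
        )
        run_name = f"Dumpling-Qwen2.5-7B-1k-r{rank}-{learning_rate:.0e}"
        # ...pass peft_config and learning_rate to the trainer,
        # as in the ORPO sketch earlier in this listing.
        print(run_name, peft_config.r, learning_rate)
```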

**llama-3-stinky-v2-8B** · llama · 3 downloads · 5 likes

**llama-3-wissenschaft-8B** · llama · 3 downloads · 4 likes

**Dumpling-Qwen2.5-7B-1k-r16** · license:apache-2.0 · 3 downloads · 2 likes
> 🧪 Part of an Experiment
>
> This model investigates the effect of changing LoRA rank on the same tune.

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on the same dataset mix as Dumpling-Qwen2.5-7B-1k-r64-2e-5 above.

**EVA-abliterated-Qwen2.5-7B** · 3 downloads · 2 likes

**Dumpling-Qwen2.5-1.5B-v2** · license:apache-2.0 · 3 downloads · 2 likes

**Dumpling-Qwen2.5-32B-v2** · license:apache-2.0 · 3 downloads · 2 likes

**bruphin-epsilon** · license:apache-2.0 · 3 downloads · 1 like

**EVA-abliterated-TIES-Qwen2.5-72B** · license:apache-2.0 · 3 downloads · 1 like

**bophades-mistral-math-DPO-7B** · license:apache-2.0 · 3 downloads · 0 likes

**Menghua-Qwen3-32B-lora** · license:apache-2.0 · 3 downloads · 0 likes
An attempt to improve prose and creative writing for Yanfei.

**phi3.5-gutenberg-4B** · license:mit · 2 downloads · 4 likes

**llama-3-stinky-8B** · llama · 2 downloads · 3 likes

**Stella-mistral-nemo-12B** · 2 downloads · 2 likes

**bruphin-kappa** · license:apache-2.0 · 2 downloads · 1 like

**llama-3-sauce-v2-8B** · llama · 2 downloads · 1 like

**Mahou-1.3-mistral-nemo-12B-chatml** · 2 downloads · 1 like

**Llama3-Sapientia-70B** · llama · 2 downloads · 1 like

**Qwen3-4B-abliterated-TIES** · license:apache-2.0 · 2 downloads · 1 like

**SuperBruphin-3x7B** · license:apache-2.0 · 2 downloads · 0 likes

**bruphin-zeta** · license:apache-2.0 · 2 downloads · 0 likes

**Bophades-BruinsMaid-7B** · license:apache-2.0 · 2 downloads · 0 likes

**Suppe-v1-7B** · license:apache-2.0 · 2 downloads · 0 likes

**Mistral-Nemo-Prism-12B-v5** · license:apache-2.0 · 2 downloads · 0 likes
> 🧪 Just Another Model Experiment
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release, just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

Mahou-1.5-mistral-nemo-12B-lorablated finetuned on Arkhaios-DPO and Purpura-DPO. The goal was to reduce archaic language and purple prose in a completely uncensored model. For this version, beta was increased to 0.5 and the learning rate was increased to 8e-6 (the original value in v1).
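
A hedged TRL sketch of the two knobs the Prism v5 card mentions; everything else here is an illustrative default:

```python
from trl import DPOConfig

# Only beta and learning_rate come from the card; the rest is illustrative.
args = DPOConfig(
    output_dir="mistral-nemo-prism-12b-v5",  # hypothetical path
    beta=0.5,                 # increased for this version, per the card
    learning_rate=8e-6,       # increased for this version, per the card
    num_train_epochs=1,       # assumption
    per_device_train_batch_size=1,
)
```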

**EVA-abliterated-TIES-Qwen2.5-1.5B** · license:apache-2.0 · 2 downloads · 0 likes
A merge of pre-trained language models created using mergekit, merged with the TIES method using Qwen/Qwen2.5-1.5B as the base. Models included in the merge: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated and EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0.

**mistral-nemo-kartoffel-PRUNE3** · 2 downloads · 0 likes

**Azura-Qwen2.5-32B** · license:apache-2.0 · 2 downloads · 0 likes

**Shiina-Qwen2.5-32B** · license:apache-2.0 · 2 downloads · 0 likes

**Qwen3-1.7B-abliterated-TIES** · license:apache-2.0 · 2 downloads · 0 likes

**Qwen3-0.6B-abliterated-TIES** · license:apache-2.0 · 2 downloads · 0 likes

**llama-3-Daredevil-Mahou-8B** · llama · 1 download · 6 likes

**llama-3-dragon-bophades-8B** · llama · 1 download · 4 likes

**llama3.1-airoboros3.2-QDT-8B** · llama · 1 download · 4 likes

**llama-3-bible-dpo-8B** · llama · 1 download · 3 likes

**Llama3.1-Allades-8B** · llama · 1 download · 3 likes

**EVA-Rombos1-Qwen2.5-32B** · license:apache-2.0 · 1 download · 3 likes

**BigKartoffel-mistral-nemo-20B** · license:apache-2.0 · 1 download · 3 likes
A merge of pre-trained language models created using mergekit, inspired by mlabonne/BigQwen2.5-52B-Instruct and mlabonne/Meta-Llama-3-120B-Instruct. Built with the Passthrough merge method from nbeerbower/mistral-nemo-kartoffel-12B.
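
The card's actual YAML is not in this extract. A representative mergekit passthrough (layer-stacking) sketch in the style of the mlabonne upscales it cites; the layer ranges are pure assumptions:

```yaml
# Hypothetical passthrough config upscaling a 12B model by repeating layers;
# the real layer ranges for BigKartoffel were not preserved in this extract.
slices:
  - sources:
      - model: nbeerbower/mistral-nemo-kartoffel-12B
        layer_range: [0, 24]
  - sources:
      - model: nbeerbower/mistral-nemo-kartoffel-12B
        layer_range: [16, 40]
merge_method: passthrough
dtype: bfloat16
```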

**llama-3-stella-8B** · llama · 1 download · 2 likes

**Mistral-Nemo-Prism-12B-v6** · license:apache-2.0 · 1 download · 2 likes

**Gigaberg-Mistral-Large-123B** · 1 download · 2 likes

**bruphin-iota** · license:apache-2.0 · 1 download · 1 like

**Flammen-Bruphin** · license:apache-2.0 · 1 download · 1 like

**flammen16-chinese-dpo-mistral-7B** · license:apache-2.0 · 1 download · 1 like

**llama-3-aura-bophades-8B** · llama · 1 download · 1 like

**llama-3-slerp-kraut-dragon-8B** · llama · 1 download · 1 like

**llama-3-dragonmaid-8B-v2** · llama · 1 download · 1 like

**llama-3-spicy-8B** · llama · 1 download · 1 like

**HolyYi-9B** · llama · 1 download · 1 like

**yi-prude-9B** · llama · 1 download · 1 like

**Flammen-Mahou-mistral-7B-v2** · 1 download · 1 like

**Mahou-1.3-mistral-nemo-12B-TEST** · 1 download · 1 like

**Rombos-EVAGutenberg-TIES-Qwen2.5-32B** · license:apache-2.0 · 1 download · 1 like

**Mahou-1.5-Qwen2.5-1.5B-E2** · license:apache-2.0 · 1 download · 1 like

**DoublePotato-Mistral-Nemo-13B** · 1 download · 1 like
A merge of pre-trained language models created using mergekit, built with the Passthrough merge method from nbeerbower/mistral-nemo-kartoffel-12B.

**Dumpling-Qwen2.5-72B** · license:apache-2.0 · 1 download · 1 like

**Llama3-Asobi-70B** · llama · 1 download · 1 like

**HumanLlama-3.2-1B** · llama · 1 download · 1 like

**bruphin-alpha** · license:apache-2.0 · 1 download · 0 likes

**Flammen-Kunoichi-7B** · license:cc-by-nc-4.0 · 1 download · 0 likes

**bruphin-lambda** · license:apache-2.0 · 1 download · 0 likes

**flammen17-py-DPO-v1-7B** · license:apache-2.0 · 1 download · 0 likes

**SuperFlammen-4x7B** · license:apache-2.0 · 1 download · 0 likes

**Mistral-Nemo-Prism-12B-v3** · license:apache-2.0 · 1 download · 0 likes

**Mistral-Nemo-Prism-12B-v4** · license:apache-2.0 · 1 download · 0 likes

**Dumpling-Qwen2.5-VL-3B** · 1 download · 0 likes

**llama-3-Stheno-Mahou-8B** · llama · 0 downloads · 15 likes

**Qwen2.5-Gutenberg-Doppel-32B** · license:apache-2.0 · 0 downloads · 6 likes

**EVA-Gutenberg3-Qwen2.5-32B** · license:apache-2.0 · 0 downloads · 6 likes

**Denker-mistral-nemo-12B** · license:apache-2.0 · 0 downloads · 4 likes

**HeroBophades-3x7B** · license:apache-2.0 · 0 downloads · 3 likes

**llama-3-bophades-v1-8B** · llama · 0 downloads · 3 likes

**KawaiiMahou-llama3-8B** · llama · 0 downloads · 3 likes

**Mistral-Nemo-12B-abliterated-LORA** · license:apache-2.0 · 0 downloads · 3 likes

**Maidphin-Kunoichi-7B** · license:cc-by-nc-4.0 · 0 downloads · 2 likes

**llama3-KawaiiMahouSauce-8B** · llama · 0 downloads · 2 likes

**yi-gutenberg-9B** · llama · 0 downloads · 2 likes

**yi-wissenschaft-9B** · llama · 0 downloads · 2 likes

**Mahou-1.3-M1-mistral-7B** · license:apache-2.0 · 0 downloads · 2 likes

**HolyNemo-12B** · license:apache-2.0 · 0 downloads · 2 likes

**Qwen2.5-32B-abliterated-LORA** · license:apache-2.0 · 0 downloads · 2 likes

**Rombos-Qwen2.5-32B-lorablated** · license:apache-2.0 · 0 downloads · 2 likes
A merge of pre-trained language models created using mergekit, merged with the task arithmetic method using rombodawg/Rombos-LLM-V2.5-Qwen-32b + nbeerbower/Qwen2.5-32B-abliterated-LORA as the base.
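
The card's YAML is not included in this extract. A representative mergekit task-arithmetic config following the usual "lorablated" pattern, where the `+` syntax applies a LoRA to a base model on load; the weight is an illustrative assumption:

```yaml
# Hypothetical "lorablated" task-arithmetic config; the real parameter
# values were not preserved in this extract.
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b+nbeerbower/Qwen2.5-32B-abliterated-LORA
models:
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-32b+nbeerbower/Qwen2.5-32B-abliterated-LORA
    parameters:
      weight: 1.0
merge_method: task_arithmetic
dtype: bfloat16
```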

**Huihui-Qwen3.5-9B-abliterated-Grimoire-SFT** · 0 downloads · 1 like

**Qwen3.5-9B-Writing-DPO** · 0 downloads · 1 like

**Flammen-Trismegistus-7B** · license:apache-2.0 · 0 downloads · 1 like

**strange_3236-7B** · license:apache-2.0 · 0 downloads · 1 like

**Transcendental-Maid-7B** · license:apache-2.0 · 0 downloads · 1 like

**Bruphin-Mika-7B** · license:apache-2.0 · 0 downloads · 1 like

**InfinityFlammenNoodleRP-7b** · license:apache-2.0 · 0 downloads · 1 like

**MaidFlameSoup-7B** · license:apache-2.0 · 0 downloads · 1 like

**bophades-mistral-truthy-DPO-7B** · license:apache-2.0 · 0 downloads · 1 like

**slerp-bophades-truthy-math-mistral-7B** · license:apache-2.0 · 0 downloads · 1 like

**HeroBophades-2x7B** · license:apache-2.0 · 0 downloads · 1 like

**llama-3-slerp-dolphin-sauce-8B** · llama · 0 downloads · 1 like

**llama-3-dragonmaid-8B** · llama · 0 downloads · 1 like

**llama-3-stella-truthy-dpo-8B** · llama · 0 downloads · 1 like

**KawaiiMahou-mistral-7B** · license:apache-2.0 · 0 downloads · 1 like

**Yiet-9B** · llama · 0 downloads · 1 like

**Flammen-Mahou-mistral-7B** · license:apache-2.0 · 0 downloads · 1 like

**Mahou-mistral-slerp-7B** · 0 downloads · 1 like

**Mahou-1.3-M2-mistral-7B** · 0 downloads · 1 like

**Mahou-1.3-mistral-nemo-12B-TEST2** · 0 downloads · 1 like

**Mahou-1.3-mistral-nemo-12B-r64** · 0 downloads · 1 like

**EVA-Gutenberg-Rombos-slerp-Qwen2.5-32B** · license:apache-2.0 · 0 downloads · 1 like
A merge of pre-trained language models created using mergekit, merged with the SLERP method. Models included in the merge: nbeerbower/EVA-Gutenberg3-Qwen2.5-32B and nbeerbower/Rombos-Qwen2.5-32B-lorablated.
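
Again, the card's YAML is not in this extract. A representative mergekit SLERP config for the two models named above, with the interpolation factor `t` as an illustrative assumption:

```yaml
# Hypothetical SLERP config; t: 0.5 is an even blend, not the card's real value.
models:
  - model: nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
  - model: nbeerbower/Rombos-Qwen2.5-32B-lorablated
merge_method: slerp
base_model: nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
parameters:
  t: 0.5
dtype: bfloat16
```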

**Dumpling-Qwen2.5-7B-1k-r32** · license:apache-2.0 · 0 downloads · 1 like

**Dumpling-Qwen2.5-7B-1k-r256** · license:apache-2.0 · 0 downloads · 1 like
> 🧪 Part of an Experiment
>
> This model investigates the effect of changing LoRA rank on the same tune.

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on the same dataset mix as Dumpling-Qwen2.5-7B-1k-r64-2e-5 above.

**Dumpling-Qwen2.5-7B-1k-r32-2e-5** · license:apache-2.0 · 0 downloads · 1 like
> 🧪 Part of an Experiment
>
> This model investigates the effect of changing LoRA rank on the same tune. The learning rate was also increased to 2e-5 from 8e-6. Find v1 here.

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on the same dataset mix as Dumpling-Qwen2.5-7B-1k-r64-2e-5 above.

**Mahou-1.5-Qwen2.5-1.5B-E4** · license:apache-2.0 · 0 downloads · 1 like

**FIXBODYr128-QwenImageEdit2509** · license:apache-2.0 · 0 downloads · 1 like
An imperfect LoRA for Qwen-Image-Edit focused on correcting hands and anatomy issues in anime-style illustrations while preserving the original art style. Same settings as FIXBODYr128, but for Qwen-Image-Edit-2509. Triggered by `fix [her/his/their] hands/legs` (the pronoun can be omitted, and it works best focusing on one body part at a time).
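
A minimal usage sketch for the trigger phrase above, assuming a recent diffusers build with Qwen-Image-Edit-2509 support via `QwenImageEditPlusPipeline`; the repo id, file names, and step count are assumptions:

```python
import torch
from diffusers import QwenImageEditPlusPipeline  # class name assumed per 2509 support; verify your diffusers version
from diffusers.utils import load_image

pipe = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("nbeerbower/FIXBODYr128-QwenImageEdit2509")  # assumed repo id

image = load_image("character.png")  # hypothetical anime-style input
result = pipe(
    image=image,
    prompt="fix her hands",  # the card's trigger phrase; one body part at a time
    num_inference_steps=40,
).images[0]
result.save("character_fixed.png")
```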