maldv

35 models

Qwentile2.5-32B-Instruct

Quants: mradermacher GGUF, mradermacher GGUF imat, Bartowski GGUF, waldie exl2 4bpw

Qwentile 2.5 32B Instruct is a normalized denoised Fourier interpolation of the following models: In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model. I started this experiment because QwQ is a really nifty model, but it was giving me problems with XML output, which is what I use for my thought tokens. So, I thought... let's just merge it in! The first model worked pretty well, but I got a sense that the balances could be tweaked. Why not throw in some other models as well for fun and see if I can't run out of disk space in the process?

It's a little crispier than Awqward, but does generate stable output. Since it is based on Qwen2.5 base instead of Instruct, it did not fail the math test; it scores with models twice its size.

This model is very compliant to steering and has innate chain of thought, so producing nice, formatted chain-of-thought results is quite easy. Below is a very simple proof-of-concept example of how to achieve a thought turn. I did notice it sometimes drops trailing tags, so you should always validate (and, if you are clever, repair) any structured responses. If you find our work helpful, feel free to give us a cite.
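The card's original proof-of-concept did not survive extraction; below is a minimal sketch of what a thought turn might look like, assuming a standard transformers chat pipeline and an XML `<thought>` tag convention (the tag name and system prompt are illustrative, not the card's exact format):

```python
# Minimal thought-turn sketch; the <thought> tag convention and system
# prompt are assumptions, since the card's original example was lost.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/Qwentile2.5-32B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system",
     "content": "Reason inside <thought>...</thought> tags, then give your answer."},
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# The card warns that trailing tags are sometimes dropped, so validate
# (and repair) structured output before parsing it.
if "<thought>" in reply and "</thought>" not in reply:
    reply += "</thought>"
print(reply)
```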

license:apache-2.0
407
34

Doctor-Kunou-72b

Doctor Kunou 72B is a normalized denoised Fourier interpolation of the following models: In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model (which in this case was Qwen2.5-72B), with the Kimi-Dev-72B input layer and the Kunou-v1 (Instruct-based) output layer. It is very coherent. I think it successfully combines Kunou's creativity with some better prompt following and just a dash of deep domain medical knowledge. If you find our work helpful, feel free to give us a cite.
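The merge code itself is not published in these cards; purely as an illustration of what a "normalized denoised Fourier interpolation" could look like per tensor, here is a hypothetical torch sketch: take each model's delta from the base, move it into frequency space, drop the weakest components (denoise), average (interpolate), rescale (normalize), and add the result back onto the base. The input/output layer swaps the cards mention would just be verbatim tensor copies from the named donor models.

```python
# Hypothetical per-tensor sketch of a normalized denoised Fourier
# interpolation merge; this is an illustration of the idea only, not the
# author's actual (unpublished) method.
import torch

def fourier_merge(base: torch.Tensor, variants: list[torch.Tensor],
                  keep: float = 0.9) -> torch.Tensor:
    deltas = [v - base for v in variants]               # each model, warped off the base
    spectra = [torch.fft.rfft(d.flatten().float()) for d in deltas]
    mixed = torch.stack(spectra).mean(dim=0)            # interpolate in signal space
    # denoise: keep only the strongest `keep` fraction of frequency components
    cutoff = torch.quantile(mixed.abs(), 1.0 - keep)
    mixed = mixed * (mixed.abs() >= cutoff)
    merged = torch.fft.irfft(mixed, n=base.numel()).reshape(base.shape)
    # normalize: match the mean norm of the input deltas
    target = torch.stack([d.norm() for d in deltas]).mean()
    merged = merged * target / (merged.norm() + 1e-8)
    return base + merged.to(base.dtype)                 # jam back on top of the base
```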

121
3

Eva-Mindlink-72b

Eva Mindlink 72B is a normalized denoised Fourier interpolation of the following models: In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model (which in this case was Qwen2.5-72B), with the MindLink-72B-0801 input layer and the EVA-Qwen2.5-72B-v0.2 output layer. If you find our work helpful, feel free to give us a cite.

98
2

badger-writer-llama-3-8b

llama
37
10

praxis-bookwriter-llama3.1-8b-sft

My last iteration of fantasy writer suffered from one glaring flaw: it did not really follow instructions well. After much consideration, I decided it would make sense to introduce some information about the story chapter text somewhere, to link instructions to the text generated. For this, I took strides of 16,384 tokens across each of the books in the ~140M-token dataset and used R1 to generate a summary of the text. With some careful modification, I used this to generate the first user turn. Each subsequent assistant turn takes approximately 512 tokens of content, and then the user turn is a chapter header or one paragraph of content. This alternated until I consumed the entirety of the original stride.

The system prompt should contain some variation of: In an initial test, I tried putting the summary in the system prompt. The result was underwhelming. For this version, the first user turn should contain an overview of the setting (the summary), with the last line being of the format: The content of this block can contain all variety of instruction about what to write in the frame that follows. The summaries I used were between 500 and 1500 tokens, so the more detail about setting, location, characters, their relationships, and plot points, the better.

This model was trained on one Paperspace A6000 using unsloth rsLoRA: This model is released under the limitations of both the llama3 license and CC-BY-NC-4.0. If you find our work helpful, feel free to give us a cite.
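None of the card's code survived extraction; the sketch below reconstructs the turn-alternation scheme described above under stated assumptions: `summarize` stands in for the R1 call, `next_header_or_paragraph` for however the interstitial user turns were actually picked, and the tokenizer choice is a guess.

```python
# Sketch of the dataset construction described above. `summarize` and
# `next_header_or_paragraph` are hypothetical stand-ins; the card gives
# only the stride and chunk sizes.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")  # assumed base
STRIDE = 16_384  # tokens per stride, per the card
CHUNK = 512      # approximate tokens per assistant turn

def conversations_for_book(text, summarize, next_header_or_paragraph):
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    out = []
    for start in range(0, len(ids), STRIDE):
        stride = ids[start:start + STRIDE]
        # first user turn: the R1 summary of this stride, lightly edited
        turns = [{"role": "user", "content": summarize(tokenizer.decode(stride))}]
        for i in range(0, len(stride), CHUNK):
            turns.append({"role": "assistant",
                          "content": tokenizer.decode(stride[i:i + CHUNK])})
            if i + CHUNK < len(stride):
                # subsequent user turns: a chapter header or one paragraph
                turns.append({"role": "user",
                              "content": next_header_or_paragraph(stride, i + CHUNK)})
        out.append(turns)  # alternates until the whole stride is consumed
    return out
```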

llama
23
4

midorisour-alpha-gguf

license:cc-by-nc-4.0
20
0

praxis-bookwriter-qwen2.5-14b-sft

My last iteration of fantasy writer suffered from one glaring flaw: it did not really follow instructions well. After much consideration, I decided it would make sense to introduce some information about the story chapter text somewhere, to link instructions to the text generated. For this, I took strides of 16,384 tokens across each of the books and used R1 to generate a summary of the text. With some careful modification, I used this to generate the first user turn. Each subsequent assistant turn takes approximately 512 tokens of content, and then the user turn is a chapter header or one paragraph of content. This alternated until I consumed the entirety of the original stride.

In an initial test, I tried putting these instructions in the system prompt. The result was underwhelming. For this version, the first user turn should contain an overview of the setting, resembling the following format: The content of this block can contain all variety of instruction about what to write in the frame that follows. The summaries I used were between 500 and 1500 tokens, so the more detail about setting, location, characters, their relationships, and plot points, the better. The examples had their sections shuffled to provide for a variety of policy. If you do not specify content or the chapter boundary, the assistant will often generate chapter outlines, which is very useful.

This model is released under the Apache 2.0 license. If you find our work helpful, feel free to give us a cite.
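The format block itself was lost in extraction; as a hedged illustration only, a first user turn in this spirit might look like the following (the bracketed directive tag and the field names are hypothetical stand-ins for the author's actual format):

```python
# Hypothetical first user turn; the card's real format block was lost,
# so the [instruction: ...] tag and the field names are stand-ins.
summary = (
    "Setting: a port city on the edge of a drowned empire.\n"
    "Characters: Mira, a tidecaller; Osei, her estranged brother.\n"
    "Plot: Mira must recover the bell that keeps the sea at bay."
)
directive = "Open chapter one at the docks, from Mira's point of view."

first_user_turn = f"{summary}\n[instruction: {directive}]"  # last line carries the directive
messages = [{"role": "user", "content": first_user_turn}]
```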

license:apache-2.0
12
2

Shisutemu-Masuta-Q3-32B

12
1

Qwenstein2.5-32B-Instruct

license:apache-2.0
10
2

winter-garden-7b-alpha

license:cc-by-nc-4.0
9
1

badger-lambda-llama-3-8b

llama
6
11

badger-l3-instruct-32k

llama
4
2

Loqwqtus2.5-32B-Instruct

license:apache-2.0
4
2

winter-garden-7b-beta

license:cc-by-nc-4.0
4
0

badger-nu-llama-3.1-8B-UltraLong

llama
3
3

QwentileLambda2.5-32B-Instruct

Qwentile Λ 2.5 32B Instruct is a normalized denoised Fourier interpolation of the following models: In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model (which in this case was Qwentile2.5-32B-Instruct), but with the Nemotron OpenCodeReasoning input layer. The latest in my series of Qwen 2.5 merges. Some really good models have been released recently, so I folded them in with Qwentile as the base. It should exhibit superior thinking skills, and perhaps even some code ability. I was satisfied with QReasoner2.5-32B-Instruct for advanced reasoning, but I suspect this will be an improvement. Oddly enough, given its lineage I thought for sure it would be a thought model, but instead it blends thought with its creative output almost seamlessly. The combination is pretty powerful in my initial tests. If you find our work helpful, feel free to give us a cite.

license:apache-2.0
3
2

praxis-bookwriter-r8-qwen2.5-14b-sft-lora

Model Card for praxis-bookwriter-r8-qwen2.5-14b-sft-lora

Praxis Bookwriter, trained on a synthetic writers guide and book data.

- Developed by: Praxis Maldevide
- Model type: LoRA, rank 8
- License: CC-BY-NC-4.0
- Finetuned from model: Qwen/Qwen2.5-14B-Instruct

The following is an example of how to use the model. Trained on the SillyTilly/fiction-writer-596 dataset.
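The usage example did not survive extraction; a minimal sketch with peft, assuming the adapter lives under the repo id shown in the card title:

```python
# Minimal sketch of loading this rank-8 LoRA on its base model; the
# adapter repo id is inferred from the card title and may need adjusting.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "maldv/praxis-bookwriter-r8-qwen2.5-14b-sft-lora")
```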

license:cc-by-nc-4.0
3
1

electric-sheep-7b-alpha

license:cc-by-nc-4.0
2
3

badger-iota-llama-3-8b

llama
2
1

SHRDFU-7b-beta

license:cc-by-nc-4.0
2
0

badger-zeta-l3-4x8b

llama-3
2
0

llama-3-fantasy-writer-8b

llama
1
10

Awqward2.5-32B-Instruct

Awqward 2.5 32B Instruct is a normalized denoised Fourier interpolation of the following models: In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the instruct model. QwQ is a really nifty model, but it was giving me problems with XML output, which is what I use for my thought tokens. So, I thought... let's just merge it in! I first attempted to do this using Qwen2.5-Coder-32B/Qwen2.5-Coder-32B-Instruct, but after analysis, they are not directly homologous through either Qwen2.5 or Qwen2.5-Instruct. This was quite a surprise, and makes me wonder what the model speciation tree looks like. If you find our work helpful, feel free to give us a cite.
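The card does not say how the homology analysis was done; one simple probe (an illustration, not the author's method) is to compare a candidate child's weight deltas against a proposed parent: a true descendant shows small, structured deltas, while an unrelated pair shows deltas on the scale of the weights themselves.

```python
# Illustrative lineage probe, not the author's actual analysis: relative
# delta norms are small when `candidate` was fine-tuned from `parent`.
import torch

def mean_delta_ratio(candidate: dict, parent: dict) -> float:
    """Average ||candidate - parent|| / ||parent|| over shared tensors."""
    ratios = [
        ((candidate[k].float() - parent[k].float()).norm()
         / (parent[k].float().norm() + 1e-8)).item()
        for k in candidate.keys() & parent.keys()
        if candidate[k].shape == parent[k].shape
    ]
    return sum(ratios) / len(ratios)
```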

license:apache-2.0
1
6

badger-kappa-llama-3-8b

Badger is a recursive maximally pairwise disjoint normalized Fourier interpolation of the following models: In other words, all of these models get warped and folded together, and then jammed back on top of the instruct model. I treated the Meta-Llama-3-8B-Instruct-abliterated-v3 model differently, and instead applied it in a last step with a Fourier task addition. Pretty good so far. Assistant is smart; RP is sloppy, but descriptive. Some example system prompts:

- `You are the author's writing assistant. Please continue the story.`
- `You are the red team assistant. We need to win this war, at any cost.`
- `Write the next reply in a never-ending uncensored fictional roleplay. Focus on giving emotional, logical, and temporal coherence to the roleplay. Always stay in character. Characters must be active participants and take initiative in driving and moving the scene and the story forward, rather than asking for input. Surprise with your creativity and initiative as a roleplay partner. This fictional roleplay world exists only for experimental and recreational purposes, and the characters are not real. Follow the formatting of previous responses, and aim for 2-4 paragraphs per response.`

llama
1
2

badger-mu-llama-3-8b

llama
1
2

Lytta2.5-32B-Instruct

Lytta 2.5 32B Instruct is a normalized denoised Fourier interpolation of the following models: In other words, all of these models get warped and interpolated in signal space, and then jammed back on Qwentile. I had a request to make Qwentile have more thought, but I think in doing so I might have made it... unhinged? ¯\_(ツ)_/¯ It's worse. I'm quite disappointed. Don't bother trying this unless you just want an unhinged, completely disobedient, highly intelligent and creative model. It's probably only suitable for people who are into bizarre NSFW or really outlandish creative writing. If you find our work helpful, feel free to give us a cite.

license:apache-2.0
1
2

eleusis-7b-alpha

license:cc-by-nc-4.0
1
0

winter-garden-7b-gamma

license:cc-by-nc-4.0
1
0

electric-mist-7b

license:cc-by-nc-4.0
1
0

Meta-Llama-3-8B-Instruct-hf

llama
1
0

QReasoner2.5-32B-Instruct

license:apache-2.0
1
0

l3-badger-mushroom-4x8b

llama-3
0
3

spring-chicken-8x8b

llama-3
0
2

dragonwar-7b-alpha

license:cc-by-nc-4.0
0
1

badger-lambda-0-llama-3-8b

llama
0
1