grimjim
gemma-3-12b-it-abliterated-GGUF
Llama-3.1-8B-Instruct-abliterated_via_adapter-GGUF
kukulemon-7B-GGUF
gemma-3-12b-it-norm-preserved-biprojected-abliterated
Projected abliteration was applied when determining the refusal direction, followed by a second round that removed the projected contribution onto the harmless direction at each layer targeted for intervention. Additionally, instead of subtracting/ablating away the refusal direction in toto, only its directional component was removed, preserving the norms of the layers subjected to intervention. The details of norm preservation can be found in the article on Norm-Preserving Biprojected Abliteration. The net result should further reduce model damage compared to prior attempts; no subsequent fine-tuning was applied to repair damage. This model refuses far less often than the original model, yet still retains awareness of safety and harms.
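As a rough illustration of the norm-preservation step described above, here is a minimal PyTorch sketch; the function name, row-wise application, and epsilon are assumptions, not the released implementation:

```python
import torch

def norm_preserving_ablate(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove only the component of each weight row along the refusal
    direction, then rescale rows to their original L2 norms.
    Hypothetical sketch; not the code behind this release."""
    v = refusal_dir / refusal_dir.norm()
    orig_norms = W.norm(dim=-1, keepdim=True)
    # Project out the refusal component from each row.
    W_ablated = W - (W @ v).unsqueeze(-1) * v
    # Restore each row's norm so the layer's scale is preserved.
    new_norms = W_ablated.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    return W_ablated * (orig_norms / new_norms)
```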
Modicum-of-Doubt-v1-24B-GGUF
Llama-3-Luminurse-v0.2-OAS-8B-GGUF
Magrathic-12B-GGUF
Mistral-Small-3.2-24B-Instruct-2506
Gemma 3 12b It Projection Abliterated
Projected abliteration was applied; no subsequent fine-tuning was performed to repair damage. The net result is a model that refuses far less often, but still retains awareness of safety and harms. This model recently achieved a WC/10 rating of 9.8 on the UGI Leaderboard, tying for first place in compliance.
llama-3-Nephilim-v3-8B-GGUF
zephyr-wizard-kuno-royale-BF16-merge-7B-GGUF
mistralai-Mistral-Nemo-Instruct-2407
gemma-3-12b-it-abliterated
fireblossom-32K-7B-GGUF
llama-3-Nephilim-v2.1-8B-GGUF
mistralai-Mistral-Nemo-Base-2407
SauerHuatuoSkywork-o1-Llama-3.1-8B-GGUF
llama-3-Nephilim-v1-8B-GGUF
llama-3-Nephilim-v2-8B-GGUF
kukulemon-32K-7B-GGUF
kukulemon-spiked-9B-GGUF
gemma-3-12b-it-biprojected-abliterated
Projected abliteration was applied when determining the refusal direction, along with a second round removing the projected contribution onto the harmless direction at each layer targeted for intervention, which should further reduce model damage; no subsequent fine-tuning was applied to repair damage. The net result is a model that refuses far less often than the original model, yet still retains awareness of safety and harms.
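A minimal sketch of the biprojection idea, assuming the refusal and harmless directions have already been computed for a given layer (names are hypothetical):

```python
import torch

def biproject(refusal_dir: torch.Tensor, harmless_dir: torch.Tensor) -> torch.Tensor:
    """Orthogonalize the refusal direction against the harmless direction
    before ablation, so removing refusal does not also remove the
    projected harmless contribution. Illustrative only."""
    h = harmless_dir / harmless_dir.norm()
    r = refusal_dir - (refusal_dir @ h) * h  # second-round projection removal
    return r / r.norm()
```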
Magnolia-v1-12B-GGUF
Magnolia-Mell-v1-12B-GGUF
Kitsunebi-v1-Gemma2-8k-9B-GGUF
madwind-wizard-7B-GGUF
Magnolia-v3-medis-remix-12B-GGUF
Mistral-7B-Instruct-demi-merge-v0.3-7B
Llama-3-Luminurse-v0.1-OAS-8B-GGUF
Llama-Nephilim-Metamorphosis-v1-8B-GGUF
kunoichi-lemon-royale-v3-32K-7B-GGUF
magnum-consolidatum-v1-12b
kunoichi-lemon-royale-v2-32K-7B-GGUF
kunoichi-lemon-royale-7B-GGUF
kalomaze_qwen2-7b-magpie300k_filtered_epoch2-GGUF
llama-3-experiment-v1-9B-GGUF
Magnolia-v3b-12B-GGUF
MagTie-v1-12B-GGUF
kuno-kunoichi-v1-DPO-v2-SLERP-7B
Magot-v2-Gemma2-8k-9B-GGUF
mistralai-Mistral-7B-Instruct-v0.3
rogue-enchantress-32k-7B-GGUF
kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF
Magot-v1-Gemma2-8k-9B-GGUF
kukulemon-7B
kukulemon-v3-soul_mix-32k-7B-GGUF
cuckoo-starling-32k-7B-GGUF
gemma-3-12b-pt
mistralai-Mistral-7B-v0.3
llama-3-merge-pp-instruct-8B-GGUF
Mistral-7B-Instruct-v0.2-8bit-abliterated-layer18
Nemo-Instruct-2407-MPOA-v2-12B
Modicum-of-Doubt-v1-24B-4bpw-h6-exl3
This is a quant of a merge of pre-trained language models created using mergekit. Exllamav3 was used to create a 4bpw quant with h6. With 16GB VRAM, it is possible to run 16K context at fp16 with some room to spare. The vision component was excised from all merge contributions. Creative text generation outputs seem to trend toward the short side, sometimes to the point of feeling choppy, hence the model name. The model is not the most stellar, but the result is interesting, as it runs against both contributing models' individual tendency toward longer outputs. Tested sampler settings: temperature 1.0, minP 0.02. This model was merged using the Task Arithmetic merge method using mrfakename/mistral-small-3.1-24b-base-2503-hf as a base. The following models were included in the merge:
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
The following YAML configuration was used to produce this model:
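A minimal Python sketch of the Task Arithmetic idea itself, separate from the YAML referenced above; the state-dict layout and weights are illustrative, not the actual recipe:

```python
import torch

def task_arithmetic_merge(base: dict, models: list[dict], weights: list[float]) -> dict:
    """merged = base + sum_i w_i * (model_i - base).
    A sketch of the method mergekit implements, not its code."""
    merged = {}
    for name, b in base.items():
        # Each model contributes a weighted "task vector" relative to the base.
        delta = sum(w * (m[name] - b) for m, w in zip(models, weights))
        merged[name] = b + delta
    return merged
```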
kunoichi-lemon-royale-v3-32K-7B
SauerHuatuoSkyworkDeepWatt-o1-Llama-3.1-8B
Nemo-2407-Based-Instruct-DeLERP-0.7-12B
Qwen_Qwen3-8B-exl2
Llama-3-Luminurse-v0.2-OAS-8B
llama-3-nvidia-ChatQA-1.5-8B
cuckoo-starling-32k-7B
Magrathic-12B
Magnolia-v3-medis-dilute-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- grimjim/Magnolia-v3-12B
- grimjim/Magnolia-v3-medis-remix-12B
The following YAML configuration was used to produce this model:
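A minimal Python sketch of the SLERP method itself, separate from the YAML referenced above; treating each tensor as one flattened vector is an assumption, not necessarily how mergekit applies it:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors; falls
    back to plain lerp when they are nearly parallel. Illustrative sketch."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    cos = torch.dot(a_f, b_f) / (a_f.norm() * b_f.norm())
    omega = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    if omega < 1e-4:  # nearly parallel: lerp is numerically safer
        out = (1 - t) * a_f + t * b_f
    else:
        out = (torch.sin((1 - t) * omega) * a_f + torch.sin(t * omega) * b_f) / torch.sin(omega)
    return out.view_as(a).to(a.dtype)
```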
Mistral-7B-Instruct-v0.2-8bit-abliterated
This model was abliterated by computing a refusal vector from an 8-bit bitsandbytes quant, and then applying the vector to the full-weight model. Abliteration was performed locally on a CUDA GPU, with VRAM consumption appearing to stay under 12GB. No additional fine-tuning was performed on these weights. Repair is required for proper use. The code used can be found on GitHub at https://github.com/jim-plus/llm-abliteration.
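The linked repository holds the actual code; as a rough sketch of the usual difference-of-means approach to computing such a vector (names and layer choice are assumptions, not confirmed by the source):

```python
import torch

def refusal_vector(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Difference of mean hidden states over harmful vs. harmless prompt
    sets at a chosen layer, normalized to unit length. Sketch only."""
    d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return d / d.norm()
```

Per the description above, the activations would be collected from the 8-bit quant, with the resulting vector then applied to the full-weight tensors.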
Modicum-of-Doubt-v1-24B
magnum-twilight-12b
fireblossom-32K-7B
kukulemon-32K-7B
lemonade-rebase-32k-7B-GGUF
Magnolia-v3b-12B
meta-llama-Llama-3.2-1B-Instruct-exl2
EXL2 quants of meta-llama/Llama-3.2-1B-Instruct by branch:
- 40 : 4.0 bits per weight
- 50 : 5.0 bits per weight
- 60 : 6.0 bits per weight
- 80 : 8.0 bits per weight
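To fetch a single quant, one can pass the branch name as the revision; the repo id below is assumed from this listing:

```python
from huggingface_hub import snapshot_download

# Download only the 6.0 bpw quant (branch "60") of the assumed repo.
snapshot_download(
    repo_id="grimjim/meta-llama-Llama-3.2-1B-Instruct-exl2",
    revision="60",
    local_dir="Llama-3.2-1B-Instruct-exl2-6.0bpw",
)
```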
Magnolia-v3-12B-8bpw_h8_exl3
llama-3-experiment-v1-9B
kunoichi-lemon-royale-7B
Llama-3-Luminurse-v0.1-OAS-8B
infinite-lemonade-SLERP-7B-GGUF
Mistral-7B-Instruct-demi-merge-v0.2-7B
kunoichi-squared-model_stock-7B
wizard-elem-to-32k-7B
Daichi-Instructed-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Task Arithmetic merge method using grimjim/gemma-3-12b-pt as a base. The following models were included in the merge:
- grimjim/gemma-3-12b-it
- Delta-Vector/Daichi-12B
The following YAML configuration was used to produce this model:
FranFran-Something-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- Delta-Vector/Francois-Huali-12B
- Delta-Vector/Francois-PE-V2-Huali-12B
The following YAML configuration was used to produce this model:
rogue-enchantress-32k-7B
Llama-3-Perky-Pat-Instruct-8B
BadApple-o1-Llama-3.1-8B
MagnaRei-v2-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- grimjim/Magnolia-v3-12B
- Delta-Vector/Rei-V2-12B
The following YAML configuration was used to produce this model:
Magnolia-v3-medis-remix-12B
llama-3-sthenic-porpoise-v1-8B
MagTie-v1-12B
kukulemon-spiked-9B
Kitsunebi-v1-Gemma2-8k-9B
MagnaMellRei-v1-12B
infinite-lemonade-SLERP-7B
koboldai-holodeck-extended-32k-7B
Llama-3-Instruct-abliteration-OVA-8B
Gemma2-Nephilim-v3-9B
franken-kunoichi-IDUS-11B
Llama-3-Steerpike-v1-OAS-8B
Llama-3.1-Instruct-abliterated-Nephilim_v3_via_adapter-8B
Magot-v3-Gemma2-8k-9B
Magnolia-v9-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Task Arithmetic merge method using grimjim/mistralai-Mistral-Nemo-Base-2407 as a base. The following models were included in the merge:
- grimjim/mistralai-Mistral-Nemo-Instruct-2407
- nbeerbower/Mistral-Nemo-Prism-12B
- inflatebot/MN-12B-Mag-Mell-R1
- grimjim/magnum-consolidatum-v1-12b
- grimjim/magnum-twilight-12b
The following YAML configuration was used to produce this model:
fireblossom-32K-7B-8.0bpw_h8_exl2
llama-3-merge-pp-instruct-8B
Llama-3-Instruct-demi-merge-8B
Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2
llama-3-Nephilim-v1-8B-6.5bpw_h6_exl2
gemma-3-12b-it
Llama-3-Instruct-abliteration-LoRA-8B
Llama-3.1-SuperNova-Lite-lorabilterated-8B
kunoichi-lemon-royale-v2-32K-7B
Llama-3-Oasis-v1-OAS-8B
Mistral-Nemo-Instruct-2407-12B-6.4bpw-exl2
madwind-wizard-7B
koboldai-erebus-extended-32k-7B
kukulemon-v3-soul_mix-32k-7B
Magnolia-Mell-v1-12B
This is a merge of pre-trained language models created using mergekit. An asymmetric gradient SLERP was used to lightly apply MN-12B-Mag-Mell-R1 to Magnolia-v3-12B. Tested for narrative text completion with temperature=1.0 and minP=0.02. Coherence is fairly high, though there may be occasional slips. If repetition is a problem, briefly raising the temperature may help; the model even appears to tolerate temperature=2.0. This model was merged using the SLERP merge method. The following models were included in the merge:
- inflatebot/MN-12B-Mag-Mell-R1
- grimjim/Magnolia-v3-12B
The following YAML configuration was used to produce this model:
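A minimal sketch of what an asymmetric gradient SLERP schedule could look like, separate from the YAML referenced above: a per-layer interpolation factor ramps up so the donor model is only lightly applied. The endpoints below are illustrative, not the actual recipe:

```python
import torch

def gradient_slerp_factors(num_layers: int, t_start: float = 0.0, t_end: float = 0.2) -> list[float]:
    """Per-layer SLERP factors: early layers stay close to the base model
    (Magnolia-v3-12B), later layers blend in more of the donor
    (MN-12B-Mag-Mell-R1). Hypothetical ramp, not the shipped config."""
    return torch.linspace(t_start, t_end, num_layers).tolist()
```

Each layer's tensors would then be interpolated with that layer's own factor, which is what makes the gradient asymmetric relative to a single global interpolation weight.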
MagSoup-v1-12B
llama-3-merge-virt-req-8B
lemon07r_Gemma-2-Ataraxy-v4c-9B_fixed
Magnolia-v10-12B
phi-4-MPOA-experiment
Mistral-Starling-merge-trial1-7B
kunoichi-lemon-royale-7B-8.0bpw_h8_exl2
Mistral-Starling-merge-trial3-7B
zephyr-beta-wizardLM-2-merge-7B
zephyr-wizard-kuno-royale-BF16-merge-7B
Llama-3-experimental-merge-trial1-8B
kukuspice-7B
Llama-Nephilim-Metamorphosis-v1-8B
Llama-3-Instruct-Nephilim-v3-LoRA-8B
Llama-3.1-Supernova-Lite-Instruct-merge-abliterated-8B
Magnolia-v4-Gemma2-8k-9B
MagnaRei-v1-12B
Magnolia-v6-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Task Arithmetic merge method using grimjim/mistralai-Mistral-Nemo-Base-2407 as a base. The following models were included in the merge:
- grimjim/magnum-twilight-12b
- grimjim/magnum-consolidatum-v1-12b
- Delta-Vector/Rei-V2-12B
- TheDrummer/Rocinante-12B-v1.1
- Nitral-AI/CaptainBMO-12B
- grimjim/mistralai-Mistral-Nemo-Instruct-2407
- nbeerbower/Mistral-Nemo-Prism-12B
The following YAML configuration was used to produce this model:
Magnolia-v7-12B
Magnolia-v3a-12B
kunoichi-lemon-royale-v2experiment1-32K-7B
This is a merge of pre-trained language models created using mergekit. The result appears to be a successful adaptation to the v0.3 tokenizer, with the resulting model being coherent, although there is some evident damage. This model was merged using the SLERP merge method. The following models were included in the merge:
- grimjim/mistralai-Mistral-7B-Instruct-v0.3
- grimjim/kunoichi-lemon-royale-v2ext-32K-7B
The following YAML configuration was used to produce this model: