grimjim

138 models

gemma-3-12b-it-abliterated-GGUF

626
2

Llama-3.1-8B-Instruct-abliterated_via_adapter-GGUF

base_model:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
467
29

kukulemon-7B-GGUF

license:cc-by-nc-4.0
308
2

gemma-3-12b-it-norm-preserved-biprojected-abliterated

Projected abliteration was applied in determining the refusal direction, along with a second round of removal of the projected contribution onto the harmless direction of the layer targeted for intervention. Additionally, instead of subtracting away the refusal direction in its entirety, only the directional component of the refusal direction was removed, preserving the norms of the layers subjected to intervention (see the sketch after this entry). The details of norm preservation can be found in the article on Norm-Preserving Biprojected Abliteration. The net result should further reduce model damage compared to prior attempts; no subsequent fine-tuning was applied to repair damage. This model refuses far less often than the original model, yet still retains awareness of safety and harms.

250
19
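Below is a minimal sketch of the norm-preserving step described above, assuming per-row ablation of a weight matrix; the function name and shapes are illustrative, not the author's actual code.

```python
import torch

def norm_preserving_ablate(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    # Illustrative sketch (not the author's code): remove only the component
    # of each row along the refusal direction, then rescale each row back to
    # its pre-intervention norm.
    r = refusal_dir / refusal_dir.norm()               # unit refusal direction
    orig_norms = weight.norm(dim=-1, keepdim=True)     # norms to preserve
    ablated = weight - (weight @ r).unsqueeze(-1) * r  # drop directional component only
    return ablated * (orig_norms / ablated.norm(dim=-1, keepdim=True).clamp_min(1e-8))
```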

Modicum-of-Doubt-v1-24B-GGUF

license:apache-2.0
217
0

Llama-3-Luminurse-v0.2-OAS-8B-GGUF

base_model:grimjim/Llama-3-Luminurse-v0.2-OAS-8B
168
2

Magrathic-12B-GGUF

license:cc-by-nc-4.0
135
0

Mistral-Small-3.2-24B-Instruct-2506

license:apache-2.0
119
0

Gemma 3 12b It Projection Abliterated

Projected abliteration has been applied; no subsequent fine-tuning was applied to repair damage. The net result is a model that refuses far less often, but still retains awareness of safety and harms. This model recently benched with a WC/10 rating of 9.8 on the UGI Leaderboard, tying for first place in compliance.

88
7

llama-3-Nephilim-v3-8B-GGUF

base_model:grimjim/llama-3-Nephilim-v3-8B
83
12

zephyr-wizard-kuno-royale-BF16-merge-7B-GGUF

license:cc-by-nc-4.0
69
2

mistralai-Mistral-Nemo-Instruct-2407

license:apache-2.0
60
0

gemma-3-12b-it-abliterated

60
0

fireblossom-32K-7B-GGUF

license:cc-by-nc-4.0
59
0

llama-3-Nephilim-v2.1-8B-GGUF

base_model:grimjim/llama-3-Nephilim-v2.1-8B
58
1

mistralai-Mistral-Nemo-Base-2407

license:apache-2.0
57
0

SauerHuatuoSkywork-o1-Llama-3.1-8B-GGUF

base_model:grimjim/SauerHuatuoSkywork-o1-Llama-3.1-8B
53
0

llama-3-Nephilim-v1-8B-GGUF

base_model:grimjim/llama-3-Nephilim-v1-8B
52
1

llama-3-Nephilim-v2-8B-GGUF

base_model:grimjim/llama-3-Nephilim-v2-8B
48
1

kukulemon-32K-7B-GGUF

license:cc-by-nc-4.0
45
1

kukulemon-spiked-9B-GGUF

license:cc-by-nc-4.0
44
4

gemma-3-12b-it-biprojected-abliterated

Projected abliteration has been applied in determining the refusal direction, along with a second round of removal of the projected contribution onto the harmless direction of the layer targeted for intervention, which should further reduce model damage (see the sketch after this entry); no subsequent fine-tuning was applied to repair damage. The net result is a model that refuses far less often than the original model, yet still retains awareness of safety and harms.

43
4
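A minimal sketch of the biprojection described above, assuming the refusal direction is estimated as a difference of mean activations; the helper name and inputs are hypothetical.

```python
import torch

def biprojected_refusal_direction(mean_harmful: torch.Tensor,
                                  mean_harmless: torch.Tensor) -> torch.Tensor:
    # Hypothetical sketch: start from the standard difference-of-means
    # refusal estimate, then remove its projected contribution onto the
    # harmless direction so ablation disturbs harmless behavior less.
    raw = mean_harmful - mean_harmless
    harmless = mean_harmless / mean_harmless.norm()
    refusal = raw - (raw @ harmless) * harmless
    return refusal / refusal.norm()
```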

Magnolia-v1-12B-GGUF

license:apache-2.0
38
1

Magnolia-Mell-v1-12B-GGUF

37
1

Kitsunebi-v1-Gemma2-8k-9B-GGUF

35
1

madwind-wizard-7B-GGUF

license:cc-by-nc-4.0
34
1

Magnolia-v3-medis-remix-12B-GGUF

license:apache-2.0
33
2

Mistral-7B-Instruct-demi-merge-v0.3-7B

license:apache-2.0
32
0

Llama-3-Luminurse-v0.1-OAS-8B-GGUF

base_model:grimjim/Llama-3-Luminurse-v0.1-OAS-8B
31
1

Llama-Nephilim-Metamorphosis-v1-8B-GGUF

base_model:grimjim/Llama-Nephilim-Metamorphosis-v1-8B
31
1

kunoichi-lemon-royale-v3-32K-7B-GGUF

license:cc-by-nc-4.0
30
3

magnum-consolidatum-v1-12b

license:apache-2.0
27
2

kunoichi-lemon-royale-v2-32K-7B-GGUF

license:cc-by-nc-4.0
25
4

kunoichi-lemon-royale-7B-GGUF

license:cc-by-nc-4.0
25
2

kalomaze_qwen2-7b-magpie300k_filtered_epoch2-GGUF

24
1

llama-3-experiment-v1-9B-GGUF

llama
24
0

Magnolia-v3b-12B-GGUF

license:apache-2.0
24
0

MagTie-v1-12B-GGUF

license:apache-2.0
22
4

kuno-kunoichi-v1-DPO-v2-SLERP-7B

license:cc-by-nc-4.0
18
4

Magot-v2-Gemma2-8k-9B-GGUF

18
1

mistralai-Mistral-7B-Instruct-v0.3

license:apache-2.0
17
3

rogue-enchantress-32k-7B-GGUF

license:cc-by-nc-4.0
14
1

kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF

license:cc-by-nc-4.0
13
2

Magot-v1-Gemma2-8k-9B-GGUF

12
1

kukulemon-7B

license:cc-by-nc-4.0
11
11

kukulemon-v3-soul_mix-32k-7B-GGUF

license:cc-by-nc-4.0
11
2

cuckoo-starling-32k-7B-GGUF

license:cc-by-nc-4.0
11
1

gemma-3-12b-pt

10
0

mistralai-Mistral-7B-v0.3

license:apache-2.0
9
3

llama-3-merge-pp-instruct-8B-GGUF

base_model:grimjim/llama-3-merge-pp-instruct-8B
9
1

Mistral-7B-Instruct-v0.2-8bit-abliterated-layer18

license:apache-2.0
9
0

Nemo-Instruct-2407-MPOA-v2-12B

license:apache-2.0
7
1

Modicum-of-Doubt-v1-24B-4bpw-h6-exl3

This is a quant of a merge of pre-trained language models created using mergekit. Exllamav3 was used to create a 4bpw quant with h6. With 16GB VRAM, it's possible to run 16K context at fp16 with some room to spare. The vision component was excised from all merge contributions. Creative text generation outputs seem to trend toward the short side, sometimes to the point of feeling choppy, hence the model name. The model is not the most stellar, but the result is interesting, going against the individual tendency of the two contributing models toward longer outputs. Tested sampler settings: temperature 1.0, minP 0.02. This model was merged using the Task Arithmetic merge method (see the sketch after this entry), using mrfakename/mistral-small-3.1-24b-base-2503-hf as a base. The following models were included in the merge: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond and PocketDoc/Dans-PersonalityEngine-V1.3.0-24b. The following YAML configuration was used to produce this model:

exllamav3
7
0
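For readers unfamiliar with the Task Arithmetic merge method named above, here is a minimal sketch: each fine-tune contributes its delta from the shared base, scaled by a weight. The real merge was produced with mergekit, and the actual YAML configuration is not reproduced in this listing.

```python
import torch

def task_arithmetic_merge(base: dict, finetunes: list[dict], weights: list[float]) -> dict:
    # Illustrative sketch of task arithmetic:
    #   merged = base + sum_i( weight_i * (finetune_i - base) )
    merged = {}
    for name, w_base in base.items():
        delta = sum(w * (ft[name] - w_base) for ft, w in zip(finetunes, weights))
        merged[name] = w_base + delta
    return merged
```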

kunoichi-lemon-royale-v3-32K-7B

license:cc-by-nc-4.0
6
5

SauerHuatuoSkyworkDeepWatt-o1-Llama-3.1-8B

llama
6
2

Nemo-2407-Based-Instruct-DeLERP-0.7-12B

license:apache-2.0
6
0

Qwen_Qwen3-8B-exl2

exllamav2
6
0

Llama-3-Luminurse-v0.2-OAS-8B

llama
5
6

llama-3-nvidia-ChatQA-1.5-8B

llama
5
4

cuckoo-starling-32k-7B

license:cc-by-nc-4.0
5
3

Magrathic-12B

license:cc-by-nc-4.0
5
1

Magnolia-v3-medis-dilute-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method (see the sketch after this entry). The following models were included in the merge: grimjim/Magnolia-v3-12B and grimjim/Magnolia-v3-medis-remix-12B. The following YAML configuration was used to produce this model:

license:apache-2.0
5
1
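The SLERP merge method named above interpolates along the arc between two weight tensors rather than along the straight line between them. A generic sketch follows; mergekit's actual implementation additionally handles per-layer schedules and other edge cases.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two tensors, treated as vectors.
    a_flat, b_flat = a.flatten(), b.flatten()
    cos = torch.clamp(torch.dot(a_flat, b_flat) /
                      (a_flat.norm() * b_flat.norm() + eps), -1.0, 1.0)
    omega = torch.acos(cos)                     # angle between the tensors
    if omega.abs() < 1e-4:                      # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```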

Mistral-7B-Instruct-v0.2-8bit-abliterated

This model was abliterated by computing a refusal vector on an 8-bit bitsandbytes quant, and then applying the vector to the full-weight model (an illustrative sketch follows this entry). Abliteration was performed locally on a CUDA GPU; VRAM consumption appeared to stay under 12GB. No additional fine-tuning was performed on these weights. Repair is required for proper use. The code used can be found on GitHub at https://github.com/jim-plus/llm-abliteration.

license:apache-2.0
5
1
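A hedged sketch of the refusal-vector computation described above: the difference of mean hidden states between harmful and harmless prompts at one layer. The actual code lives in the linked llm-abliteration repository; the helper below is only illustrative and assumes a transformers-style model (which could be loaded as an 8-bit bitsandbytes quant, as described).

```python
import torch

@torch.no_grad()
def refusal_vector(model, tokenizer, harmful_prompts, harmless_prompts, layer: int) -> torch.Tensor:
    # Illustrative only; not the repository's exact implementation.
    def mean_hidden(prompts):
        states = []
        for p in prompts:
            ids = tokenizer(p, return_tensors="pt").to(model.device)
            out = model(**ids, output_hidden_states=True)
            states.append(out.hidden_states[layer][0, -1])  # last-token state
        return torch.stack(states).mean(dim=0)
    v = mean_hidden(harmful_prompts) - mean_hidden(harmless_prompts)
    return v / v.norm()
```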

Modicum-of-Doubt-v1-24B

license:apache-2.0
5
0

magnum-twilight-12b

license:apache-2.0
4
8

fireblossom-32K-7B

license:cc-by-nc-4.0
4
3

kukulemon-32K-7B

license:cc-by-nc-4.0
4
1

lemonade-rebase-32k-7B-GGUF

license:cc-by-4.0
4
1

Magnolia-v3b-12B

license:apache-2.0
4
1

meta-llama-Llama-3.2-1B-Instruct-exl2

EXL2 quants of meta-llama/Llama-3.2-1B-Instruct, by branch (a download example follows this entry):
- 40: 4.0 bits per weight
- 50: 5.0 bits per weight
- 60: 6.0 bits per weight
- 80: 8.0 bits per weight

base_model:meta-llama/Llama-3.2-1B-Instruct
4
0
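Since each quant lives on its own branch, a specific bitrate can be fetched by passing the branch name as the revision; a minimal example with huggingface_hub, with the repo id assumed from this listing:

```python
from huggingface_hub import snapshot_download

# Fetch the 6.0 bits-per-weight quant by using its branch name as `revision`.
path = snapshot_download(
    repo_id="grimjim/meta-llama-Llama-3.2-1B-Instruct-exl2",  # assumed repo id
    revision="60",
)
print(path)
```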

Magnolia-v3-12B-8bpw_h8_exl3

exllama3
4
0

llama-3-experiment-v1-9B

llama
3
5

kunoichi-lemon-royale-7B

license:cc-by-nc-4.0
3
3

Llama-3-Luminurse-v0.1-OAS-8B

llama
3
3

infinite-lemonade-SLERP-7B-GGUF

license:cc-by-4.0
3
1

Mistral-7B-Instruct-demi-merge-v0.2-7B

license:apache-2.0
3
1

kunoichi-squared-model_stock-7B

license:cc-by-nc-4.0
3
1

wizard-elem-to-32k-7B

license:apache-2.0
3
0

Daichi-Instructed-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Task Arithmetic merge method, using grimjim/gemma-3-12b-pt as a base. The following models were included in the merge: grimjim/gemma-3-12b-it and Delta-Vector/Daichi-12B. The following YAML configuration was used to produce this model:

3
0

FranFran-Something-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Delta-Vector/Francois-Huali-12B and Delta-Vector/Francois-PE-V2-Huali-12B. The following YAML configuration was used to produce this model:

3
0

rogue-enchantress-32k-7B

license:cc-by-nc-4.0
2
9

Llama-3-Perky-Pat-Instruct-8B

llama
2
4

BadApple-o1-Llama-3.1-8B

llama
2
1

MagnaRei-v2-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: grimjim/Magnolia-v3-12B and Delta-Vector/Rei-V2-12B. The following YAML configuration was used to produce this model:

license:apache-2.0
2
1

Magnolia-v3-medis-remix-12B

license:apache-2.0
2
1

llama-3-sthenic-porpoise-v1-8B

llama
2
0

MagTie-v1-12B

license:apache-2.0
1
4

kukulemon-spiked-9B

license:cc-by-nc-4.0
1
3

Kitsunebi-v1-Gemma2-8k-9B

1
3

MagnaMellRei-v1-12B

license:apache-2.0
1
3

infinite-lemonade-SLERP-7B

1
2

koboldai-holodeck-extended-32k-7B

license:apache-2.0
1
2

Llama-3-Instruct-abliteration-OVA-8B

llama
1
2

Gemma2-Nephilim-v3-9B

1
2

franken-kunoichi-IDUS-11B

license:cc-by-nc-4.0
1
1

Llama-3-Steerpike-v1-OAS-8B

llama
1
1

Llama-3.1-Instruct-abliterated-Nephilim_v3_via_adapter-8B

llama
1
1

Magot-v3-Gemma2-8k-9B

1
1

Magnolia-v9-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Task Arithmetic merge method, using grimjim/mistralai-Mistral-Nemo-Base-2407 as a base. The following models were included in the merge: grimjim/mistralai-Mistral-Nemo-Instruct-2407, nbeerbower/Mistral-Nemo-Prism-12B, inflatebot/MN-12B-Mag-Mell-R1, grimjim/magnum-consolidatum-v1-12b, and grimjim/magnum-twilight-12b. The following YAML configuration was used to produce this model:

1
1

fireblossom-32K-7B-8.0bpw_h8_exl2

license:cc-by-nc-4.0
1
0

llama-3-merge-pp-instruct-8B

llama
1
0

Llama-3-Instruct-demi-merge-8B

llama
1
0

Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2

llama
1
0

llama-3-Nephilim-v1-8B-6.5bpw_h6_exl2

llama
1
0

gemma-3-12b-it

1
0

Llama-3-Instruct-abliteration-LoRA-8B

base_model:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
0
9

Llama-3.1-SuperNova-Lite-lorabilterated-8B

llama
0
7

kunoichi-lemon-royale-v2-32K-7B

license:cc-by-nc-4.0
0
5

Llama-3-Oasis-v1-OAS-8B

llama
0
5

Mistral-Nemo-Instruct-2407-12B-6.4bpw-exl2

license:apache-2.0
0
5

madwind-wizard-7B

license:cc-by-nc-4.0
0
4

koboldai-erebus-extended-32k-7B

license:apache-2.0
0
4

kukulemon-v3-soul_mix-32k-7B

license:cc-by-nc-4.0
0
4

Magnolia-Mell-v1-12B

This is a merge of pre-trained language models created using mergekit. An asymmetric gradient SLERP was used to lightly apply MN-12B-Mag-Mell-R1 to Magnolia-v3-12B (a hypothetical schedule sketch follows this entry). Tested for narrative text completion with temperature=1.0 and minP=0.02. Coherence is fairly high, though there may be occasional slips. If repetition is a problem, briefly raising the temperature may help; the model even appears to tolerate temperature=2.0. This model was merged using the SLERP merge method. The following models were included in the merge: inflatebot/MN-12B-Mag-Mell-R1 and grimjim/Magnolia-v3-12B. The following YAML configuration was used to produce this model:

0
4
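A sketch of what a "gradient" SLERP could look like: the interpolation factor varies per layer so the secondary model is only lightly applied. The schedule below is hypothetical and is not the actual gradient used for this merge.

```python
import numpy as np

def gradient_slerp_schedule(num_layers: int, peak: float = 0.2) -> np.ndarray:
    # Hypothetical per-layer interpolation factors: small at both ends of
    # the stack, peaking in the middle, so the secondary model's influence
    # stays light overall.
    x = np.linspace(0.0, 1.0, num_layers)
    return peak * np.sin(np.pi * x) ** 2

print(gradient_slerp_schedule(num_layers=40).round(3))
```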

MagSoup-v1-12B

0
3

llama-3-merge-virt-req-8B

llama
0
2

lemon07r_Gemma-2-Ataraxy-v4c-9B_fixed

0
2

Magnolia-v10-12B

0
2

phi-4-MPOA-experiment

0
1

Mistral-Starling-merge-trial1-7B

license:apache-2.0
0
1

kunoichi-lemon-royale-7B-8.0bpw_h8_exl2

license:cc-by-nc-4.0
0
1

Mistral-Starling-merge-trial3-7B

license:apache-2.0
0
1

zephyr-beta-wizardLM-2-merge-7B

license:apache-2.0
0
1

zephyr-wizard-kuno-royale-BF16-merge-7B

license:cc-by-nc-4.0
0
1

Llama-3-experimental-merge-trial1-8B

llama
0
1

kukuspice-7B

license:cc-by-nc-4.0
0
1

Llama-Nephilim-Metamorphosis-v1-8B

llama
0
1

Llama-3-Instruct-Nephilim-v3-LoRA-8B

base_model:grimjim/llama-3-Nephilim-v3-8B
0
1

Llama-3.1-Supernova-Lite-Instruct-merge-abliterated-8B

llama
0
1

Magnolia-v4-Gemma2-8k-9B

0
1

MagnaRei-v1-12B

license:apache-2.0
0
1

Magnolia-v6-12B

This is a merge of pre-trained language models created using mergekit. This model was merged using the Task Arithmetic merge method, using grimjim/mistralai-Mistral-Nemo-Base-2407 as a base. The following models were included in the merge: grimjim/magnum-twilight-12b, grimjim/magnum-consolidatum-v1-12b, Delta-Vector/Rei-V2-12B, TheDrummer/Rocinante-12B-v1.1, Nitral-AI/CaptainBMO-12B, grimjim/mistralai-Mistral-Nemo-Instruct-2407, and nbeerbower/Mistral-Nemo-Prism-12B. The following YAML configuration was used to produce this model:

license:apache-2.0
0
1

Magnolia-v7-12B

license:apache-2.0
0
1

Magnolia-v3a-12B

license:apache-2.0
0
1

kunoichi-lemon-royale-v2experiment1-32K-7B

This is a merge of pre-trained language models created using mergekit. The result appears to be a successful adaptation to the v0.3 tokenizer, with the resulting model being coherent, although there is some evident damage. This model was merged using the SLERP merge method. The following models were included in the merge: grimjim/mistralai-Mistral-7B-Instruct-v0.3 and grimjim/kunoichi-lemon-royale-v2ext-32K-7B. The following YAML configuration was used to produce this model:

license:cc-by-nc-4.0
0
1

kunoichi-lemon-royale-hamansu-v1-32k-7B

license:cc-by-nc-4.0
0
1

kunoichi-lemon-royale-v2experiment2-32K-7B

license:cc-by-nc-4.0
0
1

Magnolia-v8-12B

0
1