djuna

86 models • 15 total models in database

jina-embeddings-v2-small-en-Q5_K_M-GGUF (llama-cpp) · 199 downloads · 0 likes

MN-Chinofun-12B-4.1-Q6_K-GGUF (llama-cpp) · 109 downloads · 1 like

ReWiz-Llama-3.2-3B-fix-config (llama) · 26 downloads · 0 likes

jina-embeddings-v2-base-en-Q5_K_M-GGUF (llama-cpp) · 16 downloads · 2 likes

Gemma-2-gemmama-9b-Q4_K_S-GGUF (llama-cpp) · 11 downloads · 1 like

Q3-IIJAN-3B-Q8_0-GGUF (llama-cpp) · 11 downloads · 0 likes

djuna/Q3-IIJAN-3B-Q8_0-GGUF was converted to GGUF format from `djuna-test-lab/Q3-IIJAN-3B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
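The llama.cpp workflow these GGUF cards describe can be sketched as follows. The GGUF file name passed to `--hf-file` is a placeholder (the exact file name is not listed here), and the build commands are shown as comments since they need a toolchain and network access:

```shell
# Install llama.cpp via Homebrew (works on macOS and Linux):
#   brew install llama.cpp
# Or build from source with LLAMA_CURL=1 so llama-cli can fetch models from
# the Hugging Face Hub (add hardware flags, e.g. LLAMA_CUDA=1 for NVIDIA GPUs):
#   git clone https://github.com/ggml-org/llama.cpp
#   cd llama.cpp && make LLAMA_CURL=1

# Run inference directly from the Hub repo (file name is a placeholder):
REPO="djuna/Q3-IIJAN-3B-Q8_0-GGUF"
CMD="llama-cli --hf-repo $REPO --hf-file q3-iijan-3b-q8_0.gguf -p 'Hello'"
echo "$CMD"
```

Swapping `llama-cli` for `llama-server` serves the same checkpoint over an OpenAI-compatible HTTP endpoint instead of a one-shot prompt.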

jina-embeddings-v2-base-code-Q5_K_M-GGUF (llama-cpp) · 10 downloads · 1 like

MN-Chinofun-12B-4-Q6_K-GGUF (llama-cpp) · 10 downloads · 1 like

TEST2-Q2.5-Lenned-14B-Q5_K_M-GGUF (llama-cpp) · 10 downloads · 1 like

G2-GSHT-32K-Q6_K-GGUF (llama-cpp) · 10 downloads · 0 likes

Q2.5-Veltha-14B-0.5 · 8 downloads · 11 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the della_linear merge method with arcee-ai/SuperNova-Medius as a base. The following models were included in the merge:

- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
- allura-org/TQ2.5-14B-Aletheia-v1
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
- v000000/Qwen2.5-Lumen-14B

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 39.96 |
| IFEval (0-Shot)     | 77.96 |
| BBH (3-Shot)        | 50.32 |
| MATH Lvl 5 (4-Shot) | 33.84 |
| GPQA (0-shot)       | 15.77 |
| MuSR (0-shot)       | 14.17 |
| MMLU-PRO (5-shot)   | 47.72 |
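The mergekit workflow behind these merge cards follows a common shape: write a YAML config naming the method, base, and source models, then run the `mergekit-yaml` CLI. The config below is an illustrative sketch only; the actual configuration lives in the original model card, and the weight and density values here are hypothetical placeholders:

```shell
# Write an illustrative della_linear mergekit config (NOT the actual one used;
# the weight/density parameters below are hypothetical placeholders):
cat > config.yaml <<'EOF'
merge_method: della_linear
base_model: arcee-ai/SuperNova-Medius
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
    parameters: {weight: 0.3, density: 0.5}
  - model: v000000/Qwen2.5-Lumen-14B
    parameters: {weight: 0.3, density: 0.5}
dtype: bfloat16
EOF

# Run the merge with mergekit's CLI (pip install mergekit):
#   mergekit-yaml config.yaml ./merged-model
grep 'merge_method' config.yaml
```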

L3.1-Boshima-b-FIX-calc-Q5_K_M-GGUF (llama-cpp) · 8 downloads · 0 likes

L3.1-Noraian-Q5_K_M-GGUF (llama-cpp) · 8 downloads · 0 likes

TEST3-Q2.5-Lenned-14B-Q5_K_M-GGUF (llama-cpp) · 8 downloads · 0 likes

L3.1-Suze-Vume-calc-Q5_K_M-GGUF (llama-cpp) · 7 downloads · 1 like

Gemma-2-gemmama-9b-Q5_K_M-GGUF (llama-cpp) · 7 downloads · 1 like

stella-base-en-v2-Q5_K_M-GGUF (llama-cpp) · 7 downloads · 0 likes

G2-Noranum-27B-Q3_K_S-GGUF (llama-cpp) · 7 downloads · 0 likes

MN-Chinofun-12B-4.1 · 6 downloads · 5 likes

Gemma-2-gemmama-9b · 6 downloads · 3 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the DARE TIES merge method with IlyaGusev/gemma-2-9b-it-abliterated as a base. The following models were included in the merge:

- crestf411/gemstone-9b
- BAAI/Gemma2-9B-IT-Simpo-Infinity-Preference
- lemon07r/Gemma-2-Ataraxy-9B

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 25.54 |
| IFEval (0-Shot)     | 77.03 |
| BBH (3-Shot)        | 32.92 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       | 11.41 |
| MuSR (0-shot)       |  8.46 |
| MMLU-PRO (5-shot)   | 23.44 |

L3.1-ForStHS (llama) · 6 downloads · 3 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with vicgalle/Configurable-Llama-3.1-8B-Instruct as a base. The following models were included in the merge:

- rityak/L3.1-FormaxGradient
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1 + grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- DreadPoor/HeartStolen-8B-ModelStock

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 28.00 |
| IFEval (0-Shot)     | 78.13 |
| BBH (3-Shot)        | 31.39 |
| MATH Lvl 5 (4-Shot) | 12.92 |
| GPQA (0-shot)       |  5.48 |
| MuSR (0-shot)       |  9.66 |
| MMLU-PRO (5-shot)   | 30.39 |

MN-Chinofun-12B-2 · 6 downloads · 3 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2 as a base. The following models were included in the merge:

- grimjim/magnum-consolidatum-v1-12b
- spow12/ChatWaifu_v1.4
- GalrionSoftworks/Canidori-12B-v1
- Nohobby/MN-12B-Siskin-v0.2
- RozGrov/NemoDori-v0.2.2-12B-MN-ties

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 25.37 |
| IFEval (0-Shot)     | 61.71 |
| BBH (3-Shot)        | 29.53 |
| MATH Lvl 5 (4-Shot) | 11.18 |
| GPQA (0-shot)       |  7.38 |
| MuSR (0-shot)       | 13.35 |
| MMLU-PRO (5-shot)   | 29.06 |

L3.1-Purosani-Q5_K_M-GGUF (llama-cpp) · 6 downloads · 1 like

Qwen2-1.5B-Instruct-orpo-Q8_0-GGUF (llama-cpp) · 6 downloads · 0 likes

G2-GSHT-Q4_K_S-GGUF (llama-cpp) · 6 downloads · 0 likes

Q2.5-Partron-7B-Q5_K_M-GGUF (llama-cpp) · 6 downloads · 0 likes

MN-Chinofun-12B-4-4bit · 6 downloads · 0 likes

MN-Chinofun-12B-3 · 5 downloads · 2 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3 as a base. The following models were included in the merge:

- grimjim/magnum-twilight-12b
- Nohobby/MN-12B-Siskin-v0.2
- RozGrov/NemoDori-v0.2.2-12B-MN-ties
- spow12/ChatWaifu_v1.4
- GalrionSoftworks/Canidori-12B-v1

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 18.16 |
| IFEval (0-Shot)     | 30.53 |
| BBH (3-Shot)        | 34.22 |
| MATH Lvl 5 (4-Shot) |  8.69 |
| GPQA (0-shot)       |  2.13 |
| MuSR (0-shot)       | 10.91 |
| MMLU-PRO (5-shot)   | 22.51 |

L3.1-Suze-Vume-2-calc (llama) · 5 downloads · 1 like

DeepSeek-R1-Distill-Qwen-14B-abliterated-remap · 5 downloads · 1 like

Artigenz-Coder-DS-6.7B-Q5_K_M-GGUF (llama-cpp) · 5 downloads · 0 likes

MN-Chinofun-12B-4 · 4 downloads · 4 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3 as a base. The following models were included in the merge:

- spow12/ChatWaifu_v1.4
- GalrionSoftworks/Canidori-12B-v1
- RozGrov/NemoDori-v0.2.2-12B-MN-ties
- grimjim/magnum-twilight-12b
- Nitral-AI/WayfarerErisNoctis-12B

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.26 |
| IFEval (0-Shot)     | 54.04 |
| BBH (3-Shot)        | 34.17 |
| MATH Lvl 5 (4-Shot) | 10.35 |
| GPQA (0-shot)       |  6.04 |
| MuSR (0-shot)       | 13.23 |
| MMLU-PRO (5-shot)   | 27.75 |

L3.1-ForStHS-Q5_K_M-GGUF (llama-cpp) · 4 downloads · 1 like

MN-Chinofun-Q4_K_M-GGUF (llama-cpp) · 4 downloads · 1 like

L3.1-Promissum_Mane-8B-Della-calc-Q5_K_M-GGUF (llama-cpp) · 4 downloads · 1 like

G2-GSHT · 4 downloads · 0 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the SLERP merge method. The following models were included in the merge:

- TheDrummer/Gemmasutra-9B-v1
- Nekuromento/Hematoma-Gemma-Model-Stock-9B

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.95 |
| IFEval (0-Shot)     | 56.30 |
| BBH (3-Shot)        | 30.99 |
| MATH Lvl 5 (4-Shot) |  3.17 |
| GPQA (0-shot)       | 10.07 |
| MuSR (0-shot)       |  8.17 |
| MMLU-PRO (5-shot)   | 23.00 |
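SLERP in these cards is spherical linear interpolation applied pairwise to the two models' weight tensors. For flattened weight vectors p and q separated by angle θ, with interpolation factor t in [0, 1]:

```latex
% Spherical linear interpolation (SLERP) between weight vectors p and q,
% where theta is the angle between p and q and t is in [0, 1]:
\operatorname{slerp}(p, q; t)
  = \frac{\sin\bigl((1 - t)\,\theta\bigr)}{\sin\theta}\, p
  + \frac{\sin(t\,\theta)}{\sin\theta}\, q
```

At t = 0 this returns p and at t = 1 it returns q, with intermediate points lying on the arc between them rather than on the straight chord a plain linear average would follow.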

L3.1-15B-EtherealMaid-t0.0001-alpha-Q4_K_S-GGUF (llama-cpp) · 4 downloads · 0 likes

L3.1-Purosani-1.5-8B-Q5_K_M-GGUF (llama-cpp) · 4 downloads · 0 likes

MN-Miuryra-18B-Q3_K_M-GGUF (llama-cpp) · 4 downloads · 0 likes

MN-Lulanum-12B-FIX-Q5_K_M-GGUF (llama-cpp) · 4 downloads · 0 likes

mergekit-linear-lyapgfy-Q5_K_M-GGUF (llama-cpp) · 4 downloads · 0 likes

mergekit-dare_linear-xaazcaj-Q5_K_M-GGUF (llama-cpp) · 4 downloads · 0 likes

mergekit-della_linear-qaxucjoo-Q5_K_M-GGUF (llama-cpp) · 4 downloads · 0 likes

Q2.5-Fuetron-7B-Q6_K-GGUF (llama-cpp) · 4 downloads · 0 likes

Q2.5-Veltha-14B · 3 downloads · 11 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the della_linear merge method with qwen/Qwen2.5-14b as a base. The following models were included in the merge:

- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
- v000000/Qwen2.5-Lumen-14B
- arcee-ai/SuperNova-Medius
- allura-org/TQ2.5-14B-Aletheia-v1

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 39.21 |
| IFEval (0-Shot)     | 82.92 |
| BBH (3-Shot)        | 49.75 |
| MATH Lvl 5 (4-Shot) | 28.02 |
| GPQA (0-shot)       | 14.54 |
| MuSR (0-shot)       | 12.26 |
| MMLU-PRO (5-shot)   | 47.76 |

L3.1-Noraian (llama) · 3 downloads · 3 likes

L3.1-Promissum_Mane-8B-Della-1.5-calc (llama) · 3 downloads · 2 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the della merge method with unsloth/Meta-Llama-3.1-8B as a base. The following models were included in the merge:

- DreadPoor/SpeiMeridiem-8B-modelstock
- DreadPoor/Aspire1.1-8B-modelstock
- DreadPoor/HeartStolen1.1-8B-ModelStock

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.18 |
| IFEval (0-Shot)     | 72.35 |
| BBH (3-Shot)        | 34.88 |
| MATH Lvl 5 (4-Shot) | 13.97 |
| GPQA (0-shot)       |  8.61 |
| MuSR (0-shot)       | 13.03 |
| MMLU-PRO (5-shot)   | 32.26 |

Q2.5-Fuppavy-7B · 3 downloads · 2 likes

L3.1-8B-RPGramMax-Q5_K_M-GGUF (llama-cpp) · 3 downloads · 1 like

L3.1-Promissum_Mane-8B-Della-calc (llama) · 3 downloads · 1 like

This is a merge of pre-trained language models created using mergekit. It was merged using the della merge method with unsloth/Meta-Llama-3.1-8B as a base. The following models were included in the merge:

- DreadPoor/SpeiMeridiem-8B-modelstock
- DreadPoor/HeartStolen1.1-8B-ModelStock
- DreadPoor/Aspire1.1-8B-modelstock

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.42 |
| IFEval (0-Shot)     | 54.42 |
| BBH (3-Shot)        | 35.55 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       |  6.60 |
| MuSR (0-shot)       | 12.81 |
| MMLU-PRO (5-shot)   | 31.13 |

MN-Miuryra-18B · 3 downloads · 1 like

Q2.5-Veltha-14B-0.5-Q5_K_M-GGUF (llama-cpp) · 3 downloads · 1 like

djuna/Q2.5-Veltha-14B-0.5-Q5_K_M-GGUF was converted to GGUF format from `djuna/Q2.5-Veltha-14B-0.5` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Usage with llama.cpp is the same as for the other GGUF conversions above: install llama.cpp through brew (works on Mac and Linux), or build it from source with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

TEST3-Q2.5-Lenned-14B · 3 downloads · 1 like

Qwen2.5-7B-Anvita-Q4_K_M-GGUF (llama-cpp) · 3 downloads · 0 likes

GSHT-GEMMAMA-16B-Q3_K_M-GGUF (llama-cpp) · 3 downloads · 0 likes

Q2.5-Fuppavy-7B-Q5_K_M-GGUF (llama-cpp) · 3 downloads · 0 likes

MT-Gen3-gemma-2-9B-Flip-Q5_K_M-GGUF (llama-cpp) · 3 downloads · 0 likes

L3.1-Romes-Ninomos (llama) · 2 downloads · 3 likes

MN-Chinofun · 2 downloads · 3 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1 as a base. The following models were included in the merge:

- RozGrov/NemoDori-v0.2.2-12B-MN-ties
- spow12/ChatWaifu_v1.4
- Nohobby/MN-12B-Siskin-v0.2
- GalrionSoftworks/Canidori-12B-v1

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.26 |
| IFEval (0-Shot)     | 61.10 |
| BBH (3-Shot)        | 28.48 |
| MATH Lvl 5 (4-Shot) | 10.50 |
| GPQA (0-shot)       |  6.15 |
| MuSR (0-shot)       | 10.38 |
| MMLU-PRO (5-shot)   | 28.92 |

L3.1-Purosani-2-8B (llama) · 2 downloads · 3 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the della_linear merge method with unsloth/Meta-Llama-3.1-8B as a base. The following models were included in the merge:

- hf-100/Llama-3-Spellbound-Instruct-8B-0.3
- arcee-ai/Llama-3.1-SuperNova-Lite + grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- THUDM/LongWriter-llama3.1-8b + ResplendentAI/SmartsLlama3
- djuna/L3.1-Suze-Vume-2-calc
- djuna/L3.1-ForStHS + Blackroot/Llama-3-8B-Abomination-LORA

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.85 |
| IFEval (0-Shot)     | 49.88 |
| BBH (3-Shot)        | 31.39 |
| MATH Lvl 5 (4-Shot) | 10.12 |
| GPQA (0-shot)       |  6.82 |
| MuSR (0-shot)       |  8.30 |
| MMLU-PRO (5-shot)   | 30.57 |

DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-remap · 2 downloads · 2 likes

L3.1-Suze-Vume-calc (llama) · 2 downloads · 1 like

This is a merge of pre-trained language models created using mergekit. It was merged using the linear DARE merge method with Orenguteng/Llama-3.1-8B-Lexi-Uncensored as a base. The following models were included in the merge:

- djuna/L3-Suze-Vume

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 25.75 |
| IFEval (0-Shot)     | 72.97 |
| BBH (3-Shot)        | 31.14 |
| MATH Lvl 5 (4-Shot) |  9.89 |
| GPQA (0-shot)       |  4.25 |
| MuSR (0-shot)       |  8.30 |
| MMLU-PRO (5-shot)   | 27.94 |

TEST-Ocerus-7B-Q5_K_M-GGUF (llama-cpp) · 2 downloads · 1 like

MS-Nudion-22B · 2 downloads · 1 like

This is a merge of pre-trained language models created using mergekit. It was merged using the SLERP merge method. The following models were included in the merge:

- SteelSkull/MSM-MS-Cydrion-22B
- knifeayumu/Cydonia-v1.3-Magnum-v4-22B

The YAML configuration used to produce this model can be found in the original model card.

Q2.5-Veltha-14B-Q5_K_M-GGUF (llama-cpp) · 2 downloads · 1 like

TEST-Q2.5-Lenned-14B · 2 downloads · 1 like

G2-BigGSHT-27B-2 · 2 downloads · 0 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the Model Stock merge method with AALF/gemma-2-27b-it-SimPO-37K-100steps as a base. The following models were included in the merge:

- unsloth/gemma-2-27b-it
- djuna/G2-BigGSHT-27B-calc

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 32.13 |
| IFEval (0-Shot)     | 79.74 |
| BBH (3-Shot)        | 48.81 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       | 15.10 |
| MuSR (0-shot)       |  9.93 |
| MMLU-PRO (5-shot)   | 39.20 |

G2-Noranum-27B · 2 downloads · 0 likes

DeepSeek-R1-0528-Qwen3-8B-remap · 2 downloads · 0 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the Linear merge method with deepseek-ai/DeepSeek-R1-0528-Qwen3-8B as a base. The following models were included in the merge:

- Qwen/Qwen3-8B

The YAML configuration used to produce this model can be found in the original model card.

L3.1-gramamax (llama) · 1 download · 2 likes

L3.1-RPGramMax-2.5 (llama) · 1 download · 2 likes

L3.1-Boshima-b-FIX-calc (llama) · 1 download · 2 likes

TEST-OcerusBeam-7B-Q5_K_M-GGUF (llama-cpp) · 1 download · 1 like

Qwen2-2B-RHSD-nulled · 1 download · 0 likes

Cathallama-70B-128K (llama) · 1 download · 0 likes

Q2.5-Partron-7B · 1 download · 0 likes

This is a merge of pre-trained language models created using mergekit. It was merged using the della merge method with djuna/Q2.5-Fuppavy-7B as a base. The following models were included in the merge:

- Locutusque/StockQwen-2.5-7B
- fblgit/cybertron-v4-qw7B-MGS
- happzy2633/qwen2.5-7b-ins-v3

The YAML configuration used to produce this model can be found in the original model card.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 27.08 |
| IFEval (0-Shot)     | 73.21 |
| BBH (3-Shot)        | 35.26 |
| MATH Lvl 5 (4-Shot) |  0.08 |
| GPQA (0-shot)       |  6.38 |
| MuSR (0-shot)       | 11.07 |
| MMLU-PRO (5-shot)   | 36.47 |

qwen2.5-11B-Mzy-Q4_K_M-GGUF (llama-cpp) · 1 download · 0 likes

Q2.5-Veltha-14B-0.5-AWQ-4bit · 1 download · 0 likes

TEST2-Q2.5-Lenned-14B · 0 downloads · 4 likes

G2-GSHT-32K · 0 downloads · 1 like

L3.1-Purosani (llama) · 0 downloads · 1 like

L3.1-RPganoff-8B-B (llama) · 0 downloads · 1 like

L3.1-Purosani-1.5-8B (llama) · 0 downloads · 1 like

MN-Lulanum-12B-FIX · 0 downloads · 1 like

G2-Nowing-9B · 0 downloads · 1 like

This is a merge of pre-trained language models created using mergekit. It was merged using the SLERP merge method. The following models were included in the merge:

- zelk12/MT-Merge2-MU-gemma-2-MTg2MT1g2-9B
- allknowingroger/GemmaSlerp5-10B

The YAML configuration used to produce this model can be found in the original model card.

G2-Nowing-9B-32K-YS · 0 downloads · 1 like