djuna-test-lab

20 models

Q3-IIJAN-4B

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

- janhq/Jan-v1-4B
- Intelligent-Internet/II-Search-CIR-4B

The following YAML configuration was used to produce this model:
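SLERP (spherical linear interpolation) blends two models along the great-circle arc between their weight vectors instead of the straight line, which preserves the magnitude of the interpolated parameters. A minimal single-tensor NumPy sketch of the idea — the function name and flattened-tensor scope are illustrative assumptions, not mergekit's actual layer-wise implementation:

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors (sketch)."""
    a_flat, b_flat = a.ravel().astype(float), b.ravel().astype(float)
    # Angle between the two weight directions, from their normalized copies.
    a_dir = a_flat / (np.linalg.norm(a_flat) + eps)
    b_dir = b_flat / (np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(np.dot(a_dir, b_dir), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = np.sin(omega)
    out = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```

In mergekit the interpolation factor `t` can vary per layer and per tensor type, as set in the (omitted) YAML configuration.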

mergekit-linear-jyaduup

Qwen2.5Plus2-0.5B-Instruct

Qwen2.5-0.5B-Instruct-ThreeFourths

TEST-L3.2-ReWish-3B

This is a merge of pre-trained language models created using mergekit. This model was merged using the linear DARE merge method, with unsloth/Llama-3.2-3B as a base. The following models were included in the merge:

- djuna/ReWiz-Llama-3.2-3B-fix-config
- SicariusSicariiStuff/ImpishLLAMA3B

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.45 |
| IFEval (0-Shot)     | 63.68 |
| BBH (3-Shot)        | 22.07 |
| MATH Lvl 5 (4-Shot) | 12.92 |
| GPQA (0-shot)       |  4.47 |
| MuSR (0-shot)       |  7.92 |
| MMLU-PRO (5-shot)   | 23.62 |
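DARE ("Drop And REscale") sparsifies each fine-tuned model's delta from the base — randomly dropping most delta entries and rescaling the survivors by 1/(1 − drop rate) — before combining the deltas linearly and adding them back to the base. A hedged single-tensor sketch; the function name, default drop rate, and weighting scheme here are illustrative assumptions, not mergekit's implementation:

```python
import numpy as np

def dare_linear(base, tuned_models, weights, drop_rate=0.9, seed=0):
    """Sketch of linear DARE merging for one weight tensor."""
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base, dtype=float)
    for tuned, w in zip(tuned_models, weights):
        delta = tuned - base
        # Bernoulli keep-mask: drop `drop_rate` of the delta entries.
        keep = rng.random(delta.shape) >= drop_rate
        # Rescale survivors so the expected delta magnitude is preserved.
        delta = np.where(keep, delta, 0.0) / (1.0 - drop_rate)
        merged_delta += w * delta
    return base + merged_delta
```

With `drop_rate=0.0` this reduces to an ordinary linear combination of deltas; high drop rates (0.9 and above) are what make DARE robust to interference between the merged models.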

mergekit-linear-kdsrjwj

TEST-L3.2-ReWish-3B-ties-w-base

This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method, with unsloth/Llama-3.2-3B as a base. The following models were included in the merge:

- djuna-test-lab/TEST-L3.2-3B-ReWish-8B

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.42 |
| IFEval (0-Shot)     | 63.53 |
| BBH (3-Shot)        | 22.07 |
| MATH Lvl 5 (4-Shot) | 12.92 |
| GPQA (0-shot)       |  4.47 |
| MuSR (0-shot)       |  7.92 |
| MMLU-PRO (5-shot)   | 23.62 |
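TIES ("TrIm, Elect Sign & Merge") reduces interference between task deltas in three steps: trim each delta to its largest-magnitude entries, elect a majority sign per parameter, and average only the deltas that agree with the elected sign. A hedged single-tensor sketch — the function name, `density` parameter, and tie handling are illustrative assumptions, not mergekit's implementation:

```python
import numpy as np

def ties_merge(base, tuned_models, density=0.2):
    """Sketch of TIES merging: trim, elect sign, disjoint mean."""
    deltas = []
    for tuned in tuned_models:
        delta = (tuned - base).ravel()
        # Trim: keep only the top `density` fraction of entries by magnitude.
        k = max(1, int(density * delta.size))
        thresh = np.sort(np.abs(delta))[-k]
        deltas.append(np.where(np.abs(delta) >= thresh, delta, 0.0))
    deltas = np.stack(deltas)
    # Elect: majority sign of the summed deltas, per parameter.
    elected = np.sign(deltas.sum(axis=0))
    # Merge: mean over only the (nonzero) deltas agreeing with that sign.
    agree = (np.sign(deltas) == elected) & (deltas != 0)
    counts = np.maximum(agree.sum(axis=0), 1)  # avoid divide-by-zero
    merged = (deltas * agree).sum(axis=0) / counts
    return base + merged.reshape(base.shape)
```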

mergekit-passthrough-rimnmen

mergekit-linear-vpnumfg

mergekit-linear-tblwbwk

mergekit-linear-ofwfskc

This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear merge method. The following models were included in the merge:

- huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
- allenai/Llama-3.1-Tulu-3.1-8B

The following YAML configuration was used to produce this model:
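A Linear merge is simply a weighted average of corresponding weight tensors across the input models. A minimal sketch, assuming the per-model weights are normalized to sum to one (the function itself is illustrative, not mergekit's implementation):

```python
import numpy as np

def linear_merge(tensors, weights):
    """Sketch of a linear merge: weighted average of matching tensors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so the weights sum to 1
    return sum(w * t for w, t in zip(weights, tensors))
```

The actual per-model weights for this merge would come from the (omitted) YAML configuration.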

mergekit-linear-nuutwfy

TEST-Ocerus-7B

QwenFocusedCoder

Qwen_Zeroed

mergekit-linear-tqwumtt

This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear merge method. The following models were included in the merge:

- huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
- allenai/Llama-3.1-Tulu-3.1-8B

The following YAML configuration was used to produce this model:

mergekit-slerp-tqrfjcx

mergekit-linear-mwacdwp

mergekit-linear-tpmhvdf

mergekit-linear-lkqspau
