djuna-test-lab
Q3-IIJAN-4B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method.

The following models were included in the merge:
- janhq/Jan-v1-4B
- Intelligent-Internet/II-Search-CIR-4B

The following YAML configuration was used to produce this model:
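The card's actual YAML is not reproduced above; a minimal mergekit SLERP config of the stated shape could look like the sketch below (the layer range, `base_model` choice, and interpolation factor `t` are illustrative placeholders, not the values used for this merge):

```yaml
# Hypothetical SLERP merge config — parameter values are placeholders
slices:
  - sources:
      - model: janhq/Jan-v1-4B
        layer_range: [0, 36]
      - model: Intelligent-Internet/II-Search-CIR-4B
        layer_range: [0, 36]
merge_method: slerp
base_model: janhq/Jan-v1-4B
parameters:
  t: 0.5   # 0 = pure base model, 1 = pure second model
dtype: bfloat16
```

SLERP interpolates between the two models' weights along the great circle of the weight sphere rather than linearly, which tends to preserve the magnitude of weight vectors better than a plain average.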
mergekit-linear-jyaduup
Qwen2.5Plus2-0.5B-Instruct
Qwen2.5-0.5B-Instruct-ThreeFourths
TEST-L3.2-ReWish-3B
This is a merge of pre-trained language models created using mergekit. This model was merged using the linear DARE merge method using unsloth/Llama-3.2-3B as a base.

The following models were included in the merge:
- djuna/ReWiz-Llama-3.2-3B-fix-config
- SicariusSicariiStuff/ImpishLLAMA3B

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric             |Value|
|--------------------|----:|
| Avg.               |22.45|
| IFEval (0-Shot)    |63.68|
| BBH (3-Shot)       |22.07|
| MATH Lvl 5 (4-Shot)|12.92|
| GPQA (0-shot)      | 4.47|
| MuSR (0-shot)      | 7.92|
| MMLU-PRO (5-shot)  |23.62|
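The card's YAML is not shown above; a minimal mergekit `dare_linear` config matching the stated setup could look like this (the `weight` and `density` values are illustrative placeholders, not the ones actually used):

```yaml
# Hypothetical DARE-linear merge config — weights/densities are placeholders
models:
  - model: djuna/ReWiz-Llama-3.2-3B-fix-config
    parameters:
      weight: 0.5
      density: 0.5   # fraction of delta weights kept after random drop
  - model: SicariusSicariiStuff/ImpishLLAMA3B
    parameters:
      weight: 0.5
      density: 0.5
merge_method: dare_linear
base_model: unsloth/Llama-3.2-3B
dtype: bfloat16
```

DARE randomly drops a fraction of each model's delta from the base and rescales the survivors, then combines the sparsified deltas linearly onto the base model.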
mergekit-linear-kdsrjwj
TEST-L3.2-ReWish-3B-ties-w-base
This is a merge of pre-trained language models created using mergekit. This model was merged using the TIES merge method using unsloth/Llama-3.2-3B as a base.

The following models were included in the merge:
- djuna-test-lab/TEST-L3.2-3B-ReWish-8B

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric             |Value|
|--------------------|----:|
| Avg.               |22.42|
| IFEval (0-Shot)    |63.53|
| BBH (3-Shot)       |22.07|
| MATH Lvl 5 (4-Shot)|12.92|
| GPQA (0-shot)      | 4.47|
| MuSR (0-shot)      | 7.92|
| MMLU-PRO (5-shot)  |23.62|
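Again the actual YAML is omitted from the card text above; a minimal mergekit TIES config of this shape could look like the following (the `weight`, `density`, and `normalize` settings are illustrative placeholders):

```yaml
# Hypothetical TIES merge config — parameter values are placeholders
models:
  - model: djuna-test-lab/TEST-L3.2-3B-ReWish-8B
    parameters:
      weight: 1.0
      density: 0.8   # fraction of largest-magnitude deltas retained
merge_method: ties
base_model: unsloth/Llama-3.2-3B
parameters:
  normalize: true
dtype: bfloat16
```

TIES trims each model's delta from the base to its largest-magnitude components, resolves sign conflicts between contributing models, and merges only the parameters whose signs agree.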
mergekit-passthrough-rimnmen
mergekit-linear-vpnumfg
mergekit-linear-tblwbwk
mergekit-linear-ofwfskc
This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear merge method.

The following models were included in the merge:
- huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
- allenai/Llama-3.1-Tulu-3.1-8B

The following YAML configuration was used to produce this model:
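The card's YAML is not included above; a minimal mergekit linear-merge config for these two models could look like this sketch (the equal `weight` values are illustrative placeholders, not the actual configuration):

```yaml
# Hypothetical linear merge config — weights are placeholders
models:
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
    parameters:
      weight: 0.5
  - model: allenai/Llama-3.1-Tulu-3.1-8B
    parameters:
      weight: 0.5
merge_method: linear
dtype: bfloat16
```

A linear merge is a simple weighted average of the models' parameters; unlike TIES or DARE it needs no base model, since it operates on the full weights rather than on deltas.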
mergekit-linear-nuutwfy
TEST-Ocerus-7B
QwenFocusedCoder
Qwen_Zeroed
mergekit-linear-tqwumtt
This is a merge of pre-trained language models created using mergekit. This model was merged using the Linear merge method.

The following models were included in the merge:
- huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
- allenai/Llama-3.1-Tulu-3.1-8B

The following YAML configuration was used to produce this model: