zelk12
26_05_2025_Test_LazyMergekit_gemma-3-12B
MT4-gemma-3-12B
gemma-3-tiny-random-Q6_K-GGUF
MT-Merge6-gemma-2-9B-Q6_K-GGUF
MT4-Gen2-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT4-Gen2-gemma-2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT4-Gen2-IMM-gemma-2-9B, zelk12/MT4-Gen2-GBMAMU-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 33.79 |
| IFEval (0-Shot)     | 80.51 |
| BBH (3-Shot)        | 44.18 |
| MATH Lvl 5 (4-Shot) | 15.71 |
| GPQA (0-shot)       | 12.75 |
| MuSR (0-shot)       | 12.21 |
| MMLU-PRO (5-shot)   | 37.42 |
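The YAML block referenced above is not reproduced in this listing. For orientation, a mergekit SLERP configuration for a two-model Gemma-2 9B merge typically looks like the following sketch — the values (layer range, `t`, dtype) are illustrative assumptions, not the actual configuration used for this model:

```yaml
slices:
  - sources:
      - model: zelk12/MT4-Gen2-IMM-gemma-2-9B
        layer_range: [0, 42]        # Gemma-2 9B has 42 transformer layers
      - model: zelk12/MT4-Gen2-GBMAMU-gemma-2-9B
        layer_range: [0, 42]
merge_method: slerp
base_model: zelk12/MT4-Gen2-IMM-gemma-2-9B
parameters:
  t: 0.5                            # interpolation factor: 0 = base model, 1 = the other model
dtype: bfloat16
```

`t` may also be given per layer group; a single scalar interpolates all weights at the same blend ratio.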
recoilme-gemma-2-Ataraxy-9B-v0.1-Q6_K-GGUF
MT5-Gen2_gemma-3-12B-Q6_K-GGUF
MT-Gen9-gemma-2-9B-Q6_K-GGUF
MT6-Gen2_gemma-3-12B-Q6_K-GGUF
MT3-Gen2_gemma-3-12B-Q6_K-GGUF
MT-Gen3_gemma-3-12B-Q6_K-GGUF
R1-Gemma-3-4B-multimodal-test-Q6_K-GGUF
MT2-Gen2_gemma-3-12B-Q6_K-GGUF
zelk12/MT2-Gen2_gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT2-Gen2_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT8-Gen2_gemma-3-12B-Q6_K-GGUF
thinkygemma-4b-Q6_K-GGUF
inek-gemma-3-12b-pt-Q6_K-GGUF
internlm2_5-7b-chat-Q6_K-GGUF
MT-gemma-2-9B-Q6_K-GGUF
recoilme-gemma-2-9B-v0.3-Q6_K-GGUF
gemma-3-12b-deepseek-r1-v1-merged-16bit-Q6_K-GGUF
MT2-Gen2-gemma-2-9B-Q6_K-GGUF
MT4-gemma-3-12B-Q6_K-GGUF
MT-Gen2-gemma-3-12B-Q6_K-GGUF
MT-Gen10-gemma-2-9B
MT-Gen4-gemma-2-9B-Q6_K-GGUF
MT1-gemma-3-12B-Q6_K-GGUF
zelk12/MT1-gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT1-gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT1-Gen3_gemma-3-12B-Q6_K-GGUF
MT-Gen4_gemma-3-12B_flatten
MT5-Gen3_gemma-3-12B-Q6_K-GGUF
zelk12/MT5-Gen3_gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT5-Gen3_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT3-Gen5-gemma-2-9B-Q6_K-GGUF
26_05_2025_Test_LazyMergekit_gemma-3-12B-Q6_K-GGUF
zelk12/26_05_2025_Test_LazyMergekit_gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/26_05_2025_Test_LazyMergekit_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
gemma-teacher-v213-Q6_K-GGUF
gemma-v213-Q6_K-GGUF
MT4-Gen3_gemma-3-12B-Q6_K-GGUF
zelk12/MT4-Gen3_gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT4-Gen3_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT1-Gen2-gemma-2-9B-Q6_K-GGUF
MT1-Gen3-gemma-2-9B-Q6_K-GGUF
text_in_number_converter
MT3-Gen8-gemma-2-9B-Q6_K-GGUF
MT-Merge8-gemma-2-9B-Q6_K-GGUF
MT3-Gen9-gemma-2-9B-Q6_K-GGUF
MT3-Gen10-gemma-2-9B-Q6_K-GGUF
MT7-Gen2_gemma-3-12B-Q6_K-GGUF
gemma-teacher-v0-Q6_K-GGUF
t2c-gemma3-4b-it-Q6_K-GGUF
MT4-Gen3-gemma-2-9B-Q6_K-GGUF
MT3-Gen8-U-gemma-2-MTg7RAv0.1t0.25-9B
MT4-Gen2_gemma-3-12B-Q6_K-GGUF
gemma-v0-Q6_K-GGUF
gemma-3-12b-finetuneFullMerged-Q6_K-GGUF
MT-Gen4_gemma-3-12B
MT1-gemma-2-9B-Q6_K-GGUF
MT3-Gen1-gemma-2-9B-Q6_K-GGUF
MT4-Gen2-IF-gemma-2-MT5MT1-9B
MT2-Gen4-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen4-IMU-gemma-2-9B, zelk12/MT2-Gen4-MAGBMM-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 31.86 |
| IFEval (0-Shot)     | 78.96 |
| BBH (3-Shot)        | 43.78 |
| MATH Lvl 5 (4-Shot) |  8.31 |
| GPQA (0-shot)       | 12.75 |
| MuSR (0-shot)       | 10.47 |
| MMLU-PRO (5-shot)   | 36.90 |
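SLERP, used throughout these merges, interpolates along the great-circle arc between two weight vectors rather than along the straight line, which preserves the overall magnitude of the weights better than plain averaging. A minimal NumPy sketch of the idea (illustrative only — not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors at blend factor t."""
    v0f = np.ravel(v0).astype(np.float64)
    v1f = np.ravel(v1).astype(np.float64)
    # Angle between the two (normalized) weight vectors.
    dot = np.clip(
        np.dot(v0f / np.linalg.norm(v0f), v1f / np.linalg.norm(v1f)), -1.0, 1.0
    )
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly colinear vectors: fall back to ordinary linear interpolation.
        out = (1 - t) * v0f + t * v1f
    else:
        out = (np.sin((1 - t) * theta) * v0f + np.sin(t * theta) * v1f) / np.sin(theta)
    return out.reshape(np.shape(v0))

mid = slerp(0.5, np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # ≈ [0.7071, 0.7071]
```

At `t = 0.5` the result lies on the arc midway between the two models, which is why the midpoint of two orthogonal unit vectors keeps unit length instead of shrinking to 0.5.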
MTMMe-Merge-gemma-2-9B_NuSLERP-Q6_K-GGUF
zelk12/MTMMe-Merge-gemma-2-9B_NuSLERP-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MTMMe-Merge-gemma-2-9B_NuSLERP` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT1-Gen11-gemma-2-9B-Q6_K-GGUF
MT2-Gen11-gemma-2-9B-Q6_K-GGUF
MT1-Gen3-gemma-2-9B
recoilme-gemma-2-9B-v0.4-Q6_K-GGUF
MT4-gemma-2-9B-Q6_K-GGUF
MT2-Gen1-gemma-2-9B-Q6_K-GGUF
zelk12/MT2-Gen1-gemma-2-9B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT2-Gen1-gemma-2-9B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT2-Gen3-gemma-2-9B-Q6_K-GGUF
MT-Merge3-gemma-2-9B-Q6_K-GGUF
MT4-Gen4-gemma-2-9B-Q6_K-GGUF
MT1-Gen7-gemma-2-9B-Q6_K-GGUF
gemma-teacher1-v1-Q6_K-GGUF
MT-Gen1-gemma-3-12B
MT-Merge6-gemma-2-9B
MT2-Gen2_gemma-3-12B
MT-Merge2-gemma-2-9B-Q6_K-GGUF
zelk12/MT-Merge2-gemma-2-9B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT-Merge2-gemma-2-9B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT2-Gen3-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT2-Gen3-gemma-2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen3-BMAMUG-gemma-2-9B, zelk12/MT2-Gen3-IMM-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 32.97 |
| IFEval (0-Shot)     | 78.10 |
| BBH (3-Shot)        | 44.01 |
| MATH Lvl 5 (4-Shot) | 13.29 |
| GPQA (0-shot)       | 12.86 |
| MuSR (0-shot)       | 12.05 |
| MMLU-PRO (5-shot)   | 37.49 |
Gemma2-9B-it-psy10k-mental_health-Q6_K-GGUF
MT1-Gen5-gemma-2-9B-Q6_K-GGUF
MTMaMe-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF
gemma-3-12b-it-v3-merged-Q6_K-GGUF
MT-gemma-3-12B-Q6_K-GGUF
zelk12/MT-gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT-gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MTM-Merge1-gemma-2-9B
MT3-Gen2-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen2-gemma-2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen2-GMM-gemma-2-9B, zelk12/MT3-Gen2-IMUBMA-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.96 |
| IFEval (0-Shot)     | 78.43 |
| BBH (3-Shot)        | 43.94 |
| MATH Lvl 5 (4-Shot) |  2.04 |
| GPQA (0-shot)       | 14.32 |
| MuSR (0-shot)       | 10.02 |
| MMLU-PRO (5-shot)   | 37.03 |
MT-Gen3-IMMMUMA-gemma-2-9B
MTM-Merge-GMA-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MTM-Merge-MA-gemma-2-MTM4MTM5-9B, zelk12/MTM-Merge-GP-gemma-2-MTM5MTM4-9B. The following YAML configuration was used to produce this model:
MT-Gen8-gemma-2-9B-Q6_K-GGUF
zelk12/MT-Gen8-gemma-2-9B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT-Gen8-gemma-2-9B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MTMMe-Merge-gemma-2-9B-Q6_K-GGUF
MT-Gen11-gemma-2-9B-Q6_K-GGUF
zelk12/MT-Gen11-gemma-2-9B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT-Gen11-gemma-2-9B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT1-Gen13-gemma-2-9B-Q6_K-GGUF
Gemma-R1-12B-v3-Q6_K-GGUF
MT7-Gen3_gemma-3-12B-Q6_K-GGUF
zelk12/MT7-Gen3_gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT7-Gen3_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
RAt0.25-gemma-2-RI-9B-Q6_K-GGUF
MT1-Gen1-gemma-2-9B-Q6_K-GGUF
MT2-Gen1-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen1-MMMUMAG-gemma-2-9B, zelk12/MT2-Gen1-IB-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 32.46 |
| IFEval (0-Shot)     | 78.56 |
| BBH (3-Shot)        | 44.14 |
| MATH Lvl 5 (4-Shot) | 10.12 |
| GPQA (0-shot)       | 12.42 |
| MuSR (0-shot)       | 12.01 |
| MMLU-PRO (5-shot)   | 37.52 |
MT3-Gen1-BB-gemma-2-Av4cRAv0.1-9B
MT4-Gen1-gemma-2-9B-Q6_K-GGUF
MT-Gen2-gemma-2-9B-Q6_K-GGUF
MT-Gen5-gemma-2-9B-Q6_K-GGUF
Rv0.4DMv1t0.25Tt0.25-gemma-2-9B-Q6_K-GGUF
MT-Gen6fix-C-gemma-2-ItARv0.5-9B
MT2-Gen9-W-gemma-2-MT2g8RAv0.1t0.25-9B
MT2-Gen9-NC-gemma-2-9B
MT-Gen9-gemma-2-9B
MT4-Gen2_gemma-3-12B
MT4-Gen2_gemma-3-12B is a merge of the following models using LazyMergekit: zelk12/MT-Gen1-gemma-3-12B, soob3123/amoral-gemma3-12B-v2, zelk12/MT1-gemma-3-12B, IlyaGusev/saiga_gemma3_12b, TheDrummer/Fallen-Gemma3-12B-v1.
MT2-Gen2-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT2-Gen2-gemma-2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen2-BGMAMU-gemma-2-9B, zelk12/MT2-Gen2-IMM-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 33.47 |
| IFEval (0-Shot)     | 78.89 |
| BBH (3-Shot)        | 44.04 |
| MATH Lvl 5 (4-Shot) | 14.80 |
| GPQA (0-shot)       | 12.86 |
| MuSR (0-shot)       | 12.58 |
| MMLU-PRO (5-shot)   | 37.65 |
MT2-Gen11-gemma-2-9B
MT-Merge-gemma-2-9B-Q6_K-GGUF
MT1-Gen2-gemma-2-9B
MT-Gen6fix-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT-Gen6fix-UW-gemma-2-9B, zelk12/MT-Gen6fix-CN-gemma-2-9B. The following YAML configuration was used to produce this model:
MT-Gen8-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT-Gen8-CU-gemma-2-9B, zelk12/MT-Gen8-WN-gemma-2-9B. The following YAML configuration was used to produce this model:
MT-Merge9-gemma-2-9B
MT-gemma-3-12B
This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method, with soob3123/amoral-gemma3-12B-v2 as the base. The following models were included in the merge: IlyaGusev/saiga_gemma3_12b. The following YAML configuration was used to produce this model:
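DARE TIES combines two ideas: DARE (Drop And REscale) randomly zeroes most of each model's delta from the base and rescales the survivors so the expected delta is unchanged, and a TIES-style sign-election step then resolves conflicts before the sparsified deltas are added back to the base. A hedged sketch of just the DARE step (illustrative — the function name and `p` are assumptions, not mergekit's API):

```python
import numpy as np

def dare(delta, p, rng):
    """Drop each entry of a task vector (finetuned - base) with probability p,
    rescaling survivors by 1/(1-p) so the expected value is preserved."""
    mask = rng.random(delta.shape) >= p
    return np.where(mask, delta / (1.0 - p), 0.0)

rng = np.random.default_rng(0)
delta = np.ones(100_000)              # toy task vector
sparse = dare(delta, p=0.9, rng=rng)  # ~90% zeros, survivors scaled to 10.0
```

Despite discarding 90% of the entries, `sparse.mean()` stays close to 1.0, which is what lets aggressive drop rates work in practice.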
recoilme-gemma-2-Gutenberg-Doppel-9B-v0.1-Q6_K-GGUF
recoilme-gemma-2-Ataraxy-9B-v0.1-t0.375-Q6_K-GGUF
MT1-GB-gemma-2-9B
MT2-MA-gemma-2-RPMHv0.1Rv0.3-9B
MT4-IMUGMA-gemma-2-9B
MT5-MAMU-gemma-2-9B
MT-Merge1-gemma-2-9B-Q6_K-GGUF
MT2-Gen2-BG-gemma-2-9B
GGUF Static: https://huggingface.co/models?other=base_model:quantized:zelk12/MT2-Gen2-BG-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen2-BB-gemma-2-MTMMT5-9B, zelk12/MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B. The following YAML configuration was used to produce this model:
MT-Gen3-IMM-gemma-2-9B
MT2-Gen3-IF-gemma-2-MT4g2S2-9B
GGUF Static: https://huggingface.co/mradermacher/MT2-Gen3-IF-gemma-2-MT4g2S2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: allknowingroger/Gemmaslerp2-9B, zelk12/MT4-Gen2-gemma-2-9B. The following YAML configuration was used to produce this model:
MT3-Gen3-BB-gemma-2-MT2RIv0.1-9B
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen3-BB-gemma-2-MT2RIv0.1-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/recoilme-gemma-2-Ifable-9B-v0.1, zelk12/MT2-gemma-2-9B. The following YAML configuration was used to produce this model:
MT2-Gen4-MM-gemma-2-Rv0.4MTM-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT-Merge-gemma-2-9B, recoilme/recoilme-gemma-2-9B-v0.4. The following YAML configuration was used to produce this model:
MT3-Gen4-GB-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen4-GP-gemma-2-MT5g3MT2g2-9B, zelk12/MT3-Gen4-BB-gemma-2-MT2g2MT5g3-9B. The following YAML configuration was used to produce this model:
MT3-Gen4-MUI-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen4-IF-gemma-2-Riv0.1RAv0.1-9B, zelk12/MT3-Gen4-MU-gemma-2-Sv1IBT-9B. The following YAML configuration was used to produce this model:
MT-Merge4-gemma-2-9B
MT1-Gen5-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT1-Gen5-BMMIMU-gemma-2-9B, zelk12/MT1-Gen5-MAG-gemma-2-9B. The following YAML configuration was used to produce this model:
MT3-Gen5-GP-gemma-2-RAv0.1E-9B
MT3-Gen5-BMU-gemma-2-9B_v1
MT1-Gen6-C-gemma-2-MAItA-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: djuna/Gemma-2-gemmama-9b, IlyaGusev/gemma-2-9b-it-abliterated. The following YAML configuration was used to produce this model:
MT2-Gen7-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen7-CN-gemma-2-9B, zelk12/MT2-Gen7-UW-gemma-2-9B. The following YAML configuration was used to produce this model:
MTM-Merge1-gemma-2-9B-Q6_K-GGUF
MT3-Gen2_gemma-3-12B
MT3-Gen2_gemma-3-12B is a merge of the following models using LazyMergekit: zelk12/MT-Gen1-gemma-3-12B, soob3123/amoral-gemma3-12B-v2, zelk12/MT1-gemma-3-12B, IlyaGusev/saiga_gemma3_12b, TheDrummer/Fallen-Gemma3-12B-v1.
recoilme-gemma-2-Ataraxy-9B-v0.1-t0.75-Q6_K-GGUF
recoilme-gemma-2-Ifable-9B-v0.1-Q6_K-GGUF
recoilme-gemma-2-psy10k-mental_healt-9B-v0.1-Q6_K-GGUF
MT-Gen1-gemma-2-9B-Q6_K-GGUF
MT1-Gen1-IF-gemma-2-MT1MT2-9B
MT5-Gen1-gemma-2-9B-Q6_K-GGUF
zelk12/MT5-Gen1-gemma-2-9B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT5-Gen1-gemma-2-9B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Gemma-2-TM-9B-Q6_K-GGUF
MT1-Gen2-BI-gemma-2-9B
MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT-Merge-gemma-2-9B, zelk12/recoilme-gemma-2-Ifable-9B-v0.1. The following YAML configuration was used to produce this model:
MT3-Gen2-MA-gemma-2-MTMQv1-9B
MT4-Gen3-BMA-gemma-2-9B
MT5-Gen3-GP-gemma-2-RIv0.1MT4g2-9B
MT1-Gen4-MM-gemma-2-MTg2Av4d-9B
MT1-Gen4-gemma-2-9B-Q6_K-GGUF
MT5-Gen4-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT5-Gen4-IMUBMA-gemma-2-9B, zelk12/MT5-Gen4-MMG-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 33.77 |
| IFEval (0-Shot)     | 78.35 |
| BBH (3-Shot)        | 44.32 |
| MATH Lvl 5 (4-Shot) | 17.07 |
| GPQA (0-shot)       | 13.76 |
| MuSR (0-shot)       | 11.35 |
| MMLU-PRO (5-shot)   | 37.74 |
MT5-Gen4-gemma-2-9B-Q6_K-GGUF
MT2-Gen5-MAI-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen5-IF-gemma-2-S2MT4g2-9B, zelk12/MT2-Gen5-MA-gemma-2-S2MT3g4-9B. The following YAML configuration was used to produce this model:
MT2-Gen5-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen5-GBMMMU-gemma-2-9B, zelk12/MT2-Gen5-MAI-gemma-2-9B. The following YAML configuration was used to produce this model:
MT2-Gen5-gemma-2-9B-Q6_K-GGUF
T31122024S2-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: allknowingroger/Gemmaslerp2-9B, zelk12/T31122024203920-gemma-2-9B. The following YAML configuration was used to produce this model:
Test01012025155054_gemma-2
Gemma-Creative-9B-Base-Q6_K-GGUF
MT1-Gen7-UW-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT1-Gen7-W-gemma-2-MTg6MTg6f-9B, zelk12/MT1-Gen7-U-gemma-2-MTg6fMTg6-9B. The following YAML configuration was used to produce this model:
MT2-Gen8-N-gemma-2-RAv0.1t0.25AR-9B
MT2-Gen9-U-gemma-2-MT2g8RAv0.1t0.25-9B
MT1-Gen10-gemma-2-9B-Q6_K-GGUF
28_05_2025_Test2_LazyMergekit_gemma-3-12B-Q6_K-GGUF
zelk12/28_05_2025_Test2_LazyMergekit_gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/28_05_2025_Test2_LazyMergekit_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
30_05_2025_Test4_LazyMergekit_gemma-3-12B-Q6_K-GGUF
MT5-Gen2_gemma-3-12B
MT8-Gen2_gemma-3-12B
MT8-Gen2_gemma-3-12B is a merge of the following models using LazyMergekit: zelk12/MT-Gen1-gemma-3-12B, soob3123/amoral-gemma3-12B-v2, zelk12/MT1-gemma-3-12B, IlyaGusev/saiga_gemma3_12b, TheDrummer/Fallen-Gemma3-12B-v1.
gemma-reducido-1layer-Q6_K-GGUF
MT2-Gen3_gemma-3-12B-Q6_K-GGUF
zelk12/MT2-Gen3_gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT2-Gen3_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT3-Gen4-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen4-MAMM-gemma-2-9B, zelk12/MT3-Gen4-GBMUI-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 34.49 |
| IFEval (0-Shot)     | 77.37 |
| BBH (3-Shot)        | 43.78 |
| MATH Lvl 5 (4-Shot) | 20.47 |
| GPQA (0-shot)       | 12.98 |
| MuSR (0-shot)       | 14.72 |
| MMLU-PRO (5-shot)   | 37.64 |
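The Avg. figure in these leaderboard tables is, at least for the tables in this listing, just the arithmetic mean of the six benchmark scores shown. Reproducing the 34.49 from the MT3-Gen4 table:

```python
# Benchmark scores in table order: IFEval, BBH, MATH Lvl 5, GPQA, MuSR, MMLU-PRO
scores = [77.37, 43.78, 20.47, 12.98, 14.72, 37.64]
avg = round(sum(scores) / len(scores), 2)
print(avg)  # 34.49
```

The same check holds for the other tables (e.g. the 31.86 and 33.79 entries), which is a quick way to sanity-check a transcribed table.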
MT3-Gen10-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the DARE TIES merge method, with zelk12/MT-Gen6fix-gemma-2-9B as the base. The following models were included in the merge: TheDrummer/Tiger-Gemma-9B-v3, IlyaGusev/gemma-2-9b-it-abliterated, zelk12/MT1-Gen7-gemma-2-9B, Sorawiz/Gemma-9B-Chat, zelk12/MT-Merge6-gemma-2-9B. The following YAML configuration was used to produce this model:
recoilme-gemma-2-Ataraxy-9B-v0.1-t0.75
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: recoilme/recoilme-gemma-2-9B-v0.4, lemon07r/Gemma-2-Ataraxy-v2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 28.42 |
| IFEval (0-Shot)     | 72.08 |
| BBH (3-Shot)        | 42.49 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       | 13.31 |
| MuSR (0-shot)       |  7.76 |
| MMLU-PRO (5-shot)   | 34.90 |
MT-Merge2-MU-gemma-2-MTg2MT1g2-9B
MT3-Gen5-gemma-2-9B_v1
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen5-MAI-gemma-2-9B_v1, zelk12/MT3-Gen5-MMGBMU-gemma-2-9B_v1. The following YAML configuration was used to produce this model:
MTM-Merge-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MTM-Merge-MMMUBI-gemma-2-9B, zelk12/MTM-Merge-GMA-gemma-2-9B. The following YAML configuration was used to produce this model:
recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25-Q6_K-GGUF
recoilme-gemma-2-psy10k-mental_healt-9B-v0.1
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: recoilme/recoilme-gemma-2-9B-v0.4, ehristoforu/Gemma2-9B-it-psy10k-mental_health. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 32.18 |
| IFEval (0-Shot)     | 74.45 |
| BBH (3-Shot)        | 42.13 |
| MATH Lvl 5 (4-Shot) | 16.47 |
| GPQA (0-shot)       | 12.53 |
| MuSR (0-shot)       | 12.18 |
| MMLU-PRO (5-shot)   | 35.34 |
recoilme-gemma-2-Ataraxy-9B-v0.2-Q6_K-GGUF
MT2-MU-gemma-2-Rv0.4Rv0.2-9B
MT2-gemma-2-9B-Q6_K-GGUF
MT3-MA-gemma-2-RAt0.25v0.1Rv0.3-9B
MT3-gemma-2-9B-Q6_K-GGUF
MT5-MA-gemma-2-Av4cPMH-9B
MT-Merge1-gemma-2-9B
MT1-Gen2-IF-gemma-2-MT1Qv1-9B
MT1-Gen2-MMMU-gemma-2-9B
MT3-Gen2-gemma-2-9B-Q6_K-GGUF
MT4-Gen2-MM-gemma-2-Rv0.4MT1-9B
GGUF Static: https://huggingface.co/mradermacher/MT4-Gen2-MM-gemma-2-Rv0.4MT1-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT1-gemma-2-9B, recoilme/recoilme-gemma-2-9B-v0.4. The following YAML configuration was used to produce this model:
MT5-Gen2-BB-gemma-2-MT1SLMI-9B
MT5-Gen2-MMGMAMU-gemma-2-9B
MT5-Gen2-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT5-Gen2-gemma-2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT5-Gen2-BI-gemma-2-9B, zelk12/MT5-Gen2-MMGMAMU-gemma-2-9B. The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 32.60 |
| IFEval (0-Shot)     | 79.62 |
| BBH (3-Shot)        | 44.11 |
| MATH Lvl 5 (4-Shot) | 10.35 |
| GPQA (0-shot)       | 13.53 |
| MuSR (0-shot)       | 10.44 |
| MMLU-PRO (5-shot)   | 37.55 |
MT5-Gen2-gemma-2-9B-Q6_K-GGUF
MT-Merge2-MAMM-gemma-2-9B
MT-Merge2-MUB-gemma-2-9B
MT-Gen3-MU-gemma-2-Sv1IBTMTg2-9B
GGUF Static: https://huggingface.co/mradermacher/MT-Gen3-MU-gemma-2-Sv1IBTMTg2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: gmonsoon/SahabatAI-Lion-9B-TIES-v1, zelk12/MT-Gen2-gemma-2-9B. The following YAML configuration was used to produce this model:
MT-Gen3-MUMA-gemma-2-9B
MT-Gen3-gemma-2-9B-Q6_K-GGUF
MT1-Gen3-MM-gemma-2-MTg2S2-9B
GGUF Static: https://huggingface.co/mradermacher/MT1-Gen3-MM-gemma-2-MTg2S2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: allknowingroger/Gemmaslerp2-9B, zelk12/MT-Gen2-gemma-2-9B. The following YAML configuration was used to produce this model:
MT2-Gen3-MM-gemma-2-Rv0.4MTM-9B
MT2-Gen3-IMM-gemma-2-9B
MT3-Gen3-GMAMUB-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen3-GMAMUB-gemma-2-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen3-MUB-gemma-2-9B, zelk12/MT3-Gen3-GMA-gemma-2-9B. The following YAML configuration was used to produce this model:
MT-Merge3-GMA-gemma-2-9B
MT-Gen4-MU-gemma-2-Sv1IBTS5-9B
GGUF Static: https://huggingface.co/mradermacher/MT-Gen4-MU-gemma-2-Sv1IBTS5-9B-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: allknowingroger/GemmaSlerp5-10B, gmonsoon/SahabatAI-Lion-9B-TIES-v1. The following YAML configuration was used to produce this model:
MT-Gen4-BMM-gemma-2-9B
MT-Gen4-GIBMM-gemma-2-9B
MT-Gen4-gemma-2-9B
MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B
MT2-Gen4-IF-gemma-2-MT4g2S2-9B
MT2-Gen4-MU-gemma-2-S2N3N1532-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: nhyha/N3N_gemma-2-9b-it_202410291532, allknowingroger/Gemmaslerp2-9B. The following YAML configuration was used to produce this model:
MT3-Gen4-MM-gemma-2-MT2g2Sv1IBT-9B
MT-Gen5-MUMMIG-gemma-2-9B
MT-Gen5-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: zelk12/MT-Gen5-MUMMIG-gemma-2-9B, zelk12/MT-Gen5-BMA-gemma-2-9B. The following YAML configuration was used to produce this model:
MT1-Gen5-MA-gemma-2-S2S4-9B
MT3-Gen5-IF-gemma-2-Av4dMT2-9B
MT3-Gen5-IF-gemma-2-Av4dMT2-9B_v1
MT3-Gen5-BB-gemma-2-RAv0.1MT2-9B_v1
MT3-Gen5-GP-gemma-2-RAv0.1E-9B_v1
MT3-Gen5-gemma-2-9B_v1-Q6_K-GGUF
zelk12/MT3-Gen5-gemma-2-9B_v1-Q6_K-GGUF This model was converted to GGUF format from `zelk12/MT3-Gen5-gemma-2-9B_v1` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT4-Gen5-IF-gemma-2-S2MT3g4-9B
MT-Merge5-MA-gemma-2-MT4g5MTg5-9B
MT1-Max-Merge_02012025163610-IF-gemma-2-MT1g2MTM2MU-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Merge2-MU-gemma-2-MTg2MT1g2-9B and zelk12/MT1-Gen2-gemma-2-9B.
MT-Gen6-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Gen6-NC-gemma-2-9B and zelk12/MT-Gen6-UW-gemma-2-9B.
MT1-Gen6-U-gemma-2-Tv3ItA-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: TheDrummer/Tiger-Gemma-9B-v3 and IlyaGusev/gemma-2-9b-it-abliterated.
MT1-Gen6-W-gemma-2-ItATv3-9B
MT1-Gen6-UW-gemma-2-9B
MT3-Gen6-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen6-UC-gemma-2-9B and zelk12/MT3-Gen6-WN-gemma-2-9B.
MT3-Gen7-gemma-2-9B-Q6_K-GGUF
MT-Merge7-gemma-2-9B-Q6_K-GGUF
zelk12/MT-Merge7-gemma-2-9B-Q6_K-GGUF: this model was converted to GGUF format from `zelk12/MT-Merge7-gemma-2-9B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp is the same as for the other GGUF conversions above.
MT-Gen10-gemma-2-9B-Q6_K-GGUF
recoilme-gemma-2-Ifable-9B-v0.1
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: ifable/gemma-2-Ifable-9B and recoilme/recoilme-gemma-2-9B-v0.4.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 32.05 |
| IFEval (0-Shot)     | 79.44 |
| BBH (3-Shot)        | 43.39 |
| MATH Lvl 5 (4-Shot) |  7.93 |
| GPQA (0-shot)       | 13.53 |
| MuSR (0-shot)       | 11.10 |
| MMLU-PRO (5-shot)   | 36.93 |
MT5-gemma-2-9B-Q6_K-GGUF
MT-Gen1-GP-gemma-2-MT2MT1-9B
MT-Gen1-MAMM-gemma-2-9B
MT1-Gen1-BGMMMU-gemma-2-9B
MT1-Gen1-gemma-2-9B
MT2-Gen1-MU-gemma-2-Rv0.4Av4c-9B
MT3-Gen1-MMMUMAG-gemma-2-9B
MT4-Gen1-IF-gemma-2-MT5MT1-9B
MT4-Gen1-MU-gemma-2-Rv0.4MT1-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: recoilme/recoilme-gemma-2-9B-v0.4 and zelk12/MT1-gemma-2-9B.
MT5-Gen1-MUMA-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT5-Gen1-MA-gemma-2-Av4cMT1-9B and zelk12/MT5-Gen1-MU-gemma-2-GMT1-9B.
MT5-Gen1-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT5-Gen1-MMGMUMA-gemma-2-9B and zelk12/MT5-Gen1-BI-gemma-2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 31.90 |
| IFEval (0-Shot)     | 78.31 |
| BBH (3-Shot)        | 44.18 |
| MATH Lvl 5 (4-Shot) |  6.87 |
| GPQA (0-shot)       | 12.98 |
| MuSR (0-shot)       | 11.61 |
| MMLU-PRO (5-shot)   | 37.43 |
Gemma-2-IAv2-9B
MT-Merge1-MM-gemma-2-MT4g1MT2g1-9B
MT-Gen2-MA-gemma-2-MT4RAv0.1t0.25-9B
MT-Gen2-MUB-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT-Gen2-MUB-gemma-2-9B-GGUF

This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Gen2-MU-gemma-2-MT1RAv0.1t0.25-9B and zelk12/MT-Gen2-BB-gemma-2-MTMMT2-9B.
MT3-Gen2-IF-gemma-2-RAv0.1Av4aA-9B
MT3-Gen2-BMA-gemma-2-9B
MT4-Gen2-BB-gemma-2-MTMMT1-9B
MT2-Gen3-MA-gemma-2-N3N1532MTM-9B
MT3-Gen3-MA-gemma-2-S4MT2-9B
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen3-MA-gemma-2-S4MT2-9B-GGUF

This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT2-gemma-2-9B and allknowingroger/Gemmaslerp4-10B.
MT3-Gen3-MU-gemma-2-Rv0.3MTM2MU-9B
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen3-MU-gemma-2-Rv0.3MTM2MU-9B-GGUF

This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Merge2-MU-gemma-2-MTg2MT1g2-9B and recoilme/recoilme-gemma-2-9B-v0.3.
MT3-Gen3-gemma-2-9B-Q6_K-GGUF
MT4-Gen3-MA-gemma-2-N3N1532MT4g2-9B
MT4-Gen3-IMM-gemma-2-9B
MT5-Gen3-BB-gemma-2-MT4g2MT2-9B
MT5-Gen3-MU-gemma-2-Rv0.3MT4g2-9B
MT5-Gen3-gemma-2-9B-Q6_K-GGUF
MT-Merge3-IF-gemma-2-MTg3MT3g3-9B
MT-Merge3-gemma-2-9B
gemma-2-S2MTM-9B-Q6_K-GGUF
MT-Gen4-IF-gemma-2-MT4g2MTg2-9B
MT2-Gen4-gemma-2-9B-Q6_K-GGUF
MT3-Gen4-gemma-2-9B-Q6_K-GGUF
MT4-Gen4-MM-gemma-2-MT4g2MTM-9B
MT5-Gen4-IF-gemma-2-MT4g2MT5g3-9B
MT5-Gen4-MU-gemma-2-MT4g2RIv0.1-9B
MT5-Gen4-MMG-gemma-2-9B
MT-Merge4-gemma-2-9B-Q6_K-GGUF
MT3-Gen5-MMGBMU-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen5-BMU-gemma-2-9B and zelk12/MT3-Gen5-MMG-gemma-2-9B.
MT5-Max-Merge_02012025163610-BB-gemma-2-MTM4MTg2GI-9B
MT5-Max-Merge_02012025163610-MU-gemma-2-MTM4MT1g2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Merge4-gemma-2-9B and zelk12/MT1-Gen2-gemma-2-9B.
MT1-Gen6-gemma-2-9B-Q6_K-GGUF
MT2-Gen6-C-gemma-2-ARMT3g2-9B
Gemma-Chat-9B-Q6_K-GGUF
MT3-Gen6-gemma-2-9B-Q6_K-GGUF
MT-Gen7-U-gemma-2-MTg6fCB-9B
MT-Gen7-W-gemma-2-MTg6CB-9B
MT1-Gen7-N-gemma-2-Rv0.2MA-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: recoilme/recoilme-gemma-2-9B-v0.2 and djuna/Gemma-2-gemmama-9b.
MT1-Gen7-C-gemma-2-MAMTg6f-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Gen6fix-gemma-2-9B and djuna/Gemma-2-gemmama-9b.
MT-Merge7-U-gemma-2-MT1g7MT2g7-9B
MT-Gen8-C-gemma-2-CBItA-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: Sorawiz/Gemma-9B-Chat and IlyaGusev/gemma-2-9b-it-abliterated.
MT1-Gen8-C-gemma-2-MAMTg6-9B
MT1-Gen8-UW-gemma-2-9B
MT2-Gen8-gemma-2-9B-Q6_K-GGUF
MT3-Gen8-NC-gemma-2-9B
MT-Merge8-U-gemma-2-MTg8MT1g8-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT1-Gen8-gemma-2-9B and zelk12/MT-Gen8-gemma-2-9B.
MT1-Gen9-W-gemma-2-MTg6MTg6f-9B
MT2-Gen9-C-gemma-2-ARMT3g6-9B
MT2-Gen9-WU-gemma-2-9B
MT2-Gen9-gemma-2-9B
MT3-Gen9-WU-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen9-U-gemma-2-MT2g6MTg7-9B and zelk12/MT3-Gen9-W-gemma-2-MTg7MT2g6-9B.
MT3-Gen9-CN-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen9-N-gemma-2-MTMaMe02012025163610Gv1It-9B and zelk12/MT3-Gen9-C-gemma-2-Gv1ItMT2g6-9B.
MTM-Merge1-W-gemma-2-MTMe6MTMe8-9B
MTM-Merge1-NU-gemma-2-9B
gemma3-vikhr-4b-Q6_K-GGUF
zelk12/gemma3-vikhr-4b-Q6_K-GGUF: this model was converted to GGUF format from `Vikhrmodels/gemma3-vikhr-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp is the same as for the other GGUF conversions above.
MT2-gemma-3-12B-Q6_K-GGUF
zelk12/MT2-gemma-3-12B-Q6_K-GGUF: this model was converted to GGUF format from `zelk12/MT2-gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp is the same as for the other GGUF conversions above.
MT-Gen1-gemma-3-12B-Q6_K-GGUF
zelk12/MT-Gen1-gemma-3-12B-Q6_K-GGUF: this model was converted to GGUF format from `zelk12/MT-Gen1-gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp is the same as for the other GGUF conversions above.
26_05_2025_Test1_LazyMergekit_gemma-3-12B-Q6_K-GGUF
zelk12/26_05_2025_Test1_LazyMergekit_gemma-3-12B-Q6_K-GGUF: this model was converted to GGUF format from `zelk12/26_05_2025_Test1_LazyMergekit_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp is the same as for the other GGUF conversions above.
MT1-Gen2_gemma-3-12B-Q6_K-GGUF
MT7-Gen2_gemma-3-12B
gemma-v1-Q6_K-GGUF
MT6-Gen3_gemma-3-12B-Q6_K-GGUF
zelk12/MT6-Gen3_gemma-3-12B-Q6_K-GGUF: this model was converted to GGUF format from `zelk12/MT6-Gen3_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp is the same as for the other GGUF conversions above.
MT-Merge-gemma-2-9B
MT2-Gen10-gemma-2-9B
recoilme-gemma-2-Ataraxy-9B-v0.1
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: lemon07r/Gemma-2-Ataraxy-v2-9B and recoilme/recoilme-gemma-2-9B-v0.4.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.33 |
| IFEval (0-Shot)     | 76.49 |
| BBH (3-Shot)        | 43.71 |
| MATH Lvl 5 (4-Shot) |  1.28 |
| GPQA (0-shot)       | 13.31 |
| MuSR (0-shot)       | 10.30 |
| MMLU-PRO (5-shot)   | 36.90 |
MT1-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT1-GB-gemma-2-9B and zelk12/MT1-IMMMU-gemma-2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 33.37 |
| IFEval (0-Shot)     | 79.47 |
| BBH (3-Shot)        | 44.16 |
| MATH Lvl 5 (4-Shot) | 13.37 |
| GPQA (0-shot)       | 12.75 |
| MuSR (0-shot)       | 13.16 |
| MMLU-PRO (5-shot)   | 37.31 |
MT-Merge2-gemma-2-9B
MT4-Gen5-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT4-Gen5-GMA-gemma-2-9B and zelk12/MT4-Gen5-IBMUMM-gemma-2-9B.
MT5-Gen5-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT5-Gen5-GI-gemma-2-9B and zelk12/MT5-Gen5-MABMUMM-gemma-2-9B.
MT-Merge5-gemma-2-9B
MT-Gen7-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Gen7-CN-gemma-2-9B and zelk12/MT-Gen7-UW-gemma-2-9B.
MT1-Gen7-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT1-Gen7-UW-gemma-2-9B and zelk12/MT1-Gen7-CN-gemma-2-9B.
MT-Merge8-gemma-2-9B
MT3-Gen9-gemma-2-9B
MT1-Gen11-gemma-2-9B
MT3-Gen11-gemma-2-9B
MT-Gen13-gemma-2-9B
MT1-Gen13-gemma-2-9B
MT1-Gen3-MUI-gemma-2-9B
recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: recoilme/recoilme-gemma-2-9B-v0.4 and lemon07r/Gemma-2-Ataraxy-v2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 33.06 |
| IFEval (0-Shot)     | 77.07 |
| BBH (3-Shot)        | 43.85 |
| MATH Lvl 5 (4-Shot) | 14.12 |
| GPQA (0-shot)       | 12.42 |
| MuSR (0-shot)       | 13.13 |
| MMLU-PRO (5-shot)   | 37.78 |
recoilme-gemma-2-Gutenberg-Doppel-9B-v0.1
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: recoilme/recoilme-gemma-2-9B-v0.4 and nbeerbower/Gemma2-Gutenberg-Doppel-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 31.46 |
| IFEval (0-Shot)     | 76.15 |
| BBH (3-Shot)        | 43.94 |
| MATH Lvl 5 (4-Shot) |  6.34 |
| GPQA (0-shot)       | 12.19 |
| MuSR (0-shot)       | 13.31 |
| MMLU-PRO (5-shot)   | 36.84 |
recoilme-gemma-2-Ataraxy-9B-v0.2
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.75 and zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.10 |
| IFEval (0-Shot)     | 76.00 |
| BBH (3-Shot)        | 43.63 |
| MATH Lvl 5 (4-Shot) |  1.13 |
| GPQA (0-shot)       | 13.09 |
| MuSR (0-shot)       |  9.84 |
| MMLU-PRO (5-shot)   | 36.92 |
MT1-IM-gemma-2-9B
MT1-MMU-gemma-2-9B
MT2-IF-gemma-2-RIv0.1RAt0.25v0.1-9B
MT2-BB-gemma-2-RGDv0.1RAt0.25v0.1-9B
MT2-MM-gemma-2-Rv0.4RAt0.25v0.1-9B
MT3-GMU-gemma-2-9B
MT2-MMMAGMU-gemma-2-9B
MT2-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT2-MMMAGMU-gemma-2-9B and zelk12/MT2-IB-gemma-2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 33.03 |
| IFEval (0-Shot)     | 78.86 |
| BBH (3-Shot)        | 44.17 |
| MATH Lvl 5 (4-Shot) | 13.22 |
| GPQA (0-shot)       | 12.98 |
| MuSR (0-shot)       | 11.54 |
| MMLU-PRO (5-shot)   | 37.43 |
MT3-BMM-gemma-2-9B
MT4-GP-gemma-2-RIv0.1RGDv0.1-9B
MT4-IMU-gemma-2-9B
MT5-BB-gemma-2-Av4cRAv0.1-9B
MT5-GP-gemma-2-RIv0.1RAv0.1-9B
MT5-IG-gemma-2-9B
MT5-IGMAMU-gemma-2-9B
MT3-Gen1-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen1-MMMUMAG-gemma-2-9B and zelk12/MT3-Gen1-BI-gemma-2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 31.05 |
| IFEval (0-Shot)     | 78.38 |
| BBH (3-Shot)        | 44.12 |
| MATH Lvl 5 (4-Shot) |  3.25 |
| GPQA (0-shot)       | 12.86 |
| MuSR (0-shot)       | 10.76 |
| MMLU-PRO (5-shot)   | 36.96 |
MT-Gen2-MU-gemma-2-MT1RAv0.1t0.25-9B
GGUF Static: https://huggingface.co/mradermacher/MT-Gen2-MU-gemma-2-MT1RAv0.1t0.25-9B-GGUF

This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25 and zelk12/MT1-gemma-2-9B.
MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B
GGUF Static: https://huggingface.co/mradermacher/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B-GGUF

This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT4-Gen1-gemma-2-9B and zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25.
MT-Gen2-GI-gemma-2-9B
MT-Gen2-gemma-2-9B
MT2-Gen2-IF-gemma-2-MT5MTM-9B
MT2-Gen2-BB-gemma-2-MTMMT5-9B
MT3-Gen2-GMM-gemma-2-9B
MT-Gen3-GP-gemma-2-S5MTg2-9B
MT1-Gen3-MA-gemma-2-S5Av4d-9B
MT1-Gen3-GP-gemma-2-S4S5-9B
GGUF Static: https://huggingface.co/mradermacher/MT1-Gen3-GP-gemma-2-S4S5-9B-GGUF

This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: allknowingroger/Gemmaslerp4-10B and allknowingroger/GemmaSlerp5-10B.
MT1-Gen3-BG-gemma-2-9B
MT1-Gen3-MMMAMUI-gemma-2-9B
MT2-Gen3-BB-gemma-2-MTMS2-9B
MT2-Gen3-BMA-gemma-2-9B
MT2-Gen3-MUG-gemma-2-9B
MT3-Gen3-GP-gemma-2-S4RGDv0.1-9B
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen3-GP-gemma-2-S4RGDv0.1-9B-GGUF

This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: allknowingroger/Gemmaslerp4-10B and zelk12/recoilme-gemma-2-Gutenberg-Doppel-9B-v0.1.
MT3-Gen3-IMM-gemma-2-9B
MT4-Gen3-MU-gemma-2-S2MT4g2-9B
MT4-Gen3-MM-gemma-2-Rv0.4MT4g2-9B
MT5-Gen3-MAG-gemma-2-9B
MT1-Gen4-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT1-Gen4-MUMA-gemma-2-9B and zelk12/MT1-Gen4-GIBMM-gemma-2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 31.51 |
| IFEval (0-Shot)     | 79.41 |
| BBH (3-Shot)        | 43.15 |
| MATH Lvl 5 (4-Shot) |  4.91 |
| GPQA (0-shot)       | 12.98 |
| MuSR (0-shot)       | 12.09 |
| MMLU-PRO (5-shot)   | 36.51 |
MT2-Gen4-IMU-gemma-2-9B
MT2-Gen4-MAG-gemma-2-9B
MT2-Gen4-BMM-gemma-2-9B
MT3-Gen4-BB-gemma-2-MT2g2MT5g3-9B
MT3-Gen4-GP-gemma-2-MT5g3MT2g2-9B
MT4-Gen4-BB-gemma-2-MT4g2MT3g2-9B
MT4-Gen4-BI-gemma-2-9B
MT-Merge4-MA-gemma-2-MT3g4MTg4-9B
MT-Merge4-BMU-gemma-2-9B
MT-Merge4-GI-gemma-2-9B
MT-Merge4-BMUGI-gemma-2-9B
MT-Gen5-IF-gemma-2-MT5g4MT4g2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT4-Gen2-gemma-2-9B and zelk12/MT5-Gen4-gemma-2-9B.
MT3-Gen5-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT3-Gen5-MAI-gemma-2-9B and zelk12/MT3-Gen5-MMGBMU-gemma-2-9B.
MT4-Gen5-IB-gemma-2-9B
MT4-Gen5-MUMM-gemma-2-9B
MT5-Gen5-IF-gemma-2-EMTg2-9B
MT5-Gen5-GP-gemma-2-Av4dMTg2-9B
MT5-Gen5-GI-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT5-Gen5-GP-gemma-2-Av4dMTg2-9B and zelk12/MT5-Gen5-IF-gemma-2-EMTg2-9B.
MT-Merge5-IF-gemma-2-MTg5MT1g5-9B
MTM-Merge-MA-gemma-2-MTM4MTM5-9B
MTM-Merge-MMMU-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MTM-Merge-MM-gemma-2-MTM4MTM2-9B and zelk12/MTM-Merge-MU-gemma-2-MTM4MTM3-9B.
MT-Max-Merge_02012025163610-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Max-Merge_02012025163610-MAMM-gemma-2-9B and zelk12/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.
Rv0.4MT4g2-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: recoilme/recoilme-gemma-2-9B-v0.4 and zelk12/MT4-Gen2-gemma-2-9B.
MT1-Max-Merge_02012025163610-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT1-Max-Merge_02012025163610-MAB-gemma-2-9B and zelk12/MT1-Max-Merge_02012025163610-GMMMUI-gemma-2-9B.
MT2-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B
MT5-Max-Merge_02012025163610-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT5-Max-Merge_02012025163610-GI-gemma-2-9B and zelk12/MT5-Max-Merge_02012025163610-BMAMUMM-gemma-2-9B.
MT1-Gen12-gemma-2-9B
MT1-gemma-3-12B
This is a merge of pre-trained language models created using mergekit with the DARE TIES merge method, using IlyaGusev/saiga_gemma3_12b as a base. The following model was included in the merge: TheDrummer/Fallen-Gemma3-12B-v1.
MT2-gemma-3-12B
This is a merge of pre-trained language models created using mergekit with the DARE TIES merge method, using TheDrummer/Fallen-Gemma3-12B-v1 as a base. The following model was included in the merge: soob3123/amoral-gemma3-12B-v2-qat.
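Unlike SLERP, DARE TIES starts from a common base model: each parent's fine-tuning delta from the base is randomly sparsified and rescaled (DARE), and the surviving deltas are then merged back onto the base with TIES sign-election. A minimal sketch of just the DARE drop-and-rescale step (pure Python on toy weight lists; mergekit's actual implementation works on full tensors and also performs the TIES consensus step, omitted here):

```python
import random

def dare(base, tuned, drop_p, rng):
    """DARE: drop each fine-tuning delta (tuned - base) with
    probability drop_p and rescale the survivors by 1/(1 - drop_p),
    so the merged weights keep the delta's expected value."""
    merged = []
    for b, w in zip(base, tuned):
        delta = w - b
        if rng.random() < drop_p:
            delta = 0.0                 # dropped: keep the base weight
        else:
            delta /= (1.0 - drop_p)     # rescale the surviving delta
        merged.append(b + delta)
    return merged
```

Because the rescaling keeps the expected delta unchanged, a fairly high drop probability can be used while still approximating the fine-tuned behavior, which is what makes merging several fine-tunes onto one base workable.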
MT3-Gen3_gemma-3-12B
MT3-Gen3_gemma-3-12B is a merge of the following models using LazyMergekit: IlyaGusev/saiga_gemma3_12b, zelk12/MT1-gemma-3-12B, soob3123/amoral-gemma3-12B-v2, zelk12/MT-Gen1-gemma-3-12B and zelk12/MT-gemma-3-12B.
MT4-Gen3_gemma-3-12B
MT4-Gen3_gemma-3-12B is a merge of the following models using LazyMergekit: IlyaGusev/saiga_gemma3_12b, zelk12/MT1-gemma-3-12B, soob3123/amoral-gemma3-12B-v2, zelk12/MT-Gen1-gemma-3-12B and zelk12/MT-gemma-3-12B.
MT6-Gen3_gemma-3-12B
MT6-Gen3_gemma-3-12B is a merge of the following models using LazyMergekit: IlyaGusev/saiga_gemma3_12b, zelk12/MT1-gemma-3-12B, soob3123/amoral-gemma3-12B-v2, zelk12/MT-Gen1-gemma-3-12B and zelk12/MT-gemma-3-12B.
MT4-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT4-BMM-gemma-2-9B and zelk12/MT4-IMUGMA-gemma-2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 33.16 |
| IFEval (0-Shot)     | 77.62 |
| BBH (3-Shot)        | 43.55 |
| MATH Lvl 5 (4-Shot) | 15.63 |
| GPQA (0-shot)       | 11.74 |
| MuSR (0-shot)       | 13.00 |
| MMLU-PRO (5-shot)   | 37.40 |
MT5-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT5-IGMAMU-gemma-2-9B and zelk12/MT5-MMB-gemma-2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 32.44 |
| IFEval (0-Shot)     | 80.48 |
| BBH (3-Shot)        | 44.27 |
| MATH Lvl 5 (4-Shot) |  8.61 |
| GPQA (0-shot)       | 12.42 |
| MuSR (0-shot)       | 11.48 |
| MMLU-PRO (5-shot)   | 37.41 |
MT-Merge-MU-gemma-2-MT1MT4-9B
MT-Merge-GMAMUI-gemma-2-9B
MT1-Gen1-BB-gemma-2-MT3MT2-9B
MT1-Gen1-GP-gemma-2-MT2MT1-9B
MT1-Gen1-MU-gemma-2-Av4AMT1-9B
MT2-Gen1-GP-gemma-2-RIv0.1MT5-9B
MT2-Gen1-MMMUMAG-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT2-Gen1-MMMU-gemma-2-9B and zelk12/MT2-Gen1-MAG-gemma-2-9B.
MT3-Gen1-MM-gemma-2-Av4cRAv0.1-9B
MT3-Gen1-BI-gemma-2-9B
MT3-Gen1-MMMU-gemma-2-9B
MT4-Gen1-MM-gemma-2-Rv0.4MT1-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT1-gemma-2-9B and recoilme/recoilme-gemma-2-9B-v0.4.
MT4-Gen1-IB-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT4-Gen1-IF-gemma-2-MT5MT1-9B and zelk12/MT4-Gen1-BB-gemma-2-MT5MT1-9B.
MT5-Gen1-MU-gemma-2-GMT1-9B
MT5-Gen1-MMG-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT5-Gen1-GP-gemma-2-RAv0.1MT1-9B and zelk12/MT5-Gen1-MM-gemma-2-Av4cMT1-9B.
Gemma-2-MT1MT1g1-9B
Gemma-2-Tv3Tv1-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: TheDrummer/Tiger-Gemma-9B-v3 and TheDrummer/Tiger-Gemma-9B-v1.
Gemma-2-TM-9B
MT-Merge1-IF-gemma-2-MT1g1MT4g1-9B
MT5-Gen1-MMGBI-gemma-2-9B
MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B
MT4-Gen2-gemma-2-9B-Q6_K-GGUF
zelk12/MT4-Gen2-gemma-2-9B-Q6_K-GGUF: this model was converted to GGUF format from `zelk12/MT4-Gen2-gemma-2-9B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; usage with llama.cpp is the same as for the other GGUF conversions above.
MT4-Gen3-GP-gemma-2-MT3g2MT4g2-9B
MT5-Gen3-IF-gemma-2-MT4g2MTM2MU-9B
MT5-Gen3-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT5-Gen3-gemma-2-9B-GGUF

This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT5-Gen3-MAGBMU-gemma-2-9B and zelk12/MT5-Gen3-IMM-gemma-2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 32.80 |
| IFEval (0-Shot)     | 78.25 |
| BBH (3-Shot)        | 43.89 |
| MATH Lvl 5 (4-Shot) | 11.56 |
| GPQA (0-shot)       | 13.53 |
| MuSR (0-shot)       | 12.08 |
| MMLU-PRO (5-shot)   | 37.50 |
MT-Merge3-GP-gemma-2-MT5g3MT1g3-9B
MT-Merge3-MMB-gemma-2-9B
gemma-2-S2MTM-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT-Merge-gemma-2-9B and allknowingroger/Gemmaslerp2-9B.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 31.15 |
| IFEval (0-Shot)     | 78.23 |
| BBH (3-Shot)        | 43.12 |
| MATH Lvl 5 (4-Shot) |  4.00 |
| GPQA (0-shot)       | 12.75 |
| MuSR (0-shot)       | 12.16 |
| MMLU-PRO (5-shot)   | 36.63 |
MT1-Gen4-IF-gemma-2-MTM2MUMTg2-9B
MT1-Gen4-GP-gemma-2-S4S5-9B
MT1-Gen4-MUMA-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT1-Gen4-MU-gemma-2-S2S5-9B and zelk12/MT1-Gen4-MA-gemma-2-S5S4-9B.
MT1-Gen4-BMM-gemma-2-9B
This is a merge of pre-trained language models created using mergekit with the SLERP merge method. The following models were included in the merge: zelk12/MT1-Gen4-MM-gemma-2-MTg2Av4d-9B and zelk12/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.
MT1-Gen4-GIBMM-gemma-2-9B
MT4-Gen4-MA-gemma-2-S2MT4g2-9B
MT4-Gen4-MAMM-gemma-2-9B
MT4-Gen4-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT4-Gen4-GMUBI-gemma-2-9B
zelk12/MT4-Gen4-MAMM-gemma-2-9B

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric            |Value|
|-------------------|----:|
|Avg.               |32.09|
|IFEval (0-Shot)    |78.74|
|BBH (3-Shot)       |43.48|
|MATH Lvl 5 (4-Shot)| 7.70|
|GPQA (0-shot)      |13.65|
|MuSR (0-shot)      |12.04|
|MMLU-PRO (5-shot)  |36.93|
MT5-Gen4-GP-gemma-2-MT2g2MT4g2-9B
MT2-Gen5-IF-gemma-2-S2MT4g2-9B
MT2-Gen5-GB-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT2-Gen5-BB-gemma-2-MT3g2MTM-9B
zelk12/MT2-Gen5-GP-gemma-2-S2MT3g2-9B

The following YAML configuration was used to produce this model:
MT-Merge5-MMB-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT-Merge5-MM-gemma-2-MT3g5MTg5-9B
zelk12/MT-Merge5-BB-gemma-2-MT5g5MTg5-9B

The following YAML configuration was used to produce this model:
Rv0.4DMv1t0.25-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

recoilme/recoilme-gemma-2-9B-v0.4
sam-paech/Darkest-muse-v1

The following YAML configuration was used to produce this model:
Rv0.4DMv1t0.25Tt0.25-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

TheDrummer/Tiger-Gemma-9B-v3
zelk12/Rv0.4DMv1t0.25-gemma-2-9B

The following YAML configuration was used to produce this model:
T31122024203920-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/Rv0.4DMv1t0.25Tt0.25-gemma-2-9B
zelk12/MT3-Gen4-gemma-2-9B

The following YAML configuration was used to produce this model:
MTM-Merge-gemma-2-9B-Q6_K-GGUF
Test01012025155054t0.5_gemma-2
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT3-Gen4-gemma-2-9B

The following YAML configuration was used to produce this model:
MT-Max-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF
MT2-Max-Merge_02012025163610-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT2-Max-Merge_02012025163610-MMMA-gemma-2-9B
zelk12/MT2-Max-Merge_02012025163610-MUBGI-gemma-2-9B

The following YAML configuration was used to produce this model:
MT3-Max-Merge_02012025163610-MA-gemma-2-MTM4MT5g4-9B
MT3-Max-Merge_02012025163610-GP-gemma-2-MTM4MT1g2-9B
MT3-Max-Merge_02012025163610-MUI-gemma-2-9B
MT4-Max-Merge_02012025163610-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT4-Max-Merge_02012025163610-MAGMUMM-gemma-2-9B
zelk12/MT4-Max-Merge_02012025163610-IB-gemma-2-9B

The following YAML configuration was used to produce this model:
MT4-Max-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF
MT5-Max-Merge_02012025163610-BMAMUMM-gemma-2-9B
MTMaMe-Merge_02012025163610-MA-gemma-2-MTMaMe02012025163610MT1MaMe02012025163610-9B
MTMaMe-Merge_02012025163610-MUG-gemma-2-9B
MTMaMe-Merge_02012025163610-MMBMUG-gemma-2-9B
MT-Gen6fix-gemma-2-9B-Q6_K-GGUF
zelk12/MT-Gen6fix-gemma-2-9B-Q6_K-GGUF

This model was converted to GGUF format from `zelk12/MT-Gen6fix-gemma-2-9B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
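The steps above can be sketched end to end as follows. The `--hf-file` name below is an assumption based on GGUF-my-repo's usual lowercase naming convention, so check the repo's file listing for the exact name:

```shell
# Option 1: prebuilt binaries via Homebrew (macOS and Linux)
brew install llama.cpp

# Option 2: build from source with CURL support; add LLAMA_CUDA=1 on
# Linux with an Nvidia GPU
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make

# Run inference, pulling the quantized file straight from the Hub
# (the prompt is only an example)
./llama-cli \
  --hf-repo zelk12/MT-Gen6fix-gemma-2-9B-Q6_K-GGUF \
  --hf-file mt-gen6fix-gemma-2-9b-q6_k.gguf \
  -p "The meaning to life and the universe is"
```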
MT2-Gen6-W-gemma-2-Tv1AR-9B
MT3-Gen6-W-gemma-2-ItAS2-9B
MT3-Gen6-UC-gemma-2-9B
MT3-Gen6-WN-gemma-2-9B
MT3-Gen7-WC-gemma-2-9B
MT-Gen8-WN-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT-Gen8-N-gemma-2-MT2g7CB-9B
zelk12/MT-Gen8-W-gemma-2-MTg6FItA-9B

The following YAML configuration was used to produce this model:
MT1-Gen8-U-gemma-2-MTg6fMtg6-9B
MT1-Gen8-W-gemma-2-MTg6MTg6f-9B
MT1-Gen8-CN-gemma-2-9B
MT1-Gen8-gemma-2-9B-Q6_K-GGUF
MT2-Gen8-W-gemma-2-Rv1RAv0.1t0.25-9B
MT2-Gen8-C-gemma-2-ARMT3g6-9B
MT2-Gen8-NU-gemma-2-9B
MT-Merge8-N-gemma-2-MT2g8MTg8-9B
MT-Gen9-CU-gemma-2-9B
MT1-Gen9-WU-gemma-2-9B
MT2-Gen9-N-gemma-2-RAv0.1t0.25AR-9B
MT2-Gen9-gemma-2-9B-Q6_K-GGUF
MT-Merge9-CN-gemma-2-9B
MT-Merge9-gemma-2-9B-Q6_K-GGUF
28_05_2025_Test2_LazyMergekit_gemma-3-12B
28_05_2025_Test2_LazyMergekit_gemma-3-12B is a merge of the following models using LazyMergekit:

zelk12/MT-Gen1-gemma-3-12B
soob3123/amoral-gemma3-12B-v2
zelk12/MT1-gemma-3-12B
29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF
zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF

This model was converted to GGUF format from `zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
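The checkpoint can also be served over llama.cpp's OpenAI-compatible HTTP API. As elsewhere on this page, the `--hf-file` name is an assumption inferred from GGUF-my-repo's naming convention; confirm it against the repo's file list:

```shell
# Start an OpenAI-compatible server (default port 8080) with a
# 2048-token context, downloading the GGUF from the Hub on first run
llama-server \
  --hf-repo zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF \
  --hf-file 29_05_2025_test3_lazymergekit_gemma-3-12b-q6_k.gguf \
  -c 2048
```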
gemma-teacher3-v1-Q6_K-GGUF
gemma-teacher2-v1-Q6_K-GGUF
MT1-Gen3_gemma-3-12B
MT3-Gen3_gemma-3-12B-Q6_K-GGUF
MT7-Gen3_gemma-3-12B
MT3-Gen8-gemma-2-9B
MT-Gen3-gemma-2-9B
MT3-Gen3-GMA-gemma-2-9B
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen3-GMA-gemma-2-9B-GGUF

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT3-Gen3-GP-gemma-2-S4RGDv0.1-9B
zelk12/MT3-Gen3-MA-gemma-2-S4MT2-9B

The following YAML configuration was used to produce this model:
MT-Gen5-BB-gemma-2-MTM4MT5g4-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT-Merge4-gemma-2-9B
zelk12/MT5-Gen4-gemma-2-9B

The following YAML configuration was used to produce this model:
MT1-Gen6-gemma-2-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MT1-Gen6-NC-gemma-2-9B
zelk12/MT1-Gen6-UW-gemma-2-9B

The following YAML configuration was used to produce this model:
MT2-Gen8-gemma-2-9B
MTMMe-Merge-gemma-2-9B
MTMMe-Merge-gemma-2-9B_NuSLERP_w0.7_0.3
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:

zelk12/MTM-Merge-gemma-2-9B
zelk12/MTM-Merge1-gemma-2-9B

The following YAML configuration was used to produce this model:
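The original YAML is not reproduced on this page. Judging by the model name (`NuSLERP_w0.7_0.3`), a plausible mergekit NuSLERP configuration for this pair would look like the sketch below; the 0.7/0.3 weights follow the name, but everything else is an assumption:

```yaml
# Hypothetical reconstruction -- not the actual config used
models:
  - model: zelk12/MTM-Merge-gemma-2-9B
    parameters:
      weight: 0.7
  - model: zelk12/MTM-Merge1-gemma-2-9B
    parameters:
      weight: 0.3
merge_method: nuslerp
dtype: bfloat16
```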
MTMMe-Merge-gemma-2-9B_NuSLERP_w0.7_0.3-Q6_K-GGUF
zelk12/MTMMe-Merge-gemma-2-9B_NuSLERP_w0.7_0.3-Q6_K-GGUF

This model was converted to GGUF format from `zelk12/MTMMe-Merge-gemma-2-9B_NuSLERP_w0.7_0.3` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
MT-Gen2-gemma-3-12B
MT-Gen2-gemma-3-12B is a merge of the following models using LazyMergekit:

zelk12/MT-Gen1-gemma-3-12B
soob3123/amoral-gemma3-12B-v2
zelk12/MT1-gemma-3-12B
IlyaGusev/saiga_gemma3_12b
TheDrummer/Fallen-Gemma3-12B-v1
MT1-Gen2_gemma-3-12B
MT1-Gen2_gemma-3-12B is a merge of the following models using LazyMergekit:

zelk12/MT-Gen1-gemma-3-12B
soob3123/amoral-gemma3-12B-v2
zelk12/MT1-gemma-3-12B
IlyaGusev/saiga_gemma3_12b
TheDrummer/Fallen-Gemma3-12B-v1