allknowingroger

131 models • 68 total models in database

MultiverseEx26-7B-slerp

MultiverseEx26-7B-slerp is a merge of the following models using LazyMergekit: yam-peleg/Experiment26-7B and MTSAIR/multi_verse_model.

license:apache-2.0
7,908
1
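
Merges like the one above are driven by a small mergekit YAML config (LazyMergekit is a Colab wrapper that generates one). As a rough sketch of what a two-model SLERP config and invocation look like — the author's actual configuration is not reproduced on this page, so the layer ranges, t value, and dtype below are assumptions:

```python
# Hypothetical sketch of a two-model SLERP merge with mergekit
# (https://github.com/arcee-ai/mergekit). The model IDs come from the card
# above; layer_range, t, and dtype are illustrative guesses, not the
# author's actual settings.
import subprocess
import textwrap

config = textwrap.dedent("""\
    slices:
      - sources:
          - model: yam-peleg/Experiment26-7B
            layer_range: [0, 32]
          - model: MTSAIR/multi_verse_model
            layer_range: [0, 32]
    merge_method: slerp
    base_model: yam-peleg/Experiment26-7B
    parameters:
      t: 0.5              # constant blend ratio; real configs often vary t per layer
    dtype: bfloat16
    """)

with open("merge.yaml", "w") as f:
    f.write(config)

# mergekit-yaml is mergekit's CLI entry point: config in, merged model out.
subprocess.run(["mergekit-yaml", "merge.yaml", "./MultiverseEx26-7B-slerp"], check=True)
```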

Qwen2.5-slerp-14B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes v000000/Qwen2.5-Lumen-14B and Qwen/Qwen2.5-14B-Instruct.

license:apache-2.0
38
0
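
For reference, SLERP interpolates along the great-circle arc between two weight tensors instead of the straight line a plain weighted average follows, which keeps the interpolated weights' norms from collapsing when the tensors point in different directions. A minimal numpy sketch of the per-tensor operation:

```python
# Minimal numpy sketch of spherical linear interpolation (SLERP), the
# per-tensor operation behind mergekit's slerp method. The angle is taken
# between normalized copies; the original tensors are then blended with
# sine weights. Real merges apply this to every parameter of both models.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = float(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    theta = np.arccos(dot)              # angle between the two tensors
    if theta < eps:                     # nearly parallel: plain lerp is stable
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

a, b = np.random.randn(4096), np.random.randn(4096)
half = slerp(0.5, a, b)                 # equal blend of the two "tensors"
```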

Strangecoven-7B-slerp

Strangecoven-7B-slerp is a merge of the following models using LazyMergekit: Gille/StrangeMerges_16-7B-slerp and raidhon/coven_7b_128k_orpo_alpha.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 20.16 |
| IFEval (0-Shot) | 37.46 |
| BBH (3-Shot) | 34.83 |
| MATH Lvl 5 (4-Shot) | 6.72 |
| GPQA (0-shot) | 5.26 |
| MuSR (0-shot) | 10.42 |
| MMLU-PRO (5-shot) | 26.27 |

license:apache-2.0
19
1

Qwenslerp4-14B

8
1

HomerSlerp4-7B

License: Apache 2.0. Library: transformers.

license:apache-2.0
8
0

Gemma2Slerp1-27B

License: Apache 2.0. Library: transformers.

license:apache-2.0
8
0

Gemma2Slerp3-27B

License: Apache 2.0. Library: transformers.

license:apache-2.0
8
0

QwenStock2-14B

This is a merge of pre-trained language models created with mergekit, using the Model Stock merge method with CultriX/SeQwence-14Bv1 as the base. The merge includes allknowingroger/Qwenslerp2-14B, CultriX/Qwen2.5-14B-MegaMerge-pt2, CultriX/Qwen2.5-14B-MergeStock, CultriX/Qwestion-14B, allknowingroger/Qwen2.5-slerp-14B, allknowingroger/Qwenslerp3-14B and CultriX/Qwen2.5-14B-Wernicke.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 36.93 |
| IFEval (0-Shot) | 55.63 |
| BBH (3-Shot) | 50.60 |
| MATH Lvl 5 (4-Shot) | 29.91 |
| GPQA (0-shot) | 17.23 |
| MuSR (0-shot) | 19.28 |
| MMLU-PRO (5-shot) | 48.95 |

license:apache-2.0
7
1
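
Several entries here (this one and QwenStock1/3 below) use mergekit's Model Stock method. Model Stock (Jang et al., 2024) averages N fine-tunes of a shared base, then interpolates that average back toward the base with a ratio computed from the angle between the fine-tuned checkpoints' task vectors. A simplified per-tensor sketch, assuming the paper's formula (mergekit's implementation differs in detail):

```python
# Simplified per-tensor sketch of Model Stock (Jang et al., 2024), the
# method behind mergekit's model_stock. N fine-tunes of one base are
# averaged, then interpolated back toward the base with a ratio t derived
# from the angle between the fine-tuned "task vectors". Illustrative only;
# mergekit's real implementation handles details this sketch ignores.
import numpy as np

def model_stock(base: np.ndarray, finetuned: list[np.ndarray]) -> np.ndarray:
    deltas = [ft - base for ft in finetuned]
    # Mean pairwise cosine similarity approximates cos(theta) in the paper.
    cos = np.mean([
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        for i, a in enumerate(deltas) for b in deltas[i + 1:]
    ])
    n = len(finetuned)
    t = n * cos / (1.0 + (n - 1) * cos)   # interpolation ratio from the paper
    return t * np.mean(finetuned, axis=0) + (1.0 - t) * base

base = np.random.randn(2048)
fts = [base + 0.05 * np.random.randn(2048) for _ in range(7)]
merged = model_stock(base, fts)
```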

Ph3della5-14B

This is a merge of pre-trained language models created with mergekit, using the della_linear merge method with jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as the base. The merge includes jpacifico/Chocolatine-14B-Instruct-DPO-v1.1.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 29.92 |
| IFEval (0-Shot) | 47.99 |
| BBH (3-Shot) | 48.41 |
| MATH Lvl 5 (4-Shot) | 14.35 |
| GPQA (0-shot) | 12.30 |
| MuSR (0-shot) | 14.36 |
| MMLU-PRO (5-shot) | 42.08 |

license:apache-2.0
7
0

Gemmaslerp2-9B

license:apache-2.0
6
3

GemmaSlerp5-10B

License: Apache 2.0. Library: transformers.

license:apache-2.0
6
2

Gemma2Slerp2-27B

License: Apache 2.0. Library: transformers.

license:apache-2.0
5
1

DelexaMaths-12B-MoE

license:apache-2.0
5
0

HomerSlerp1-7B

License: Apache 2.0. Library: transformers.

license:apache-2.0
4
2

Gemma2Slerp2-2.6B

Base models: Lil-R/2_PRYMMAL-ECE-2B-SLERP-V1 and Lil-R/2_PRYMMAL-ECE-2B-SLERP-V2.

4
2

ROGERphi-7B-slerp

ROGERphi-7B-slerp is a merge of the following models using LazyMergekit: rhysjones/phi-2-orange-v2 and mobiuslabs/aanaphi-v0.1.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 20.59 |
| IFEval (0-Shot) | 38.61 |
| BBH (3-Shot) | 32.82 |
| MATH Lvl 5 (4-Shot) | 6.65 |
| GPQA (0-shot) | 5.15 |
| MuSR (0-shot) | 17.53 |
| MMLU-PRO (5-shot) | 22.81 |

license:apache-2.0
4
0

NexusMistral2-7B-slerp

license:apache-2.0
4
0

NeuralWestSeverus-7B-slerp

NeuralWestSeverus-7B-slerp is a merge of the following models using LazyMergekit: Kukedlc/Neural4gsm8k and PetroGPT/WestSeverus-7B-DPO.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 20.60 |
| IFEval (0-Shot) | 41.36 |
| BBH (3-Shot) | 33.41 |
| MATH Lvl 5 (4-Shot) | 6.87 |
| GPQA (0-shot) | 2.80 |
| MuSR (0-shot) | 15.41 |
| MMLU-PRO (5-shot) | 23.75 |

license:apache-2.0
4
0

Ph3task2-14B

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as the base. The merge includes failspy/Phi-3-medium-4k-instruct-abliterated-v3.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 28.25 |
| IFEval (0-Shot) | 47.13 |
| BBH (3-Shot) | 44.08 |
| MATH Lvl 5 (4-Shot) | 12.46 |
| GPQA (0-shot) | 10.74 |
| MuSR (0-shot) | 16.62 |
| MMLU-PRO (5-shot) | 38.44 |

license:apache-2.0
4
0
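
Task-arithmetic merges such as this one build on Ilharco et al. (2022): each fine-tune is reduced to a task vector (its weights minus the base), and a weighted sum of task vectors is added back onto the base. A minimal per-tensor sketch; the weights and toy tensors are placeholders, not the author's settings:

```python
# Minimal numpy sketch of task-arithmetic merging (Ilharco et al., 2022),
# the method mergekit exposes as merge_method: task_arithmetic. Each
# fine-tune contributes a "task vector" (its weights minus the base),
# scaled and added back onto the base. Applied per tensor in practice.
import numpy as np

def task_arithmetic(base: np.ndarray, finetuned: list[np.ndarray],
                    weights: list[float]) -> np.ndarray:
    out = base.copy()
    for w, ft in zip(weights, finetuned):
        out += w * (ft - base)   # add the scaled task vector
    return out

# Toy example with random arrays standing in for model tensors.
base = np.random.randn(1024)
fts = [base + 0.1 * np.random.randn(1024), base + 0.1 * np.random.randn(1024)]
merged = task_arithmetic(base, fts, weights=[0.7, 0.3])
```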

Gemmaslerp-9B

license:apache-2.0
4
0

Qwenslerp3-7B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes sethuiyer/Qwen2.5-7B-Anvita and fblgit/cybertron-v4-qw7B-MGS.

license:apache-2.0
4
0

Marco-01-slerp1-7B

License: Apache 2.0. Library: transformers.

license:apache-2.0
4
0

Qwenslerp2-14B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes rombodawg/Rombos-LLM-V2.6-Qwen-14b and v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno.

license:apache-2.0
3
1

Qwen2.5-7B-task4

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with Qwen/Qwen2.5-7B as the base. The merge includes KPEP/krx-qwen-2.5-7b-v1.4.2 and Tsunami-th/Tsunami-0.5x-7B-Instruct.

license:apache-2.0
3
1

Qwen2.5-7B-task8

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with Qwen/Qwen2.5-7B-Instruct as the base. The merge includes huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2 and Orion-zhen/Qwen2.5-7B-Instruct-Uncensored.

license:apache-2.0
3
1

QwenSlerp12-7B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes allknowingroger/Qwenslerp3-7B and jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 29.54 |
| IFEval (0-Shot) | 50.76 |
| BBH (3-Shot) | 36.41 |
| MATH Lvl 5 (4-Shot) | 26.74 |
| GPQA (0-shot) | 8.72 |
| MuSR (0-shot) | 16.13 |
| MMLU-PRO (5-shot) | 38.45 |

license:apache-2.0
3
1

QwenSlerp6-14B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes CultriX/SeQwence-14Bv1 and allknowingroger/Qwenslerp2-14B.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 39.02 |
| IFEval (0-Shot) | 68.67 |
| BBH (3-Shot) | 47.59 |
| MATH Lvl 5 (4-Shot) | 34.14 |
| GPQA (0-shot) | 16.44 |
| MuSR (0-shot) | 18.32 |
| MMLU-PRO (5-shot) | 48.95 |

license:apache-2.0
3
1

LimyQstar-7B-slerp

Tags: merge, mergekit.

license:apache-2.0
3
0

FrankenRoger-10B-passthrough

license:apache-2.0
3
0

Neuraljack-12B-MoE

license:apache-2.0
3
0

MixTaoTruthful-13B-slerp

License: Apache 2.0. Tags: merge.

license:apache-2.0
3
0

MultiMash12-13B-slerp

license:apache-2.0
3
0

Codestral-19B-pass

license:apache-2.0
3
0

Qwenslerp2-7B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes fblgit/cybertron-v4-qw7B-MGS and Tsunami-th/Tsunami-0.5x-7B-Instruct.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 30.42 |
| IFEval (0-Shot) | 52.94 |
| BBH (3-Shot) | 37.44 |
| MATH Lvl 5 (4-Shot) | 31.87 |
| GPQA (0-shot) | 8.39 |
| MuSR (0-shot) | 12.82 |
| MMLU-PRO (5-shot) | 39.06 |

license:apache-2.0
3
0

LlamaSlerp1-8B

llama
3
0

MixTAO-19B-pass

License: Apache 2.0. Tags: merge.

license:apache-2.0
2
2

Gemmaslerp4-10B

license:apache-2.0
2
2

QwenStock1-14B

This is a merge of pre-trained language models created with mergekit, using the Model Stock merge method with Qwen/Qwen2.5-14B as the base. The merge includes CultriX/SeQwence-14Bv1, CultriX/Qwen2.5-14B-Wernicke, CultriX/Qwen2.5-14B-MegaMerge-pt2, CultriX/Qwestion-14B, allknowingroger/Qwen2.5-slerp-14B, Qwen/Qwen2.5-14B-Instruct, allknowingroger/Qwenslerp3-14B, allknowingroger/Qwenslerp2-14B and CultriX/Qwen2.5-14B-MergeStock.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 36.76 |
| IFEval (0-Shot) | 56.34 |
| BBH (3-Shot) | 50.08 |
| MATH Lvl 5 (4-Shot) | 29.38 |
| GPQA (0-shot) | 16.89 |
| MuSR (0-shot) | 18.79 |
| MMLU-PRO (5-shot) | 49.09 |

license:apache-2.0
2
2

Mistralmash2-7B-s

License: Apache 2.0. Tags: merge.

license:apache-2.0
2
1

Qwenslerp3-14B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes allknowingroger/Qwen2.5-slerp-14B and rombodawg/Rombos-LLM-V2.6-Qwen-14b.

license:apache-2.0
2
1

CalmeRys-78B-Orpo-F32

license:apache-2.0
2
1

Qwen-modelstock-15B

This is a merge of pre-trained language models created with mergekit, using the Model Stock merge method with allknowingroger/Qwenslerp2-14B as the base. The merge includes allknowingroger/Qwenslerp3-14B and allknowingroger/Qwen2.5-slerp-14B.

license:apache-2.0
2
1

Qwen2.5-7B-task2

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with Qwen/Qwen2.5-7B-Instruct as the base. The merge includes fblgit/cybertron-v4-qw7B-MGS and Tsunami-th/Tsunami-0.5x-7B-Instruct.

license:apache-2.0
2
1

HomerSlerp6-7B

license:apache-2.0
2
1

QwenSlerp5-14B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes CultriX/Qwestion-14B and CultriX/SeQwence-14Bv1.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 38.94 |
| IFEval (0-Shot) | 71.19 |
| BBH (3-Shot) | 47.39 |
| MATH Lvl 5 (4-Shot) | 33.16 |
| GPQA (0-shot) | 15.32 |
| MuSR (0-shot) | 17.81 |
| MMLU-PRO (5-shot) | 48.78 |

2
1

QwenStock3-14B

This is a merge of pre-trained language models created with mergekit, using the Model Stock merge method with CultriX/SeQwence-14Bv1 as the base. The merge includes CultriX/Qwen2.5-14B-MergeStock, allknowingroger/QwenStock1-14B, CultriX/Qwen2.5-14B-Wernicke, allknowingroger/QwenStock2-14B, allknowingroger/Qwenslerp2-14B, CultriX/Qwen2.5-14B-MegaMerge-pt2 and CultriX/Qwestion-14B.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 36.97 |
| IFEval (0-Shot) | 56.15 |
| BBH (3-Shot) | 50.58 |
| MATH Lvl 5 (4-Shot) | 29.68 |
| GPQA (0-shot) | 17.11 |
| MuSR (0-shot) | 19.11 |
| MMLU-PRO (5-shot) | 49.20 |

license:apache-2.0
2
1

limyClown-7B-slerp

limyClown-7B-slerp is a merge of the following models using LazyMergekit: liminerity/M7-7b and CorticalStack/shadow-clown-7B-slerp.

license:apache-2.0
2
0

MultiMerge-7B-slerp

MultiMerge-7B-slerp is a merge of the following models using LazyMergekit: allknowingroger/MultiverseEx26-7B-slerp and allknowingroger/limyClown-7B-slerp.

license:apache-2.0
2
0

StarlingMaths-12B-MoE

license:apache-2.0
2
0

Lamma3merge-15B-MoE

mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep
2
0

WestlakeMaziyar-7B-slerp

WestlakeMaziyar-7B-slerp is a merge of the following models using LazyMergekit: macadeliccc/WestLake-7B-v2-laser-truthy-dpo and MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 22.04 |
| IFEval (0-Shot) | 48.38 |
| BBH (3-Shot) | 33.34 |
| MATH Lvl 5 (4-Shot) | 5.82 |
| GPQA (0-shot) | 7.16 |
| MuSR (0-shot) | 14.49 |
| MMLU-PRO (5-shot) | 23.08 |

license:apache-2.0
2
0

MultiMash7-12B-slerp

License: Apache 2.0. Tags: merge.

license:apache-2.0
2
0

Mistral3mash1-7B-slerp

license:apache-2.0
2
0

MultiMash10-13B-slerp

License: Apache 2.0. Tags: merge.

license:apache-2.0
2
0

MultiMash11-13B-slerp

License: Apache 2.0. Tags: merge.

license:apache-2.0
2
0

Ph3task1-14B

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as the base. The merge includes jpacifico/Chocolatine-14B-Instruct-4k-DPO and failspy/Phi-3-medium-4k-instruct-abliterated-v3.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 30.08 |
| IFEval (0-Shot) | 46.95 |
| BBH (3-Shot) | 47.93 |
| MATH Lvl 5 (4-Shot) | 13.90 |
| GPQA (0-shot) | 13.42 |
| MuSR (0-shot) | 16.81 |
| MMLU-PRO (5-shot) | 41.49 |

license:apache-2.0
2
0

llama3Yi-40B

llama
2
0

Yislerp2-34B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes 01-ai/Yi-1.5-34B-Chat and CombinHorizon/YiSM-blossom5.1-34B-SLERP.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 30.10 |
| IFEval (0-Shot) | 39.93 |
| BBH (3-Shot) | 47.20 |
| MATH Lvl 5 (4-Shot) | 21.00 |
| GPQA (0-shot) | 15.21 |
| MuSR (0-shot) | 15.85 |
| MMLU-PRO (5-shot) | 41.38 |

llama
2
0

Yibuddy-35B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes 01-ai/Yi-1.5-34B-Chat and BattlescarZa/medibuddy-llm-34B.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 27.70 |
| IFEval (0-Shot) | 42.35 |
| BBH (3-Shot) | 42.81 |
| MATH Lvl 5 (4-Shot) | 12.24 |
| GPQA (0-shot) | 14.09 |
| MuSR (0-shot) | 15.97 |
| MMLU-PRO (5-shot) | 38.77 |

llama
2
0

Yislerp4-34B

llama
2
0

Qwen2.5-7B-task3

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with Qwen/Qwen2.5-7B as the base. The merge includes Tsunami-th/Tsunami-0.5x-7B-Instruct and jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.1.

license:apache-2.0
2
0

QwenTask1-32B

license:apache-2.0
2
0

QwenTask2-32B

license:apache-2.0
2
0

SmolVLM-Base-vqav2

license:apache-2.0
2
0

Heart_Stolen-8B-task

llama
1
3

Chocolatine-24B

License: Apache 2.0. Library: transformers.

license:apache-2.0
1
2

Neurallaymons-7B-slerp

license:apache-2.0
1
1

ANIMA-biodesign-7B-slerp

license:apache-2.0
1
1

LadybirdPercival-7B-slerp

license:apache-2.0
1
1

StarlingMaxLimmy2-7B-slerp

license:apache-2.0
1
1

JupiterINEX12-12B-MoE

license:apache-2.0
1
1

YamMaths-7B-slerp

YamMaths-7B-slerp is a merge of the following models using LazyMergekit: automerger/YamshadowExperiment28-7B and Kukedlc/NeuralMaths-Experiment-7b.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 20.38 |
| IFEval (0-Shot) | 41.48 |
| BBH (3-Shot) | 32.13 |
| MATH Lvl 5 (4-Shot) | 7.48 |
| GPQA (0-shot) | 4.03 |
| MuSR (0-shot) | 13.46 |
| MMLU-PRO (5-shot) | 23.68 |

license:apache-2.0
1
1

Mistralmash1-7B-s

License: Apache 2.0. Tags: merge.

license:apache-2.0
1
1

Ph3della-14B

license:apache-2.0
1
1

HomerSlerp2-7B

License: Apache 2.0. Library: transformers.

license:apache-2.0
1
1

Marco-01-slerp5-7B

license:apache-2.0
1
1

Rogerlee-2.5-7B-slerp

license:apache-2.0
1
0

PrometheusLaser-7B-slerp

license:apache-2.0
1
0

MistralMerge-7B-stock

license:apache-2.0
1
0

StarlingDolphin-7B-slerp

license:apache-2.0
1
0

RogerMerge-7B-slerp

RogerMerge-7B-slerp is a merge of the following models using LazyMergekit: allknowingroger/MultiverseEx26-7B-slerp and allknowingroger/PercivalMelodias-7B-slerp.

license:apache-2.0
1
0

Multimerge-12B-MoE

license:apache-2.0
1
0

Multimerge-Neurallaymons-12B-MoE

license:apache-2.0
1
0

Neuralcoven-7B-slerp

Neuralcoven-7B-slerp is a merge of the following models using LazyMergekit: allknowingroger/Neurallaymons-7B-slerp and raidhon/coven_7b_128k_orpo_alpha.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 20.16 |
| IFEval (0-Shot) | 38.59 |
| BBH (3-Shot) | 33.80 |
| MATH Lvl 5 (4-Shot) | 6.65 |
| GPQA (0-shot) | 4.70 |
| MuSR (0-shot) | 11.76 |
| MMLU-PRO (5-shot) | 25.49 |

license:apache-2.0
1
0

Neuralmultiverse-7B-slerp

Neuralmultiverse-7B-slerp is a merge of the following models using LazyMergekit: allknowingroger/MultiverseEx26-7B-slerp and allknowingroger/NeuralCeptrix-7B-slerp.

license:apache-2.0
1
0

MultiCalm-7B-slerp

Tags: merge, mergekit.

license:apache-2.0
1
0

MultiMash-12B-slerp

Tags: merge, mergekit.

license:apache-2.0
1
0

MultiMash2-12B-slerp

License: Apache 2.0. Tags: merge.

license:apache-2.0
1
0

MultiMash6-12B-slerp

License: Apache 2.0. Tags: merge.

license:apache-2.0
1
0

Meme-7B-slerp

Tags: merge, mergekit

license:apache-2.0
1
0

MultiMash8-13B-slerp

License: Apache 2.0. Tags: merge.

license:apache-2.0
1
0

MultiMash9-13B-slerp

MultiMash9-13B-slerp is a merge of the following models using LazyMergekit: zhengr/MixTAO-7Bx2-MoE-v8.1 and allknowingroger/Calmex26merge-12B-MoE.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 20.53 |
| IFEval (0-Shot) | 41.88 |
| BBH (3-Shot) | 32.55 |
| MATH Lvl 5 (4-Shot) | 7.18 |
| GPQA (0-shot) | 4.03 |
| MuSR (0-shot) | 14.21 |
| MMLU-PRO (5-shot) | 23.33 |

license:apache-2.0
1
0

Ph3unsloth-3B-slerp

Ph3unsloth-3B-slerp is a merge of the following models using LazyMergekit: anandanand84/otcjsonphi3 and udayakumar-cyb/meetingmodel.

license:apache-2.0
1
0

MistralPhi3-11B

License: Apache 2.0. Tags: merge.

license:apache-2.0
1
0

Phi3mash1-17B-pass

Phi3-19B-pass is a merge of the following models using LazyMergekit: Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO and Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO (the same model, stacked via passthrough; see the sketch below).

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 21.35 |
| IFEval (0-Shot) | 18.84 |
| BBH (3-Shot) | 45.25 |
| MATH Lvl 5 (4-Shot) | 0.00 |
| GPQA (0-shot) | 9.28 |
| MuSR (0-shot) | 14.84 |
| MMLU-PRO (5-shot) | 39.88 |

license:apache-2.0
1
0
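
The "-pass" models on this page are passthrough merges: instead of interpolating weights, mergekit concatenates layer slices into a deeper network, which is how a 14B checkpoint becomes a ~17B or 19B one. A hypothetical config in that spirit; the layer ranges and dtype are invented for illustration, not taken from the author's card:

```python
# Hypothetical passthrough ("frankenmerge") config for mergekit. Stacking
# overlapping layer ranges of the same checkpoint deepens the model; the
# exact ranges here are illustrative assumptions, not the author's.
import subprocess
import textwrap

config = textwrap.dedent("""\
    slices:
      - sources:
          - model: Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
            layer_range: [0, 30]
      - sources:
          - model: Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
            layer_range: [10, 40]
    merge_method: passthrough
    dtype: bfloat16
    """)

with open("passthrough.yaml", "w") as f:
    f.write(config)

subprocess.run(["mergekit-yaml", "passthrough.yaml", "./Phi3mash1-17B-pass"], check=True)
```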

Ph3merge-14B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes jpacifico/Chocolatine-14B-Instruct-4k-DPO and failspy/Phi-3-medium-4k-instruct-abliterated-v3.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 23.53 |
| IFEval (0-Shot) | 27.01 |
| BBH (3-Shot) | 48.88 |
| MATH Lvl 5 (4-Shot) | 0.15 |
| GPQA (0-shot) | 11.74 |
| MuSR (0-shot) | 13.28 |
| MMLU-PRO (5-shot) | 40.12 |

license:apache-2.0
1
0

Ph3task3-14B

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as the base. The merge includes jpacifico/Chocolatine-14B-Instruct-4k-DPO.

Open LLM Leaderboard evaluation results (detailed results can be found here):

| Metric | Value |
|---|---:|
| Avg. | 30.21 |
| IFEval (0-Shot) | 49.62 |
| BBH (3-Shot) | 48.00 |
| MATH Lvl 5 (4-Shot) | 14.58 |
| GPQA (0-shot) | 12.19 |
| MuSR (0-shot) | 14.95 |
| MMLU-PRO (5-shot) | 41.90 |

license:apache-2.0
1
0

Jallabi-40B

llama
1
0

orca_mini_v7_72b-F32

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes pankajmathur/orca_mini_v7_72b.

license:apache-2.0
1
0

Gemmaslerp3-9B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes sam-paech/Quill-v1 and sam-paech/Delirium-v1.

license:apache-2.0
1
0

TheBeagle-v2beta-32B-Slerp

1
0

Qwenslerp3-17B

license:apache-2.0
1
0

Marco-01-slerp2-7B

license:apache-2.0
1
0

Gemma2Slerp4-27B

License: Apache 2.0. Library: transformers.

license:apache-2.0
1
0

Ph3della3-14B

license:apache-2.0
0
2

Qwen-modelstock2-15B

license:apache-2.0
0
2

JupiterMerge-7B-slerp

license:apache-2.0
0
1

DolphinChat-7B-slerp

license:apache-2.0
0
1

Mistralchat-7B-slerp

license:apache-2.0
0
1

NeuralDolphin-7B-slerp

license:apache-2.0
0
1

StarlingMaxLimmy-7B-slerp

license:apache-2.0
0
1

DelexaMultiverse-12B-MoE

license:apache-2.0
0
1

RogerWizard-12B-MoE

license:apache-2.0
0
1

CeptrixBeagle-12B-MoE

license:apache-2.0
0
1

Qwen2.5pass-50B

license:apache-2.0
0
1

Llama-3.1-Nemotron-70B-Instruct-HF-F32

llama
0
1

Qwenslerp2-14B-F32

license:apache-2.0
0
1

Qwen2.5-32B-F32

license:apache-2.0
0
1

Qwenslerp1-7B

license:apache-2.0
0
1

Qwen2.5-7B-task5

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with Qwen/Qwen2.5-7B as the base. The merge includes Tsunami-th/Tsunami-0.5x-7B-Instruct and jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.2.

license:apache-2.0
0
1

Qwen2.5-7B-task6

license:apache-2.0
0
1

Qwen2.5-7B-task7

This is a merge of pre-trained language models created with mergekit, using the task arithmetic merge method with Qwen/Qwen2.5-7B as the base. The merge includes CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES and macadeliccc/Samantha-Qwen-2-7B.

license:apache-2.0
0
1

QwenSlerp6-7B

license:apache-2.0
0
1

QwenSlerp7-7B

license:apache-2.0
0
1

QwenSlerp8-7B

license:apache-2.0
0
1

HomerSlerp3-7B

License: Apache 2.0. Library: transformers.

license:apache-2.0
0
1

Marco-01-slerp4-7B

license:apache-2.0
0
1

Marco-01-slerp6-7B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes allknowingroger/Qwen2.5-7B-task2 and AIDC-AI/Marco-o1.

license:apache-2.0
0
1

Marco-01-slerp7-7B

license:apache-2.0
0
1

HomerSlerp5-7B

license:apache-2.0
0
1

DeepHermes-3-Llama-3-slerp-8B

This is a merge of pre-trained language models created with mergekit, using the SLERP merge method. The merge includes hotmailuser/LlamaStock-8B and NousResearch/DeepHermes-3-Llama-3-8B-Preview.

llama
0
1