allknowingroger
MultiverseEx26-7B-slerp
MultiverseEx26-7B-slerp is a merge of the following models using LazyMergekit:
- yam-peleg/Experiment26-7B
- MTSAIR/multi_verse_model
Qwen2.5-slerp-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- v000000/Qwen2.5-Lumen-14B
- Qwen/Qwen2.5-14B-Instruct

The following YAML configuration was used to produce this model:
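The card's YAML block was not captured here. A representative mergekit SLERP configuration for this pair would look like the sketch below; the layer ranges, interpolation weights `t`, and `dtype` are illustrative assumptions, not the card's actual settings:

```yaml
slices:
  - sources:
      - model: v000000/Qwen2.5-Lumen-14B
        layer_range: [0, 48]
      - model: Qwen/Qwen2.5-14B-Instruct
        layer_range: [0, 48]
merge_method: slerp
base_model: Qwen/Qwen2.5-14B-Instruct
parameters:
  t:                     # interpolation factor: 0 = base model, 1 = other model
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5         # default for all remaining tensors
dtype: bfloat16
```

A config like this is run with `mergekit-yaml config.yaml ./output-model`; SLERP interpolates each pair of weight tensors along the great circle between them rather than averaging linearly.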
Strangecoven-7B-slerp
Strangecoven-7B-slerp is a merge of the following models using LazyMergekit:
- Gille/StrangeMerges_16-7B-slerp
- raidhon/coven_7b_128k_orpo_alpha

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.16 |
| IFEval (0-Shot)     | 37.46 |
| BBH (3-Shot)        | 34.83 |
| MATH Lvl 5 (4-Shot) |  6.72 |
| GPQA (0-shot)       |  5.26 |
| MuSR (0-shot)       | 10.42 |
| MMLU-PRO (5-shot)   | 26.27 |
Qwenslerp4-14B
HomerSlerp4-7B
License: Apache 2.0. Library: transformers.
Gemma2Slerp1-27B
License: Apache 2.0. Library: transformers.
Gemma2Slerp3-27B
License: Apache 2.0. Library: transformers.
QwenStock2-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using CultriX/SeQwence-14Bv1 as a base. The following models were included in the merge:
- allknowingroger/Qwenslerp2-14B
- CultriX/Qwen2.5-14B-MegaMerge-pt2
- CultriX/Qwen2.5-14B-MergeStock
- CultriX/Qwestion-14B
- allknowingroger/Qwen2.5-slerp-14B
- allknowingroger/Qwenslerp3-14B
- CultriX/Qwen2.5-14B-Wernicke

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 36.93 |
| IFEval (0-Shot)     | 55.63 |
| BBH (3-Shot)        | 50.60 |
| MATH Lvl 5 (4-Shot) | 29.91 |
| GPQA (0-shot)       | 17.23 |
| MuSR (0-shot)       | 19.28 |
| MMLU-PRO (5-shot)   | 48.95 |
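The YAML for this Model Stock merge was not reproduced in the scrape. A minimal sketch of a mergekit `model_stock` configuration with the listed models (the `dtype` is an assumption) would be:

```yaml
models:
  - model: allknowingroger/Qwenslerp2-14B
  - model: CultriX/Qwen2.5-14B-MegaMerge-pt2
  - model: CultriX/Qwen2.5-14B-MergeStock
  - model: CultriX/Qwestion-14B
  - model: allknowingroger/Qwen2.5-slerp-14B
  - model: allknowingroger/Qwenslerp3-14B
  - model: CultriX/Qwen2.5-14B-Wernicke
merge_method: model_stock
base_model: CultriX/SeQwence-14Bv1
dtype: bfloat16
```

Model Stock takes no per-model weights: it computes an interpolation ratio per layer from the geometry of the fine-tuned checkpoints relative to the base, which is why the config is just a model list plus a base.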
Ph3della5-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the della_linear merge method using jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as a base. The following models were included in the merge:
- jpacifico/Chocolatine-14B-Instruct-DPO-v1.1

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.92 |
| IFEval (0-Shot)     | 47.99 |
| BBH (3-Shot)        | 48.41 |
| MATH Lvl 5 (4-Shot) | 14.35 |
| GPQA (0-shot)       | 12.30 |
| MuSR (0-shot)       | 14.36 |
| MMLU-PRO (5-shot)   | 42.08 |
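The original `della_linear` config is not shown above. A sketch of what such a mergekit configuration looks like for this pair; the `weight`, `density`, `epsilon`, `lambda`, and `dtype` values are illustrative assumptions:

```yaml
models:
  - model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.1
    parameters:
      weight: 1.0      # scale of this model's delta from the base
      density: 0.6     # fraction of delta parameters kept after pruning
merge_method: della_linear
base_model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
  epsilon: 0.05        # width of the magnitude-based drop-probability band
  lambda: 1.0          # rescaling factor applied to the merged deltas
dtype: bfloat16
```

DELLA prunes each model's task-vector (delta from the base) with magnitude-aware random dropping before linearly combining what survives, which is what the `density`/`epsilon` knobs control.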
Gemmaslerp2-9B
GemmaSlerp5-10B
License: Apache 2.0. Library: transformers.
Gemma2Slerp2-27B
License: Apache 2.0. Library: transformers.
DelexaMaths-12B-MoE
HomerSlerp1-7B
License: Apache 2.0. Library: transformers.
Gemma2Slerp2-2.6B
Base models: Lil-R/2_PRYMMAL-ECE-2B-SLERP-V1, Lil-R/2_PRYMMAL-ECE-2B-SLERP-V2.
ROGERphi-7B-slerp
ROGERphi-7B-slerp is a merge of the following models using LazyMergekit:
- rhysjones/phi-2-orange-v2
- mobiuslabs/aanaphi-v0.1

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.59 |
| IFEval (0-Shot)     | 38.61 |
| BBH (3-Shot)        | 32.82 |
| MATH Lvl 5 (4-Shot) |  6.65 |
| GPQA (0-shot)       |  5.15 |
| MuSR (0-shot)       | 17.53 |
| MMLU-PRO (5-shot)   | 22.81 |
NexusMistral2-7B-slerp
NeuralWestSeverus-7B-slerp
NeuralWestSeverus-7B-slerp is a merge of the following models using LazyMergekit:
- Kukedlc/Neural4gsm8k
- PetroGPT/WestSeverus-7B-DPO

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.60 |
| IFEval (0-Shot)     | 41.36 |
| BBH (3-Shot)        | 33.41 |
| MATH Lvl 5 (4-Shot) |  6.87 |
| GPQA (0-shot)       |  2.80 |
| MuSR (0-shot)       | 15.41 |
| MMLU-PRO (5-shot)   | 23.75 |
Ph3task2-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as a base. The following models were included in the merge:
- failspy/Phi-3-medium-4k-instruct-abliterated-v3

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 28.25 |
| IFEval (0-Shot)     | 47.13 |
| BBH (3-Shot)        | 44.08 |
| MATH Lvl 5 (4-Shot) | 12.46 |
| GPQA (0-shot)       | 10.74 |
| MuSR (0-shot)       | 16.62 |
| MMLU-PRO (5-shot)   | 38.44 |
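The task-arithmetic YAML itself is missing from the scrape. A minimal mergekit `task_arithmetic` configuration for this card would look roughly like the following; the `weight` and `dtype` values are assumptions, not the card's actual numbers:

```yaml
models:
  - model: failspy/Phi-3-medium-4k-instruct-abliterated-v3
    parameters:
      weight: 1.0      # scale applied to this model's task vector
merge_method: task_arithmetic
base_model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
dtype: bfloat16
```

Task arithmetic subtracts the base from each listed model to get a "task vector", scales each vector by its `weight`, and adds the sum back onto the base.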
Gemmaslerp-9B
Qwenslerp3-7B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- sethuiyer/Qwen2.5-7B-Anvita
- fblgit/cybertron-v4-qw7B-MGS

The following YAML configuration was used to produce this model:
Marco-01-slerp1-7B
License: Apache 2.0. Library: transformers.
Qwenslerp2-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- rombodawg/Rombos-LLM-V2.6-Qwen-14b
- v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno

The following YAML configuration was used to produce this model:
Qwen2.5-7B-task4
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using Qwen/Qwen2.5-7B as a base. The following models were included in the merge:
- KPEP/krx-qwen-2.5-7b-v1.4.2
- Tsunami-th/Tsunami-0.5x-7B-Instruct

The following YAML configuration was used to produce this model:
Qwen2.5-7B-task8
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using Qwen/Qwen2.5-7B-Instruct as a base. The following models were included in the merge:
- huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
- Orion-zhen/Qwen2.5-7B-Instruct-Uncensored

The following YAML configuration was used to produce this model:
QwenSlerp12-7B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- allknowingroger/Qwenslerp3-7B
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.54 |
| IFEval (0-Shot)     | 50.76 |
| BBH (3-Shot)        | 36.41 |
| MATH Lvl 5 (4-Shot) | 26.74 |
| GPQA (0-shot)       |  8.72 |
| MuSR (0-shot)       | 16.13 |
| MMLU-PRO (5-shot)   | 38.45 |
QwenSlerp6-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- CultriX/SeQwence-14Bv1
- allknowingroger/Qwenslerp2-14B

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 39.02 |
| IFEval (0-Shot)     | 68.67 |
| BBH (3-Shot)        | 47.59 |
| MATH Lvl 5 (4-Shot) | 34.14 |
| GPQA (0-shot)       | 16.44 |
| MuSR (0-shot)       | 18.32 |
| MMLU-PRO (5-shot)   | 48.95 |
LimyQstar-7B-slerp
Tags: merge, mergekit.
FrankenRoger-10B-passthrough
Neuraljack-12B-MoE
MixTaoTruthful-13B-slerp
License: Apache 2.0. Tags: merge.
MultiMash12-13B-slerp
Codestral-19B-pass
Qwenslerp2-7B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- fblgit/cybertron-v4-qw7B-MGS
- Tsunami-th/Tsunami-0.5x-7B-Instruct

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.42 |
| IFEval (0-Shot)     | 52.94 |
| BBH (3-Shot)        | 37.44 |
| MATH Lvl 5 (4-Shot) | 31.87 |
| GPQA (0-shot)       |  8.39 |
| MuSR (0-shot)       | 12.82 |
| MMLU-PRO (5-shot)   | 39.06 |
LlamaSlerp1-8B
MixTAO-19B-pass
License: Apache 2.0. Tags: merge.
Gemmaslerp4-10B
QwenStock1-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using Qwen/Qwen2.5-14B as a base. The following models were included in the merge:
- CultriX/SeQwence-14Bv1
- CultriX/Qwen2.5-14B-Wernicke
- CultriX/Qwen2.5-14B-MegaMerge-pt2
- CultriX/Qwestion-14B
- allknowingroger/Qwen2.5-slerp-14B
- Qwen/Qwen2.5-14B-Instruct
- allknowingroger/Qwenslerp3-14B
- allknowingroger/Qwenslerp2-14B
- CultriX/Qwen2.5-14B-MergeStock

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 36.76 |
| IFEval (0-Shot)     | 56.34 |
| BBH (3-Shot)        | 50.08 |
| MATH Lvl 5 (4-Shot) | 29.38 |
| GPQA (0-shot)       | 16.89 |
| MuSR (0-shot)       | 18.79 |
| MMLU-PRO (5-shot)   | 49.09 |
Mistralmash2-7B-s
License: Apache 2.0. Tags: merge.
Qwenslerp3-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- allknowingroger/Qwen2.5-slerp-14B
- rombodawg/Rombos-LLM-V2.6-Qwen-14b

The following YAML configuration was used to produce this model:
CalmeRys-78B-Orpo-F32
Qwen-modelstock-15B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using allknowingroger/Qwenslerp2-14B as a base. The following models were included in the merge:
- allknowingroger/Qwenslerp3-14B
- allknowingroger/Qwen2.5-slerp-14B

The following YAML configuration was used to produce this model:
Qwen2.5-7B-task2
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using Qwen/Qwen2.5-7B-Instruct as a base. The following models were included in the merge:
- fblgit/cybertron-v4-qw7B-MGS
- Tsunami-th/Tsunami-0.5x-7B-Instruct

The following YAML configuration was used to produce this model:
HomerSlerp6-7B
QwenSlerp5-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- CultriX/Qwestion-14B
- CultriX/SeQwence-14Bv1

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 38.94 |
| IFEval (0-Shot)     | 71.19 |
| BBH (3-Shot)        | 47.39 |
| MATH Lvl 5 (4-Shot) | 33.16 |
| GPQA (0-shot)       | 15.32 |
| MuSR (0-shot)       | 17.81 |
| MMLU-PRO (5-shot)   | 48.78 |
QwenStock3-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using CultriX/SeQwence-14Bv1 as a base. The following models were included in the merge:
- CultriX/Qwen2.5-14B-MergeStock
- allknowingroger/QwenStock1-14B
- CultriX/Qwen2.5-14B-Wernicke
- allknowingroger/QwenStock2-14B
- allknowingroger/Qwenslerp2-14B
- CultriX/Qwen2.5-14B-MegaMerge-pt2
- CultriX/Qwestion-14B

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 36.97 |
| IFEval (0-Shot)     | 56.15 |
| BBH (3-Shot)        | 50.58 |
| MATH Lvl 5 (4-Shot) | 29.68 |
| GPQA (0-shot)       | 17.11 |
| MuSR (0-shot)       | 19.11 |
| MMLU-PRO (5-shot)   | 49.20 |
limyClown-7B-slerp
limyClown-7B-slerp is a merge of the following models using LazyMergekit:
- liminerity/M7-7b
- CorticalStack/shadow-clown-7B-slerp
MultiMerge-7B-slerp
MultiMerge-7B-slerp is a merge of the following models using LazyMergekit:
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/limyClown-7B-slerp
StarlingMaths-12B-MoE
Lamma3merge-15B-MoE
WestlakeMaziyar-7B-slerp
WestlakeMaziyar-7B-slerp is a merge of the following models using LazyMergekit:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.04 |
| IFEval (0-Shot)     | 48.38 |
| BBH (3-Shot)        | 33.34 |
| MATH Lvl 5 (4-Shot) |  5.82 |
| GPQA (0-shot)       |  7.16 |
| MuSR (0-shot)       | 14.49 |
| MMLU-PRO (5-shot)   | 23.08 |
MultiMash7-12B-slerp
License: Apache 2.0. Tags: merge.
Mistral3mash1-7B-slerp
MultiMash10-13B-slerp
License: Apache 2.0. Tags: merge.
MultiMash11-13B-slerp
License: Apache 2.0. Tags: merge.
Ph3task1-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as a base. The following models were included in the merge:
- jpacifico/Chocolatine-14B-Instruct-4k-DPO
- failspy/Phi-3-medium-4k-instruct-abliterated-v3

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.08 |
| IFEval (0-Shot)     | 46.95 |
| BBH (3-Shot)        | 47.93 |
| MATH Lvl 5 (4-Shot) | 13.90 |
| GPQA (0-shot)       | 13.42 |
| MuSR (0-shot)       | 16.81 |
| MMLU-PRO (5-shot)   | 41.49 |
llama3Yi-40B
Yislerp2-34B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- 01-ai/Yi-1.5-34B-Chat
- CombinHorizon/YiSM-blossom5.1-34B-SLERP

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.10 |
| IFEval (0-Shot)     | 39.93 |
| BBH (3-Shot)        | 47.20 |
| MATH Lvl 5 (4-Shot) | 21.00 |
| GPQA (0-shot)       | 15.21 |
| MuSR (0-shot)       | 15.85 |
| MMLU-PRO (5-shot)   | 41.38 |
Yibuddy-35B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- 01-ai/Yi-1.5-34B-Chat
- BattlescarZa/medibuddy-llm-34B

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 27.70 |
| IFEval (0-Shot)     | 42.35 |
| BBH (3-Shot)        | 42.81 |
| MATH Lvl 5 (4-Shot) | 12.24 |
| GPQA (0-shot)       | 14.09 |
| MuSR (0-shot)       | 15.97 |
| MMLU-PRO (5-shot)   | 38.77 |
Yislerp4-34B
Qwen2.5-7B-task3
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using Qwen/Qwen2.5-7B as a base. The following models were included in the merge:
- Tsunami-th/Tsunami-0.5x-7B-Instruct
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.1

The following YAML configuration was used to produce this model:
QwenTask1-32B
QwenTask2-32B
SmolVLM-Base-vqav2
Heart_Stolen-8B-task
Chocolatine-24B
License: Apache 2.0. Library: transformers.
Neurallaymons-7B-slerp
ANIMA-biodesign-7B-slerp
LadybirdPercival-7B-slerp
StarlingMaxLimmy2-7B-slerp
JupiterINEX12-12B-MoE
YamMaths-7B-slerp
YamMaths-7B-slerp is a merge of the following models using LazyMergekit:
- automerger/YamshadowExperiment28-7B
- Kukedlc/NeuralMaths-Experiment-7b

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.38 |
| IFEval (0-Shot)     | 41.48 |
| BBH (3-Shot)        | 32.13 |
| MATH Lvl 5 (4-Shot) |  7.48 |
| GPQA (0-shot)       |  4.03 |
| MuSR (0-shot)       | 13.46 |
| MMLU-PRO (5-shot)   | 23.68 |
Mistralmash1-7B-s
License: Apache 2.0. Tags: merge.
Ph3della-14B
HomerSlerp2-7B
License: Apache 2.0. Library: transformers.
Marco-01-slerp5-7B
Rogerlee-2.5-7B-slerp
PrometheusLaser-7B-slerp
MistralMerge-7B-stock
StarlingDolphin-7B-slerp
RogerMerge-7B-slerp
RogerMerge-7B-slerp is a merge of the following models using LazyMergekit:
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/PercivalMelodias-7B-slerp
Multimerge-12B-MoE
Multimerge-Neurallaymons-12B-MoE
Neuralcoven-7B-slerp
Neuralcoven-7B-slerp is a merge of the following models using LazyMergekit:
- allknowingroger/Neurallaymons-7B-slerp
- raidhon/coven_7b_128k_orpo_alpha

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.16 |
| IFEval (0-Shot)     | 38.59 |
| BBH (3-Shot)        | 33.80 |
| MATH Lvl 5 (4-Shot) |  6.65 |
| GPQA (0-shot)       |  4.70 |
| MuSR (0-shot)       | 11.76 |
| MMLU-PRO (5-shot)   | 25.49 |
Neuralmultiverse-7B-slerp
Neuralmultiverse-7B-slerp is a merge of the following models using LazyMergekit:
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/NeuralCeptrix-7B-slerp
MultiCalm-7B-slerp
Tags: merge, mergekit.
MultiMash-12B-slerp
Tags: merge, mergekit.
MultiMash2-12B-slerp
License: Apache 2.0. Tags: merge.
MultiMash6-12B-slerp
License: Apache 2.0. Tags: merge.
Meme-7B-slerp
Tags: merge, mergekit.
MultiMash8-13B-slerp
License: Apache 2.0. Tags: merge.
MultiMash9-13B-slerp
MultiMash9-13B-slerp is a merge of the following models using LazyMergekit:
- zhengr/MixTAO-7Bx2-MoE-v8.1
- allknowingroger/Calmex26merge-12B-MoE

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.53 |
| IFEval (0-Shot)     | 41.88 |
| BBH (3-Shot)        | 32.55 |
| MATH Lvl 5 (4-Shot) |  7.18 |
| GPQA (0-shot)       |  4.03 |
| MuSR (0-shot)       | 14.21 |
| MMLU-PRO (5-shot)   | 23.33 |
Ph3unsloth-3B-slerp
Ph3unsloth-3B-slerp is a merge of the following models using LazyMergekit:
- anandanand84/otcjsonphi3
- udayakumar-cyb/meetingmodel
MistralPhi3-11B
License: Apache 2.0. Tags: merge.
Phi3mash1-17B-pass
Phi3mash1-17B-pass is a merge of the following models using LazyMergekit:
- Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
- Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.35 |
| IFEval (0-Shot)     | 18.84 |
| BBH (3-Shot)        | 45.25 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       |  9.28 |
| MuSR (0-shot)       | 14.84 |
| MMLU-PRO (5-shot)   | 39.88 |
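Listing the same model twice is how passthrough (frankenmerge) configs stack overlapping layer ranges to grow a model. The card's YAML is not shown; a sketch of what such a mergekit `passthrough` config looks like, with illustrative layer ranges (the actual split used for this model is unknown):

```yaml
slices:
  - sources:
      - model: Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
        layer_range: [0, 24]     # lower layers of the donor
  - sources:
      - model: Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
        layer_range: [16, 40]    # upper layers, overlapping the first slice
merge_method: passthrough
dtype: bfloat16
```

Passthrough copies the selected layers verbatim and concatenates them, so a 14B donor with duplicated mid-layers yields the larger ~17B result in the title.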
Ph3merge-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- jpacifico/Chocolatine-14B-Instruct-4k-DPO
- failspy/Phi-3-medium-4k-instruct-abliterated-v3

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.53 |
| IFEval (0-Shot)     | 27.01 |
| BBH (3-Shot)        | 48.88 |
| MATH Lvl 5 (4-Shot) |  0.15 |
| GPQA (0-shot)       | 11.74 |
| MuSR (0-shot)       | 13.28 |
| MMLU-PRO (5-shot)   | 40.12 |
Ph3task3-14B
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as a base. The following models were included in the merge:
- jpacifico/Chocolatine-14B-Instruct-4k-DPO

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.21 |
| IFEval (0-Shot)     | 49.62 |
| BBH (3-Shot)        | 48.00 |
| MATH Lvl 5 (4-Shot) | 14.58 |
| GPQA (0-shot)       | 12.19 |
| MuSR (0-shot)       | 14.95 |
| MMLU-PRO (5-shot)   | 41.90 |
Jallabi-40B
orca_mini_v7_72b-F32
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- pankajmathur/orca_mini_v7_72b

The following YAML configuration was used to produce this model:
Gemmaslerp3-9B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- sam-paech/Quill-v1
- sam-paech/Delirium-v1

The following YAML configuration was used to produce this model:
TheBeagle-v2beta-32B-Slerp
Qwenslerp3-17B
Marco-01-slerp2-7B
Gemma2Slerp4-27B
License: Apache 2.0. Library: transformers.
Ph3della3-14B
Qwen-modelstock2-15B
JupiterMerge-7B-slerp
DolphinChat-7B-slerp
Mistralchat-7B-slerp
NeuralDolphin-7B-slerp
StarlingMaxLimmy-7B-slerp
DelexaMultiverse-12B-MoE
RogerWizard-12B-MoE
CeptrixBeagle-12B-MoE
Qwen2.5pass-50B
Llama-3.1-Nemotron-70B-Instruct-HF-F32
Qwenslerp2-14B-F32
Qwen2.5-32B-F32
Qwenslerp1-7B
Qwen2.5-7B-task5
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using Qwen/Qwen2.5-7B as a base. The following models were included in the merge:
- Tsunami-th/Tsunami-0.5x-7B-Instruct
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.2

The following YAML configuration was used to produce this model:
Qwen2.5-7B-task6
Qwen2.5-7B-task7
This is a merge of pre-trained language models created using mergekit. This model was merged using the task arithmetic merge method using Qwen/Qwen2.5-7B as a base. The following models were included in the merge:
- CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES
- macadeliccc/Samantha-Qwen-2-7B

The following YAML configuration was used to produce this model:
QwenSlerp6-7B
QwenSlerp7-7B
QwenSlerp8-7B
HomerSlerp3-7B
License: Apache 2.0. Library: transformers.
Marco-01-slerp4-7B
Marco-01-slerp6-7B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- allknowingroger/Qwen2.5-7B-task2
- AIDC-AI/Marco-o1

The following YAML configuration was used to produce this model:
Marco-01-slerp7-7B
HomerSlerp5-7B
DeepHermes-3-Llama-3-slerp-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- hotmailuser/LlamaStock-8B
- NousResearch/DeepHermes-3-Llama-3-8B-Preview

The following YAML configuration was used to produce this model: