BeagSake-7B
BeagSake-7B is a merge of the following models using LazyMergekit:

* shadowml/BeagleSempra-7B
* shadowml/WestBeagle-7B

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 75.38 |
| AI2 Reasoning Challenge (25-Shot) | 72.44 |
| HellaSwag (10-Shot)               | 88.39 |
| MMLU (5-Shot)                     | 65.23 |
| TruthfulQA (0-shot)               | 72.27 |
| Winogrande (5-shot)               | 82.16 |
| GSM8k (5-shot)                    | 71.80 |
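A minimal usage sketch with 🤗 Transformers follows. The repo id `shadowml/BeagSake-7B` and the presence of a chat template in the merged tokenizer are assumptions, not stated above; if no chat template is available, pass a plain prompt string instead.

```python
# Minimal text-generation sketch (assumed repo id: shadowml/BeagSake-7B).
import torch
from transformers import AutoTokenizer, pipeline

model_id = "shadowml/BeagSake-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",
)

# Assumes the merged tokenizer carries a chat template (an assumption, not confirmed by the card).
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```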
Mixolar-4x7b
This model is a Mixture of Experts (MoE) made with mergekit (mixtral branch). It uses the following base models:

* kyujinpy/Sakura-SOLAR-Instruct
* jeonsworld/CarbonVillain-en-10.7B-v1
* rishiraj/meow
* kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 74.18 |
| AI2 Reasoning Challenge (25-Shot) | 71.08 |
| HellaSwag (10-Shot)               | 88.44 |
| MMLU (5-Shot)                     | 66.29 |
| TruthfulQA (0-shot)               | 71.81 |
| Winogrande (5-shot)               | 83.58 |
| GSM8k (5-shot)                    | 63.91 |
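A minimal loading sketch for the MoE checkpoint follows. The repo id `shadowml/Mixolar-4x7b` and the SOLAR-style `### User / ### Assistant` prompt format are assumptions inferred from the heading and the base models listed above, not facts stated in this card.

```python
# Sketch of loading the MoE checkpoint directly (assumed repo id: shadowml/Mixolar-4x7b).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shadowml/Mixolar-4x7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # shard the expert weights across available devices
)

# The SOLAR-style prompt format is an assumption carried over from Sakura-SOLAR-Instruct.
prompt = "### User:\nExplain what a mixture-of-experts layer does.\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```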