ssmits

52 models

| Model | License / Tag | Downloads | Likes |
|:---|:---|---:|---:|
| Qwen2-7B-Instruct-embed-base | license:apache-2.0 | 11,891 | 4 |
| Falcon2-5.5B-Polish | license:apache-2.0 | 1,299 | 1 |
| Qwen2.5-7B-embed-base | license:apache-2.0 | 289 | 1 |
| Qwen2.5-7B-Instruct-embed-base | license:apache-2.0 | 125 | 1 |
| Falcon2-5.5B-Czech-GGUF | llama-cpp | 30 | 0 |
| Falcon2-5.5B-Swedish-GGUF | llama-cpp | 21 | 0 |
| Phi-3-medium-4k-instruct-Q4_K_M-GGUF | llama-cpp | 20 | 1 |
| Falcon2-5.5B-Portuguese-GGUF | llama-cpp | 19 | 1 |
| Falcon2-5.5B-Italian-GGUF | llama-cpp | 16 | 0 |
| Falcon2-5.5B-Polish-GGUF | llama-cpp | 16 | 0 |
| Phi-3-medium-128k-instruct-Q4_K_M-GGUF | llama-cpp | 15 | 1 |
| Falcon2-5.5B-French-GGUF | llama-cpp | 13 | 0 |
| Falcon2-5.5B-multilingual-GGUF | llama-cpp | 13 | 0 |
| Falcon2-5.5B-Dutch-GGUF | — | 11 | 1 |
| Falcon2-5.5B-Romanian-GGUF | llama-cpp | 11 | 0 |

Qwen2.5-95B-Instruct

Qwen2.5-95B-Instruct is a Qwen/Qwen2.5-72B-Instruct self-merge made with MergeKit. The layer ranges chosen for this merge were inspired by a rough estimate of the layer similarity analysis of ssmits/Falcon2-5.5B-multilingual. Layer similarity analysis examines the outputs of different layers in a neural network to determine how similar or different they are, which helps identify the layers that contribute most to the model's performance. In the case of the Falcon-11B model, this analysis across multiple languages revealed that the first half of the layers were the most important for maintaining performance. The same analysis can also be used to slice the layers more rigidly and add extra layers for optimal next-token prediction, potentially yielding a model architecture that is more creative and powerful. (A rough sketch of such a passthrough merge config and of the layer similarity analysis follows the benchmark summary below.)

Related self-merges that inspired this model:

- alpindale/goliath-120b
- cognitivecomputations/MegaDolphin-120b
- mlabonne/Meta-Llama-3-120B-Instruct

Special thanks to Eric Hartford for both inspiring and evaluating the original model, to Charles Goddard for creating MergeKit, and to Mathieu Labonne for creating the Meta-Llama-3-120B-Instruct model that served as the main inspiration for this merge.

This model is probably well suited to creative writing tasks. It uses the Qwen chat template with a default context window of 128K (see the usage sketch at the end of this card). It could be quite creative and perhaps even better than the 72B model at some tasks.

Quantized and converted versions:

- GGUF: [Link to GGUF model]
- EXL2: [Link to EXL2 model]
- mlx: [Link to mlx model]

🏆 Evaluation

This model has yet to be thoroughly evaluated. It is expected to excel at creative writing but may have limitations in other tasks; use it with caution and don't expect it to outperform state-of-the-art models outside of specific creative use cases. Once the model has been tested more thoroughly, this section will be updated with:

- Links to evaluation threads on social media platforms
- Examples of the model's performance in creative writing tasks
- Comparisons with other large language models in various applications
- Community feedback and use cases

We encourage users to share their experiences and evaluations to help build a comprehensive understanding of the model's capabilities and limitations.

Initial benchmarks show interesting performance characteristics compared to the 72B model.

Strengths

The 95B model shows notable improvements in:

1. Mathematical reasoning
   - Up to 5.83x improvement in algebra tasks
   - 3.33x improvement in pre-algebra
   - Consistent gains across geometry, number theory, and probability tasks
   - Overall stronger performance in complex mathematical reasoning
2. Spatial and object understanding
   - 11% improvement in object placement tasks
   - 7% better at tabular data interpretation
   - Enhanced performance in logical deduction with multiple objects
3. Complex language tasks
   - 4% improvement in disambiguation tasks
   - 2% better at movie recommendations
   - Slight improvements in hyperbaton (complex word order) tasks
4. Creative and analytical reasoning
   - 10% improvement in murder mystery solving
   - Better performance in tasks requiring creative problem-solving

Areas for Consideration

While the model shows improvements in specific areas, the 72B model still performs better on many general language and reasoning tasks. The 95B version appears to excel particularly in mathematical and spatial reasoning while maintaining comparable performance elsewhere.
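For readers unfamiliar with how such a self-merge is specified, the snippet below sketches the general shape of a MergeKit passthrough configuration. It is illustrative only: the actual layer ranges used for Qwen2.5-95B-Instruct are not given in this card, so the ranges, file name, and output path are placeholders.

```python
# Illustrative sketch of a MergeKit "passthrough" self-merge config.
# The layer ranges below are placeholders, NOT the ones used for Qwen2.5-95B-Instruct.
import yaml

config = {
    "merge_method": "passthrough",  # copy layers through unchanged
    "dtype": "bfloat16",
    "slices": [
        # Each slice copies a block of layers from the same base model;
        # overlapping ranges are what grow the 80-layer 72B model toward ~95B.
        {"sources": [{"model": "Qwen/Qwen2.5-72B-Instruct", "layer_range": [0, 40]}]},
        {"sources": [{"model": "Qwen/Qwen2.5-72B-Instruct", "layer_range": [25, 55]}]},
        {"sources": [{"model": "Qwen/Qwen2.5-72B-Instruct", "layer_range": [40, 80]}]},
    ],
}

with open("qwen2.5-95b-merge.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# The merge itself would then be run with MergeKit's CLI, e.g.:
#   mergekit-yaml qwen2.5-95b-merge.yml ./Qwen2.5-95B-Instruct
```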
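The layer similarity analysis itself can be approximated with a short script: run a few prompts through a model with hidden states enabled and measure how similar the output of each layer is to the output of the layer before it. Layers that barely change the representation are natural candidates for duplication (or removal) in a merge. This is a minimal sketch assuming a standard transformers causal LM; the model ID and prompts are examples, not the ones used for the original Falcon-11B analysis.

```python
# Minimal sketch: cosine similarity between the outputs of consecutive layers,
# averaged over tokens and over a handful of example prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # example model; any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompts = [
    "The capital of France is",
    "Het weer in Rotterdam is vandaag",
    "Explain the rules of chess in one sentence.",
]

sims = None
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states is a tuple of (num_layers + 1) tensors of shape [1, seq, hidden]
    hs = torch.stack([h.float().cpu() for h in out.hidden_states])
    prev_layers, next_layers = hs[:-1], hs[1:]
    layer_sims = torch.nn.functional.cosine_similarity(
        prev_layers, next_layers, dim=-1
    ).mean(dim=(1, 2))  # one similarity score per layer transition
    sims = layer_sims if sims is None else sims + layer_sims

sims /= len(prompts)
for i, s in enumerate(sims.tolist()):
    print(f"layer {i:2d} -> {i + 1:2d}: mean cosine similarity {s:.4f}")
```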
Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|:--------------------|------:|
| Avg.                | 37.43 |
| IFEval (0-Shot)     | 84.31 |
| BBH (3-Shot)        | 58.53 |
| MATH Lvl 5 (4-Shot) |  6.04 |
| GPQA (0-shot)       | 15.21 |
| MuSR (0-shot)       | 13.61 |
| MMLU-PRO (5-shot)   | 46.85 |

Per-task comparison with Qwen2.5-72B-Instruct:

| Key | 72b Result | 95b Result | Difference | Which is Higher | Multiplier |
|:---|---:|---:|---:|:---|---:|
| leaderboard_musr.acc_norm,none | 0.419 | 0.427 | 0.008 | 95b | 1.02 |
| leaderboard_bbh_sports_understanding.acc_norm,none | 0.892 | 0.876 | -0.016 | 72b | 0.98 |
| leaderboard_bbh_logical_deduction_three_objects.acc_norm,none | 0.94 | 0.928 | -0.012 | 72b | 0.99 |
| leaderboard_math_geometry_hard.exact_match,none | 0 | 0.008 | 0.008 | 95b | 0.00 |
| leaderboard_gpqa.acc_norm,none | 0.375 | 0.364 | -0.011 | 72b | 0.97 |
| leaderboard_math_hard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
| leaderboard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
| leaderboard.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
| leaderboard.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
| leaderboard.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
| leaderboard.acc_norm,none | 0.641 | 0.622 | -0.020 | 72b | 0.97 |
| leaderboard.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
| leaderboard.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
| leaderboard_bbh_causal_judgement.acc_norm,none | 0.668 | 0.663 | -0.005 | 72b | 0.99 |
| leaderboard_bbh_salient_translation_error_detection.acc_norm,none | 0.668 | 0.588 | -0.080 | 72b | 0.88 |
| leaderboard_gpqa_extended.acc_norm,none | 0.372 | 0.364 | -0.007 | 72b | 0.98 |
| leaderboard_math_prealgebra_hard.exact_match,none | 0.047 | 0.155 | 0.109 | 95b | 3.33 |
| leaderboard_math_algebra_hard.exact_match,none | 0.02 | 0.114 | 0.094 | 95b | 5.83 |
| leaderboard_bbh_boolean_expressions.acc_norm,none | 0.936 | 0.92 | -0.016 | 72b | 0.98 |
| leaderboard_math_num_theory_hard.exact_match,none | 0 | 0.058 | 0.058 | 95b | 0.00 |
| leaderboard_bbh_movie_recommendation.acc_norm,none | 0.768 | 0.78 | 0.012 | 95b | 1.02 |
| leaderboard_math_counting_and_prob_hard.exact_match,none | 0 | 0.024 | 0.024 | 95b | 0.00 |
| leaderboard_math_intermediate_algebra_hard.exact_match,none | 0 | 0.004 | 0.004 | 95b | 0.00 |
| leaderboard_ifeval.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
| leaderboard_ifeval.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
| leaderboard_ifeval.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
| leaderboard_ifeval.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
| leaderboard_bbh_snarks.acc_norm,none | 0.927 | 0.904 | -0.022 | 72b | 0.98 |
| leaderboard_bbh_web_of_lies.acc_norm,none | 0.676 | 0.616 | -0.060 | 72b | 0.91 |
| leaderboard_bbh_penguins_in_a_table.acc_norm,none | 0.719 | 0.767 | 0.048 | 95b | 1.07 |
| leaderboard_bbh_hyperbaton.acc_norm,none | 0.892 | 0.9 | 0.008 | 95b | 1.01 |
| leaderboard_bbh_object_counting.acc_norm,none | 0.612 | 0.544 | -0.068 | 72b | 0.89 |
| leaderboard_musr_object_placements.acc_norm,none | 0.258 | 0.285 | 0.027 | 95b | 1.11 |
| leaderboard_bbh_logical_deduction_five_objects.acc_norm,none | 0.704 | 0.592 | -0.112 | 72b | 0.84 |
| leaderboard_musr_team_allocation.acc_norm,none | 0.456 | 0.396 | -0.060 | 72b | 0.87 |
| leaderboard_bbh_navigate.acc_norm,none | 0.832 | 0.788 | -0.044 | 72b | 0.95 |
| leaderboard_bbh_tracking_shuffled_objects_seven_objects.acc_norm,none | 0.34 | 0.304 | -0.036 | 72b | 0.89 |
| leaderboard_bbh_formal_fallacies.acc_norm,none | 0.776 | 0.756 | -0.020 | 72b | 0.97 |
| leaderboard_gpqa_main.acc_norm,none | 0.375 | 0.355 | -0.020 | 72b | 0.95 |
| leaderboard_bbh_disambiguation_qa.acc_norm,none | 0.744 | 0.772 | 0.028 | 95b | 1.04 |
| leaderboard_bbh_tracking_shuffled_objects_five_objects.acc_norm,none | 0.32 | 0.284 | -0.036 | 72b | 0.89 |
| leaderboard_bbh_date_understanding.acc_norm,none | 0.784 | 0.764 | -0.020 | 72b | 0.97 |
| leaderboard_bbh_geometric_shapes.acc_norm,none | 0.464 | 0.412 | -0.052 | 72b | 0.89 |
| leaderboard_bbh_reasoning_about_colored_objects.acc_norm,none | 0.864 | 0.84 | -0.024 | 72b | 0.97 |
| leaderboard_musr_murder_mysteries.acc_norm,none | 0.548 | 0.604 | 0.056 | 95b | 1.10 |
| leaderboard_bbh_ruin_names.acc_norm,none | 0.888 | 0.86 | -0.028 | 72b | 0.97 |
| leaderboard_bbh_logical_deduction_seven_objects.acc_norm,none | 0.644 | 0.664 | 0.020 | 95b | 1.03 |
| leaderboard_bbh.acc_norm,none | 0.726 | 0.701 | -0.025 | 72b | 0.97 |
| leaderboard_bbh_temporal_sequences.acc_norm,none | 0.996 | 0.968 | -0.028 | 72b | 0.97 |
| leaderboard_mmlu_pro.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
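Finally, a usage sketch (not taken from the card itself): it loads the merged model with transformers and formats a prompt with the Qwen chat template mentioned above. The generation settings are arbitrary examples, and a dense 95B model will generally need multiple GPUs or one of the quantized GGUF/EXL2/mlx builds listed earlier.

```python
# Usage sketch: chat-style generation with the Qwen chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ssmits/Qwen2.5-95B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful creative writing assistant."},
    {"role": "user", "content": "Write the opening paragraph of a mystery novel set in Rotterdam."},
]
# apply_chat_template renders the Qwen chat format and appends the assistant prefix.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```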

| Model | License / Tag | Downloads | Likes |
|:---|:---|---:|---:|
| Qwen2.5-95B-Instruct | — | 10 | 4 |
| Falcon2-5.5B-Norwegian-GGUF | llama-cpp | 10 | 0 |
| Phi-3-medium-128k-instruct-Q5_K_M-GGUF | llama-cpp | 9 | 2 |
| Falcon2-5.5B-Romanian | license:apache-2.0 | 9 | 1 |
| Falcon2-5.5B-Danish-GGUF | llama-cpp | 9 | 0 |
| Falcon2-5.5B-Spanish-GGUF | llama-cpp | 9 | 0 |
| Falcon2-5.5B-German-GGUF | llama-cpp | 8 | 0 |
| Phi-3-medium-4k-instruct-Q5_K_M-GGUF | llama-cpp | 7 | 0 |
| Zamba2-1.2B-instruct-Dutch | license:apache-2.0 | 6 | 1 |
| Phi-3-medium-128k-instruct-Q8_0-GGUF | llama-cpp | 5 | 3 |
| Llama-3.1-Nemotron-92B-Instruct-HF-late | llama | 5 | 2 |
| Phi-3-medium-4k-instruct-Q3_K_M-GGUF | llama-cpp | 5 | 0 |
| Qwen2-7B-embed-base | license:apache-2.0 | 5 | 0 |
| Falcon2-5.5B-multilingual | license:apache-2.0 | 4 | 3 |
| Phi-3-medium-128k-instruct-Q6_K-GGUF | llama-cpp | 4 | 1 |
| Phi-3-medium-4k-instruct-Q8_0-GGUF | llama-cpp | 4 | 1 |
| Falcon2-5.5B-German | license:apache-2.0 | 4 | 0 |
| Falcon2-5.5B-Danish | license:apache-2.0 | 4 | 0 |
| Qwen2.5-125B-Instruct | base_model:ssmits/Qwen2.5-125B-Instruct | 4 | 0 |
| Phi-3-medium-128k-instruct-Q2_K-GGUF | llama-cpp | 3 | 0 |
| Falcon2-8B-multilingual | base_model:ssmits/Falcon2-5.5B-multilingual | 3 | 0 |
| Falcon2-8B-Norwegian | base_model:ssmits/Falcon2-5.5B-Norwegian | 3 | 0 |
| Falcon2-5.5B-multilingual-embed-base | ssmits/Falcon2-5.5B-multilingual | 3 | 0 |
| Falcon2-5.5B-Swedish | license:apache-2.0 | 2 | 0 |
| Falcon2-nano-test | — | 2 | 0 |
| Llama-3.1-Nemotron-92B-Instruct-HF-early | llama | 1 | 2 |
| Falcon2-5.5B-Dutch | license:apache-2.0 | 1 | 1 |
| Falcon2-5.5B-French | license:apache-2.0 | 1 | 0 |
| Falcon2-5.5B-Czech | license:apache-2.0 | 1 | 0 |
| Falcon2-5.5B-Spanish | license:apache-2.0 | 1 | 0 |
| Falcon2-8B-Danish | base_model:ssmits/Falcon2-5.5B-Danish | 1 | 0 |
| Falcon2-8B-Czech | base_model:ssmits/Falcon2-5.5B-Czech | 1 | 0 |
| Falcon2-mini-test | — | 1 | 0 |
| Falcon2-tiny-test | — | 1 | 0 |
| Zamba2-1.2B | license:apache-2.0 | 1 | 0 |
| Falcon2-5.5B-Norwegian | license:apache-2.0 | 0 | 1 |
| ModernBERT-base-dutch-test | — | 0 | 1 |