akjindal53244


# Llama-3.1-Storm-8B

**Authors:** Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha

πŸ€— Hugging Face announcement blog: https://huggingface.co/blog/akjindal53244/llama31-storm8b

We present Llama-3.1-Storm-8B, a model that significantly outperforms Meta AI's Llama-3.1-8B-Instruct and Hermes-3-Llama-3.1-8B across diverse benchmarks, as shown in the performance comparison plot in the next section. Our approach consists of three key steps:

1. **Self-Curation:** We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g., 70B, 405B).
2. **Targeted fine-tuning:** We performed Spectrum-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR) and freezing the remaining modules. In our work, 50% of layers are frozen.
3. **Model Merging:** We merged our fine-tuned model with the Llama-Spark model using the SLERP method. This merging method produces a blended model whose characteristics are smoothly interpolated from both parent models, ensuring the resultant model captures the essence of each parent.

Llama-3.1-Storm-8B improves Llama-3.1-8B-Instruct across 10 diverse benchmarks covering instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.

## πŸ† Introducing Llama-3.1-Storm-8B

Llama-3.1-Storm-8B builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function-calling capabilities within the 8B parameter model class.
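The SLERP merge in step 3 can be illustrated as spherical linear interpolation over flattened weight tensors. The following is a minimal NumPy sketch of the idea, not the actual merge pipeline used for the model; the `t=0.5` blend factor and the per-tensor treatment are assumptions for illustration.

```python
# Minimal sketch of SLERP (spherical linear interpolation) between two
# weight tensors. Illustrative only: t=0.5 is an assumed blend factor,
# and real merges operate tensor-by-tensor over full checkpoints.
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float = 0.5, eps: float = 1e-8) -> np.ndarray:
    """Interpolate between two weight tensors along the great circle."""
    a, b = w_a.ravel(), w_b.ravel()
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(a_n @ b_n, -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two parents
    if theta < eps:                 # nearly parallel: fall back to plain LERP
        return ((1 - t) * a + t * b).reshape(w_a.shape)
    s = np.sin(theta)
    out = (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
    return out.reshape(w_a.shape)

# Blending two toy "layer weights" halfway between the parents:
merged = slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), t=0.5)
print(merged)  # a direction halfway between the two unit vectors
```

Unlike plain averaging, SLERP follows the arc between the two parents, which preserves the magnitude structure of the weights while blending their directions.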
As shown in the left subplot of the above figure, Llama-3.1-Storm-8B improves Meta-Llama-3.1-8B-Instruct across various benchmarks: instruction-following (IFEval), knowledge-driven QA (GPQA, MMLU-Pro), reasoning (ARC-C, MuSR, BBH), reduced hallucinations (TruthfulQA), and function calling (BFCL). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.

We also benchmarked our model against the recently published Hermes-3-Llama-3.1-8B, which is likewise built on top of Llama-3.1-8B-Instruct. As shown in the right subplot of the above figure, Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks; Hermes-3-Llama-3.1-8B surpasses it on the MuSR benchmark, and the two models perform comparably on BBH.

## Llama-3.1-Storm-8B Model Strengths

Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore it and look forward to seeing how it will be utilized in various projects and applications.

- ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%)
- BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%)

Note: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.

## Llama-3.1-Storm-8B Models

1. `BF16`: Llama-3.1-Storm-8B
2. ⚑ `FP8`: Llama-3.1-Storm-8B-FP8-Dynamic
3. ⚑ `GGUF`: Llama-3.1-Storm-8B-GGUF
4. πŸš€ Ollama: `ollama run ajindal/llama3.1-storm:8b`

## πŸ’» How to Use the Model

The Hugging Face `transformers` library loads the model in `bfloat16` by default. This is the dtype used by the Llama-3.1-Storm-8B checkpoint, so it is the recommended way to run the model for best results. Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM.
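A minimal conversational sketch with the `transformers.pipeline()` API follows. The system/user messages and `max_new_tokens` value are illustrative assumptions, not the card's exact example; the model id is taken from the card.

```python
# Sketch of conversational use via transformers.pipeline(). The chat
# messages and generation settings below are illustrative assumptions.
model_id = "akjindal53244/Llama-3.1-Storm-8B"

def build_messages(user_prompt: str) -> list[dict]:
    # Chat-format messages consumed by the pipeline's chat template.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    # Heavy imports kept here so the helper above stays dependency-free.
    import torch
    import transformers

    pipe = transformers.pipeline(
        "text-generation",
        model=model_id,
        model_kwargs={"torch_dtype": torch.bfloat16},  # checkpoint's native dtype
        device_map="auto",
    )
    out = pipe(build_messages("What is 2+2?"), max_new_tokens=128)
    print(out[0]["generated_text"])
```

Passing `torch_dtype=torch.bfloat16` explicitly simply makes the card's recommended default visible in code.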
The following sections illustrate the usage with simple, hands-on examples.

### Conversational Use-case

Use with πŸ€— Transformers, e.g., via the `transformers.pipeline()` API.

### Function-Calling Use-case

Llama-3.1-Storm-8B has impressive function-calling capabilities compared to Meta-Llama-3.1-8B-Instruct, as demonstrated by the BFCL benchmark.

#### Prompt Format for Function Calling

Llama-3.1-Storm-8B is trained with a specific system prompt for function calling. This system prompt should be used while passing `LIST_OF_TOOLS` as input.

### Alignment Note

While Llama-3.1-Storm-8B did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.

## Support Our Work

With three team members spread across three different time zones, we have won the NeurIPS LLM Efficiency Challenge 2023 and four other competitions in the finance and Arabic-LLM space. We have also published a SOTA mathematical reasoning model. Llama-3.1-Storm-8B is our most valuable contribution to the open-source community so far. We are committed to developing efficient generalist LLMs, and we are seeking both computational resources and innovative collaborators to drive this initiative forward.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.84 |
| IFEval (0-Shot)     | 80.51 |
| BBH (3-Shot)        | 31.49 |
| MATH Lvl 5 (4-Shot) | 16.62 |
| GPQA (0-shot)       | 10.18 |
| MuSR (0-shot)       |  9.12 |
| MMLU-PRO (5-shot)   | 31.15 |
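To make the function-calling flow concrete, here is a hedged sketch of assembling a request. The `format_system_prompt` wrapper, its prompt text, and the `get_weather` tool schema are all hypothetical placeholders, since the exact system prompt the model was trained with is not reproduced above; only the idea of injecting a tool list into the system prompt comes from the card.

```python
# Hedged sketch of preparing a function-calling request. The system-prompt
# template below is a HYPOTHETICAL placeholder, not the exact prompt the
# model was trained with; only the tool-list idea is from the card.
import json

def format_system_prompt(list_of_tools: list[dict]) -> str:
    # Hypothetical wrapper: inject the JSON tool list into a system prompt.
    return (
        "You have access to the following tools:\n"
        + json.dumps(list_of_tools, indent=2)
    )

tools = [
    {
        "name": "get_weather",  # example tool, not from the card
        "description": "Get the current weather for a city.",
        "parameters": {"city": {"type": "string"}},
    }
]

messages = [
    {"role": "system", "content": format_system_prompt(tools)},
    {"role": "user", "content": "What's the weather in Paris?"},
]
print(messages[0]["content"])
```

In practice, the trained system prompt from the model card should replace the placeholder template, with the tool list passed where the card specifies `LIST_OF_TOOLS`.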

Other models from akjindal53244:

- Arithmo-Mistral-7B (license: apache-2.0)
- Mistral-7B-v0.1-Open-Platypus (license: apache-2.0)
- Llama-3.1-Storm-8B-GGUF
- Llama-3.1-Storm-8B-FP8-Dynamic