ajibawa-2023

- SlimOrca-13B
- Uncensored-Frank-33B
- carl-7b
- Python-Code-13B
- carl-13b
- Uncensored-Jordan-13B
- scarlett-7b
- Uncensored-Jordan-7B
- Code-13B
- Uncensored-Frank-7B
- scarlett-13b
- Python-Code-33B
- carl-llama-2-13b
- Uncensored-Jordan-33B
- Uncensored-Frank-13B
- carl-33b
- scarlett-33b
- Code-33B
- Code-290k-13B
Young Children Storyteller Mistral 7B
This model is based on my dataset Children-Stories-Collection, which contains over 0.9 million stories meant for young children (ages 6 to 12). Drawing upon synthetic datasets meticulously designed with the developmental needs of young children in mind, Young-Children-Storyteller is more than just a tool: it is a companion on the journey of discovery and learning. With its boundless storytelling capabilities, this model serves as a gateway to a universe brimming with wonder, adventure, and endless possibilities. Whether it is embarking on a whimsical adventure with colorful characters, unraveling mysteries in far-off lands, or simply sharing moments of joy and laughter, Young-Children-Storyteller fosters a love for language and storytelling from the earliest ages. Through interactive engagement and age-appropriate content, it nurtures creativity, empathy, and critical-thinking skills, laying a foundation for lifelong learning and exploration. Rooted in this repository of over 0.9 million specially curated stories tailored for young minds, Young-Children-Storyteller is poised to change the way children engage with language and storytelling.

Kindly note this is a qLoRA version. Special thanks to MarsupialAI for quantizing the model.

Training details:

- Base model: Mistral-7B-v0.1; the entire dataset was trained on it.
- Hardware: 4 x A100 80GB GPUs.
- Training ran for 3 epochs and took more than 30 hours.
- The Axolotl codebase was used for training.

You can modify the above prompt as per your requirements.

I want to say special thanks to the open-source community for helping and guiding me to better understand AI/model development.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 71.08 |
| AI2 Reasoning Challenge (25-shot) | 68.69 |
| HellaSwag (10-shot)               | 84.67 |
| MMLU (5-shot)                     | 64.11 |
| TruthfulQA (0-shot)               | 62.62 |
| Winogrande (5-shot)               | 81.22 |
| GSM8k (5-shot)                    | 65.20 |
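The card notes the prompt can be modified to your requirements, but the template itself is not reproduced above. As a minimal sketch, assuming the standard Mistral-7B `[INST]` instruction format (the base model is Mistral-7B-v0.1; verify against the card's actual template), a prompt builder might look like:

```python
def build_prompt(system: str, user: str) -> str:
    """Wrap a system note and a user request in the common
    Mistral-7B [INST] instruction format (assumed, not confirmed
    by the card; adjust to match the model's actual template)."""
    return f"<s>[INST] {system}\n\n{user} [/INST]"


prompt = build_prompt(
    "You are a storyteller for children aged 6 to 12.",
    "Tell a short story about a curious fox who learns to share.",
)
print(prompt)
```

The resulting string could then be fed to the model, for example via a `transformers` text-generation pipeline pointed at the repository's checkpoint.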