ajibawa-2023

32 models

| Model | License | Downloads | Likes |
|---|---|---:|---:|
| SlimOrca-13B | llama | 722 | 11 |
| Uncensored-Frank-33B | llama | 721 | 7 |
| carl-7b | llama | 720 | 6 |
| Python-Code-13B | llama | 716 | 7 |
| carl-13b | llama | 716 | 6 |
| Uncensored-Jordan-13B | llama | 715 | 7 |
| scarlett-7b | llama | 715 | 4 |
| Uncensored-Jordan-7B | llama | 713 | 5 |
| Code-13B | llama | 710 | 13 |
| Uncensored-Frank-7B | llama | 710 | 5 |
| scarlett-13b | llama | 710 | 3 |
| Python-Code-33B | llama | 708 | 8 |
| carl-llama-2-13b | llama | 707 | 11 |
| Uncensored-Jordan-33B | llama | 706 | 7 |
| Uncensored-Frank-13B | llama | 705 | 8 |
| carl-33b | llama | 704 | 10 |
| scarlett-33b | llama | 702 | 25 |
| Code-33B | llama | 691 | 7 |
| Code-290k-13B | llama | 569 | 8 |

Young Children Storyteller Mistral 7B

This model is based on my dataset Children-Stories-Collection, which has over 0.9 million stories meant for young children (ages 6 to 12). Drawing upon synthetic datasets meticulously designed with the developmental needs of young children in mind, Young-Children-Storyteller is more than just a tool: it's a companion on the journey of discovery and learning. With its boundless storytelling capabilities, this model serves as a gateway to a universe brimming with wonder, adventure, and endless possibilities. Whether it's embarking on a whimsical adventure with colorful characters, unraveling mysteries in far-off lands, or simply sharing moments of joy and laughter, Young-Children-Storyteller fosters a love for language and storytelling from the earliest of ages. Through interactive engagement and age-appropriate content, it nurtures creativity, empathy, and critical thinking skills, laying a foundation for lifelong learning and exploration. Rooted in a vast repository of over 0.9 million specially curated stories tailored for young minds, Young-Children-Storyteller is poised to revolutionize the way children engage with language and storytelling.

Kindly note this is the qLoRA version, another exception. Special thanks to MarsupialAI for quantizing the model.

The entire dataset was trained on Mistral-7B-v0.1 using 4 x A100 80GB GPUs; training for 3 epochs took more than 30 hours. The Axolotl codebase was used for training. You can modify the above prompt as per your requirements.

I want to say special thanks to the Open Source community for helping and guiding me to better understand AI/model development.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---------------------------------|----:|
| Avg. | 71.08 |
| AI2 Reasoning Challenge (25-Shot) | 68.69 |
| HellaSwag (10-Shot) | 84.67 |
| MMLU (5-Shot) | 64.11 |
| TruthfulQA (0-shot) | 62.62 |
| Winogrande (5-shot) | 81.22 |
| GSM8k (5-shot) | 65.20 |

License: apache-2.0 · Downloads: 357 · Likes: 23
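Since the card identifies this as a Mistral-7B-v0.1 fine-tune hosted on Hugging Face, here is a minimal sketch of loading it with the transformers library. The repo id, prompt, and generation settings below are illustrative assumptions, not details taken from the card.

```python
# Hedged sketch: load the storyteller model with Hugging Face transformers.
# The repo id is an assumption based on the model's display name;
# check the author's Hugging Face profile for the exact path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Young-Children-Storyteller-Mistral-7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # place layers on available GPU(s)/CPU
)

prompt = "Tell a short bedtime story about a curious fox who learns to share."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=300,   # illustrative sampling settings, not from the card
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```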

| Model | License | Downloads | Likes |
|---|---|---:|---:|
| Code-Llama-3-8B | llama | 11 | 31 |
| Uncensored-Frank-Llama-3-8B | llama | 11 | 13 |
| General-Stories-Mistral-7B | apache-2.0 | 10 | 5 |
| Code-Mistral-7B | apache-2.0 | 9 | 15 |
| Scarlett-Llama-3-8B-v1.0 | llama | 9 | 5 |
| SlimOrca-Llama-3-8B | llama | 4 | 4 |
| Scarlett-Llama-3-8B | llama | 2 | 8 |
| Code-Jamba-v0.1 | apache-2.0 | 1 | 7 |
| Code-290k-6.7B-Instruct | llama | 1 | 6 |
| OpenHermes-2.5-Code-290k-13B | llama | 0 | 11 |
| Scarlett-Phi | cc-by-nc-nd-4.0 | 0 | 8 |
| WikiHow-Mistral-Instruct-7B | apache-2.0 | 0 | 7 |