Pentium95

3 models

H34v7 DXP Zero V1.0 24b Small IMatrix GGUF

Imatrix GGUF quants for DXP-Zero-V1.0-24b-Small-Instruct. IQ4_XS is all you need if you have 16+ GB of RAM/VRAM. The model may lack the necessary evil for twisty stories or dark adventures, but it makes up for it with coherent storytelling over long contexts. Perfect for romance, adventure, sci-fi, and even general-purpose use.

I was browsing for a Mistral finetune, found this base model by ZeroAgency, and it turned out to be exactly what I wanted. Notable improvements I observed:
- Longer output for storytelling and roleplay.
- Dynamic output length: shorter prompts yield shorter responses, longer prompts yield longer ones.
- Less repetition (though this depends on your prompt and settings).
- No degradation at 49444/65536 tokens in my testing. The model learns the ongoing context well, which strongly shapes its output; the downside is that it picks up patterns from previous turns too quickly and treats them as the new standard.

This model was merged using the TIES merge method with ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf as the base. Models merged:
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1

License: apache-2.0 • 448 downloads • 1 like
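Since the card singles out the IQ4_XS quant, here is a minimal sketch of downloading and running it with llama-cpp-python. The repo id and filename are hypothetical placeholders (check the repository's file list for the real names), and the context size simply mirrors the 65536-token window tested above.

```python
# Minimal sketch: download and run the recommended IQ4_XS quant.
# Repo id and filename below are assumptions -- verify them against
# the actual file list on the Hugging Face repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Pentium95/H34v7-DXP-Zero-V1.0-24b-Small-IMatrix-GGUF",  # hypothetical repo id
    filename="DXP-Zero-V1.0-24b-Small-Instruct-IQ4_XS.gguf",         # hypothetical filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=65536,      # the card reports no degradation at ~49k/65536 tokens
    n_gpu_layers=-1,  # offload all layers if you have the VRAM; reduce otherwise
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening scene of a sci-fi romance."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```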

SmolLM3 3B Instruct Anime

This model is a fine-tuned version of HuggingFaceTB/SmolLM3-3B-Base. It was trained using zerofata/Instruct-Anime. Framework versions:
- PEFT 0.17.1
- TRL 0.23.0
- Transformers 4.56.2
- PyTorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.22.1

15 downloads • 1 like
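Given the PEFT and TRL versions listed, the training run was presumably a standard TRL SFT job with a LoRA adapter. A minimal sketch, where the LoRA hyperparameters, dataset split, and output directory are illustrative assumptions rather than the author's actual configuration:

```python
# Minimal sketch of a PEFT + TRL SFT run on the listed dataset.
# LoRA hyperparameters and output directory are illustrative
# assumptions, not the author's actual settings.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("zerofata/Instruct-Anime", split="train")  # split assumed

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM3-3B-Base",  # base model named on the card
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),  # assumed values
    args=SFTConfig(output_dir="smollm3-3b-instruct-anime", num_train_epochs=1),
)
trainer.train()
```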

SmolLMathematician-3B

This model is a fine-tuned version of HuggingFaceTB/SmolLM3-3B-Base. It has been trained using TRL on TIGER-Lab/MathInstruct. Framework versions:
- PEFT 0.17.1
- TRL 0.23.0
- Transformers 4.56.2
- PyTorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.22.1

0 downloads • 1 like
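Since the card lists PEFT, the released weights are presumably a LoRA adapter on top of the base model, so loading it for inference follows the usual PEFT pattern. A minimal sketch, where the adapter repo id is a hypothetical placeholder:

```python
# Minimal sketch: apply the fine-tuned PEFT adapter to the base model.
# The adapter repo id is a hypothetical placeholder.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceTB/SmolLM3-3B-Base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "Pentium95/SmolLMathematician-3B")  # hypothetical repo id

inputs = tokenizer("Solve: 12 * (3 + 4) = ?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```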