# Tess-2.0-Llama-3-8B

Tess, short for Tesoro (Treasure in Italian), is a general-purpose Large Language Model series. Tess-2.0-Llama-3-8B was trained on the meta-llama/Meta-Llama-3-8B base. Compute for Tess-2.0-Llama-3-8B was sponsored by KindoAI.

## Prompt Format

The prompt format used for this fine-tune is Llama-3 (see the sketch at the end of this card).

## Training Methodology

Tess-2.0-Llama-3 was trained on the (still curating) Tess-2.0 dataset. The Tess-2.0 dataset contains ~100K high-quality code and general training samples. The dataset is highly uncensored, so the model will almost always follow instructions. The model was fine-tuned for only one epoch with a low learning rate to preserve its entropy as much as possible.

## Join My General AI Discord (NeuroLattice)

https://discord.gg/Hz6GrwGFKD

## Limitations

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.
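## Example Prompt

A minimal sketch of the Llama-3 chat format named above, assuming a single-turn conversation; the system prompt text and the `build_llama3_prompt` helper are hypothetical illustrations, not part of the release. Whether the uploaded tokenizer also ships a `chat_template` for `tokenizer.apply_chat_template()` is an assumption, so the prompt is assembled by hand here.

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama-3 instruct format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical usage: pass the resulting string to your preferred
# inference stack (e.g. transformers, vLLM) and stop on <|eot_id|>.
prompt = build_llama3_prompt(
    "You are Tess, a helpful assistant.",
    "Write a Python function that reverses a string.",
)
print(prompt)
```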