ByT5-Small-fine-tuned2
by savinugunarathna
Language Model · Other · New · 55 downloads · Early-stage
Edge AI: Mobile · Laptop · Server
Quick Summary
A ByT5-Small model fine-tuned with LoRA through a three-phase curriculum to translate Singlish (romanized Sinhala) into Sinhala script, for example mapping "mama gedara yanawa" to "මම ගෙදර යනවා".
Training Data Analysis
🔵 Good (6.0/10)
An overview of the training datasets used by ByT5-Small-fine-tuned2, with a quality assessment.
Specialized For: general, multilingual
Training Datasets (1)
c4 · 🔵 6/10 · general, multilingual
Key Strengths
- Scale and Accessibility: 750GB of publicly available, filtered text
- Systematic Filtering: Documented heuristics make the cleaning pipeline reproducible (see the sketch after these lists)
- Language Diversity: Although English-only, the corpus captures a wide range of writing styles and domains
Considerations
- English-Only: Limits multilingual applications
- Filtering Limitations: Offensive content and low-quality text remain despite filtering
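
To make the filtering point concrete, the sketch below streams a few C4 records and applies one of the corpus's documented heuristics (keeping only lines that end in terminal punctuation). It assumes the Hugging Face datasets library and the public allenai/c4 mirror, neither of which this card specifies; it is an illustration, not the original C4 pipeline.

from datasets import load_dataset

# Stream so the full ~750GB corpus is never downloaded locally.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

def keep_terminal_punct_lines(text: str) -> str:
    # One documented C4 heuristic: retain only lines ending in
    # terminal punctuation. Illustrative, not the full C4 pipeline.
    kept = [line for line in text.splitlines()
            if line.rstrip().endswith((".", "!", "?", '"'))]
    return "\n".join(kept)

# Inspect a few filtered records.
for i, record in enumerate(c4):
    print(keep_terminal_punct_lines(record["text"])[:200], "\n---")
    if i >= 2:
        break

Streaming returns records lazily, so this runs in seconds even though the underlying corpus is about 750GB.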
Code Examples
Usage (Python · transformers)
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("savinugunarathna/ByT5-Small-fine-tuned2")
model = AutoModelForSeq2SeqLM.from_pretrained("savinugunarathna/ByT5-Small-fine-tuned2")
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
def translate(text: str) -> str:
    prompt = f"translate Singlish to Sinhala: {text}"
    inputs = tokenizer(prompt, return_tensors="pt", max_length=512, truncation=True).to(device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
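
# A batched variant (a sketch, not part of the original card): padding lets
# generate() process several inputs at once, and ByT5 tokenizes raw UTF-8
# bytes, so Sinhala script and romanized text need no extra preprocessing.
def batch_translate(texts: list[str]) -> list[str]:
    prompts = [f"translate Singlish to Sinhala: {t}" for t in texts]
    inputs = tokenizer(prompts, return_tensors="pt", padding=True,
                       max_length=512, truncation=True).to(device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)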
# Interactive loop (type 0 to exit)
while True:
    user_input = input("Singlish: ").strip()
    if user_input == "0":
        break
    if user_input:
        print(f"Sinhala: {translate(user_input)}\n")

Citation (BibTeX)
@misc{gunarathna2025byt5singlish,
title={ByT5-Small Singlish to Sinhala: A Three-Phase Curriculum Approach with LoRA Fine-Tuning},
author={Gunarathna, Savinu},
year={2025},
howpublished={Hugging Face Model Hub},
note={\url{https://huggingface.co/savinugunarathna/ByT5-Small-fine-tuned2}}
}

References (BibTeX)
@article{sumanathilaka2025swa,
title={Swa-bhasha Resource Hub: Romanized Sinhala to Sinhala Transliteration Systems and Data Resources},
author={Sumanathilaka, Deshan and Perera, Sameera and Dharmasiri, Sachithya and Athukorala, Maneesha and Herath, Anuja Dilrukshi and Dias, Rukshan and Gamage, Pasindu and Weerasinghe, Ruvan and Priyadarshana, YHPP},
journal={arXiv preprint arXiv:2507.09245},
year={2025}
}
@inproceedings{Nsina2024,
title={NSINA: A News Corpus for Sinhala},
author={Hettiarachchi, Hansi and Premasiri, Damith and Uyangodage, Lasitha and Ranasinghe, Tharindu},
booktitle={Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
year={2024}
}
@article{ranasinghe2022sold,
title={SOLD: Sinhala Offensive Language Dataset},
author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
journal={arXiv preprint arXiv:2212.00851},
year={2022}
}