# Ke T5 Base
## Model Details

### Model Description

The developers of the Text-To-Text Transfer Transformer (T5) write:

> With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.

T5-Base is the checkpoint with 220 million parameters.

- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu
- Shared by: Korea Electronics Technology Institute Artificial Intelligence Research Center
- Model type: Text Generation
- Language(s) (NLP): More information needed
- License: More information needed
- Related Models:
  - Parent Model: T5
- Resources for more information:
  - GitHub Repo
  - KE-T5 GitHub Repo
  - Paper
  - Associated Paper
  - Blog Post

## Uses

### Direct Use

The developers write in a blog post:

> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.

### Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

## Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

## Training Details

### Training Data

The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised and supervised tasks. See the t5-base model card for further information.

## Evaluation

### Testing Data, Factors & Metrics

The developers evaluated the model on 24 tasks; see the research paper for full details.

### Results

For full results for T5-Base, see the research paper, Table 14.

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: Google Cloud TPU Pods
- Hours used: More information needed
- Cloud Provider: GCP
- Compute Region: More information needed
- Carbon Emitted: More information needed

## Citation

APA:

- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140), 1-67.

## Model Card Authors

Korea Electronics Technology Institute Artificial Intelligence Research Center, in collaboration with Ezi Ozoani and the Hugging Face team.

## How to Get Started with the Model

See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.
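As a quick start, here is a minimal, hedged sketch of loading the checkpoint with the `transformers` library and running a text-to-text generation. The Hub model ID `KETI-AIR/ke-t5-base` is an assumption based on the organization and model name on this card, and the pre-trained checkpoint is typically fine-tuned before being applied to a downstream task.

```python
# A minimal usage sketch, not an official example. The model ID below is an
# assumption based on this card's organization and model name; the pre-trained
# checkpoint is usually fine-tuned before use on a downstream task.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "KETI-AIR/ke-t5-base"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# In the text-to-text framing, both the input and the output are plain strings.
text = "summarize: T5 reframes every NLP task as mapping an input string to an output string."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```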
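To make the unsupervised half of the training mixture concrete, the following sketch illustrates T5-style span corruption: contiguous spans of the input are replaced with sentinel tokens, and the target spells out the dropped spans in order. The sentence is a made-up example in the format described in the T5 paper, not actual C4 training data.

```python
# Illustrative span-corruption pair in the T5 sentinel format (<extra_id_N>).
# The sentence is invented for illustration, not drawn from C4.
original = "Thank you for inviting me to your party last week."
corrupted_input = "Thank you <extra_id_0> me to your party <extra_id_1> week."
target = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"
# A seq2seq model is trained with ordinary cross-entropy to map
# corrupted_input -> target, exactly like any other text-to-text task.
```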
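The Direct Use section above quotes the developers on casting regression as text generation. Below is a hedged sketch of that idea following the STS-B recipe from the T5 paper, where similarity scores are rounded to the nearest 0.2 and rendered as strings; the exact input prefix shown is an illustrative assumption.

```python
# A hedged sketch of "regression as text": the target is the string form of a
# number, so the usual text-to-text loss applies unchanged. The 0.2 rounding
# step follows the STS-B recipe in the T5 paper; the input format is illustrative.
def regression_target_as_text(score: float, step: float = 0.2) -> str:
    """Round a score to the nearest multiple of `step` and render it as text."""
    return str(round(round(score / step) * step, 1))

example = {
    "input": "stsb sentence1: A man is playing a guitar. sentence2: A man plays the guitar.",
    "target": regression_target_as_text(4.73),  # -> "4.8"
}
print(example)
```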
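Finally, for the Environmental Impact section, the Lacoste et al. (2019) estimate reduces to power draw times runtime times the grid's carbon intensity. Because the hours used are listed above as "More information needed," every number in this sketch is a hypothetical placeholder; the Machine Learning Impact calculator itself should be used for real estimates.

```python
# A back-of-the-envelope version of the Machine Learning Impact estimate
# (Lacoste et al., 2019): emissions ~= power draw x runtime x grid intensity.
# All inputs are hypothetical placeholders; the actual hours used for this
# model are listed on the card as "More information needed".
def co2_kg(power_kw: float, hours: float, kg_co2_per_kwh: float, pue: float = 1.1) -> float:
    """Estimate training emissions in kg CO2eq; pue adds datacenter overhead."""
    return power_kw * hours * pue * kg_co2_per_kwh

# e.g., a hypothetical 4 kW accelerator slice running 100 hours on a 0.4 kgCO2/kWh grid:
print(f"{co2_kg(power_kw=4.0, hours=100.0, kg_co2_per_kwh=0.4):.1f} kg CO2eq")  # 176.0
```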