Emilio407
nllb-200-3.3B-8bit
nllb-200-3.3B-4bit
Qwen2-1.5B-Instruct-GGUF
dolphin-2.9.2-qwen2-7b-GGUF
madlad400-3b-mt-8bit
Qwen2-7B-Instruct-GGUF
dolphin-2.9.3-qwen2-1.5b-GGUF
gemma-2-9b-it-abliterated-GGUF
Qwen2-0.5B-Instruct-Abliterated-GGUF
Qwen2-0.5B-Instruct-GGUF
Qwen2-1.5B-Instruct-Abliterated-GGUF
guarani-jopara-llama-3.1-8B-instruct-v1-GGUF
prostate-mri-T2w-v03
guarani-jopara-Qwen2-0.5B-Instruct-v1-GGUF
dolphin-2.9.1-llama-3-8b-GGUF
dolphin-2.9.3-qwen2-0.5b-GGUF
guarani-jopara-gemma-2-2b-it-v1-GGUF
nllb-200-distilled-600M-4bit
Dolphin3.0-Qwen2.5-0.5B-GRPO-V2
nllb-200-1.3B-8bit
Llama-3.2-3B-Instruct-Jopara-V2-GGUF
Dolphin3.0-Qwen2.5-0.5B-GRPO-V1-GGUF
- Developed by: Emilio407
- License: apache-2.0
- Finetuned from model: cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B

This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
SmolLM2-135M-Instruct-Reasoner-V1-GGUF
nllb-200-distilled-600M-8bit
SmolLM2-360M-Instruct-Reasoner-V1-GGUF
Llama-3-8B-Instruct-Gradient-4194k-GGUF
granite-3b-code-instruct-GGUF
nllb-200-distilled-1.3B-8bit
stablelm-2-1_6b-chat-GGUF
TinyLlama-1.1B-Chat-v1.0-GGUF
Dolphin3.0-Qwen2.5-0.5B-GRPO-V1
madlad400-10b-mt-4bit
0. TL;DR
1. Model Details
2. Usage
3. Uses
4. Bias, Risks, and Limitations
5. Training Details
6. Evaluation
7. Environmental Impact
8. Citation

MADLAD-400-10B-MT is a multilingual machine translation model based on the T5 architecture. It was trained on 250 billion tokens covering over 450 languages using publicly available data, and it is competitive with significantly larger models.

Disclaimer: Juarez Bochi, who was not involved in this research, converted the original weights and wrote the contents of this model card based on the original paper and Flan-T5.

- Model type: Language model
- Language(s) (NLP): Multilingual (400+ languages)
- License: Apache 2.0
- Related Models: All MADLAD-400 Checkpoints
- Original Checkpoints: All Original MADLAD-400 Checkpoints
- Resources for more information:
  - Research paper
  - GitHub Repo
  - Hugging Face MADLAD-400 Docs (Similar to T5) - Pending PR

Find below some example scripts showing how to use the model. First, install the required Python packages:

`pip install transformers accelerate sentencepiece`

> Primary intended uses: machine translation and multilingual NLP tasks across over 400 languages.
> Primary intended users: the research community.

> These models are trained on general-domain data and are therefore not meant to work on domain-specific tasks out of the box. Moreover, these research models have not been assessed for production use cases.

> We note that we evaluate on only 204 of the languages supported by these models, and only on machine translation and few-shot machine translation tasks. Users must carefully consider this model's suitability for their own use case.

> We trained these models with MADLAD-400 and publicly available data to create baseline models that support NLP for over 400 languages, with a focus on languages underrepresented in large-scale corpora.
> Given that these models were trained with web-crawled datasets that may contain sensitive, offensive, or otherwise low-quality content despite extensive preprocessing, it is still possible that these issues in the underlying training data may cause differences in model performance and toxic (or otherwise problematic) output for certain domains. Moreover, large models are dual-use technologies that have specific risks associated with their use and development. We point the reader to surveys such as those written by Weidinger et al. or Bommasani et al. for a more detailed discussion of these risks, and to Liebling et al. for a thorough discussion of the risks of machine translation systems.

> We train models of various sizes: a 3B, 32-layer parameter model, a 7.2B, 48-layer parameter model, and a 10.7B, 32-layer parameter model. We share all parameters of the model across language pairs, and use a SentencePiece model with 256k tokens shared on both the encoder and decoder sides. Each input sentence has a token prepended to the source sentence to indicate the target language.

> For both the machine translation and language models, MADLAD-400 is used. For the machine translation model, a combination of parallel data sources covering 157 languages is also used. Further details are described in the paper.

> For evaluation, we used the WMT, NTREX, Flores-200, and Gatones datasets, as described in Section 4.3 of the paper.

> The translation quality of this model varies based on language, as seen in the paper, and likely varies by domain, though we have not assessed this.
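The install step above (`transformers`, `accelerate`, `sentencepiece`) can be followed by a minimal translation sketch. This is an illustrative assumption, not this repo's verified usage: it assumes the standard Transformers T5 API and the `<2xx>` target-language token that MADLAD-400 prepends to each source sentence, and it names the original `google/madlad400-10b-mt` checkpoint rather than this 4-bit conversion, which may need a different loader.

```python
def madlad_prompt(text: str, target_lang: str) -> str:
    """Prepend the target-language token MADLAD-400 expects,
    e.g. <2pt> to translate into Portuguese."""
    return f"<2{target_lang}> {text}"

# Typical generation flow with the original checkpoint (commented out
# here to avoid the multi-gigabyte model download):
#
# from transformers import T5ForConditionalGeneration, T5Tokenizer
# model = T5ForConditionalGeneration.from_pretrained("google/madlad400-10b-mt")
# tokenizer = T5Tokenizer.from_pretrained("google/madlad400-10b-mt")
# input_ids = tokenizer(madlad_prompt("I love pizza!", "pt"),
#                       return_tensors="pt").input_ids
# outputs = model.generate(input_ids=input_ids, max_new_tokens=128)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))

print(madlad_prompt("I love pizza!", "pt"))  # → <2pt> I love pizza!
```

The same prompt format applies to every size in the family; only the checkpoint name changes.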