Davlan

91 models

bert-base-multilingual-cased-ner-hrl

license:afl-3.0
languages: ar, de, en, es, fr, it, lv, nl, pt, zh, multilingual

216,763
75

xlm-roberta-base-ner-hrl

122,008
25

distilbert-base-multilingual-cased-ner-hrl

58,851
83

xlm-roberta-large-ner-hrl

55,105
13

xlm-roberta-base-wikiann-ner

9,502
6

afro-xlmr-large-76L

license:mit
2,052
4

afro-xlmr-base

license:mit
1,765
8

afro-xlmr-large

license:mit
1,158
11

oyo-bert-base

license:apache-2.0
622
0

oyo-mt-bert-large

license:apache-2.0
577
0

bert-base-multilingual-cased-finetuned-amharic

249
2

bert-base-multilingual-cased-finetuned-swahili

96
3

afro-xlmr-large-114L

license:mit
82
0

afrisenti-twitter-sentiment-afroxlmr-large

license:apache-2.0
75
0

afro-xlmr-small

license:mit
60
1

m2m100_418M-eng-yor-mt

Model description: m2m100_418M-eng-yor-mt is a machine translation model from English to Yorùbá based on a fine-tuned facebook/m2m100_418M model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model was fine-tuned on the JW300 Yorùbá corpus and Menyo-20k.

Limitations and bias: This model is limited by its training dataset and may not generalize well to all use cases in different domains.

Training data: JW300 corpus and the Menyo-20k dataset.

Training procedure: Trained on an NVIDIA V100 GPU.

Eval results on the test set (BLEU): the fine-tuned m2m100_418M achieves 13.39 BLEU on the Menyo-20k test set, while mt5-base achieves 9.82.
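A minimal usage sketch, assuming the `transformers` (and `torch`) packages are installed and the Hugging Face Hub is reachable; the model id `Davlan/m2m100_418M-eng-yor-mt` is taken from this listing. M2M100 needs a source language set on the tokenizer and the target language forced on the decoder:

```python
# Sketch: English -> Yorùbá with the fine-tuned M2M100 checkpoint.
# Model weights are downloaded on first use; run only with Hub access.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

def translate_en_to_yo(text: str,
                       model_id: str = "Davlan/m2m100_418M-eng-yor-mt") -> str:
    tokenizer = M2M100Tokenizer.from_pretrained(model_id)
    model = M2M100ForConditionalGeneration.from_pretrained(model_id)
    tokenizer.src_lang = "en"  # M2M100 uses plain ISO codes: "en", "yo"
    inputs = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.get_lang_id("yo"),  # force Yorùbá output
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

if __name__ == "__main__":
    print(translate_en_to_yo("Where are you?"))
```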

40
1

naija-twitter-sentiment-afriberta-large

37
4

afro-xlmr-mini

license:mit
35
2

byt5-base-eng-yor-mt

Model description: byt5-base-eng-yor-mt is a machine translation model from English to Yorùbá based on a fine-tuned byt5-base model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model was fine-tuned on the JW300 Yorùbá corpus and Menyo-20k.

Limitations and bias: This model is limited by its training dataset and may not generalize well to all use cases in different domains.

Training data: JW300 corpus and the Menyo-20k dataset.

Training procedure: Trained on an NVIDIA V100 GPU.

Eval results on the test set (BLEU): the fine-tuned byt5-base achieves 12.23 BLEU on the Menyo-20k test set, while mt5-base achieves 9.82.
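A minimal sketch, assuming `transformers` is installed and the model id `Davlan/byt5-base-eng-yor-mt` from this listing. Unlike the M2M100 and mBART models, ByT5 operates on raw UTF-8 bytes, so no language codes or Yorùbá-specific vocabulary are involved:

```python
# Sketch: English -> Yorùbá with the byte-level ByT5 checkpoint.
# Weights are downloaded on first use; run only with Hub access.
from transformers import AutoTokenizer, T5ForConditionalGeneration

def translate_en_to_yo_byt5(text: str,
                            model_id: str = "Davlan/byt5-base-eng-yor-mt") -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = T5ForConditionalGeneration.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt")
    # Byte-level sequences are long, so allow generous output length.
    generated = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(generated[0], skip_special_tokens=True)
```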

31
2

xlm-roberta-base-finetuned-amharic

30
1

mbart50-large-yor-eng-mt

Model description: mbart50-large-yor-eng-mt is a machine translation model from Yorùbá to English based on a fine-tuned facebook/mbart-large-50 model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model was fine-tuned on the JW300 Yorùbá corpus and Menyo-20k. Because the pre-trained model does not support Yorùbá, it was trained using the Swahili language code (sw_KE); you therefore need to use sw_KE as the language code when evaluating the model.

Limitations and bias: This model is limited by its training dataset and may not generalize well to all use cases in different domains.

Training data: JW300 corpus and the Menyo-20k dataset.

Training procedure: Trained on an NVIDIA V100 GPU.

Eval results on the test set (BLEU): the fine-tuned mbart50-large achieves 15.88 BLEU on the Menyo-20k test set, while mt5-base achieves 15.57.
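A minimal sketch of the sw_KE workaround the card describes, assuming `transformers` is installed and the model id `Davlan/mbart50-large-yor-eng-mt` from this listing. The Swahili code stands in for Yorùbá on the source side, since mBART-50 has no Yorùbá language token:

```python
# Sketch: Yorùbá -> English with the fine-tuned mBART-50 checkpoint.
# Weights are downloaded on first use; run only with Hub access.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

def translate_yo_to_en(text: str,
                       model_id: str = "Davlan/mbart50-large-yor-eng-mt") -> str:
    # src_lang="sw_KE" is the Swahili code repurposed for Yorùbá input.
    tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="sw_KE")
    model = MBartForConditionalGeneration.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],  # target: English
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
```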

30
0

xlm-roberta-base-finetuned-swahili

20
3

bert-base-multilingual-cased-finetuned-yoruba

Model description: bert-base-multilingual-cased-finetuned-yoruba is a Yoruba BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Yorùbá-language texts. It provides better performance than multilingual BERT on text classification and named entity recognition datasets.

Intended uses & limitations: You can use this model with the Transformers pipeline for masked token prediction. It is limited by its training dataset of entity-annotated news articles from a specific span of time, and may not generalize well to all use cases in different domains.

Training data: Bible, JW300, Menyo-20k, the Yoruba Embedding corpus, CC-Aligned, Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.

Training procedure: Trained on a single NVIDIA V100 GPU.

Eval results on the test set (F-score, average over 5 runs):

| Dataset | mBERT F1 | yobert F1 |
|---|---|---|
| MasakhaNER | 78.97 | 82.58 |
| BBC Yorùbá Textclass | 75.13 | 79.11 |
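A minimal sketch of the masked-token prediction the card mentions, assuming `transformers` is installed and the model id `Davlan/bert-base-multilingual-cased-finetuned-yoruba` from this listing:

```python
# Sketch: masked-token prediction with the Yorùbá-fine-tuned BERT.
# Weights are downloaded on first use; run only with Hub access.
from transformers import pipeline

def predict_masked_token(sentence: str) -> list:
    """Return candidate completions for the [MASK] slot in `sentence`."""
    unmasker = pipeline(
        "fill-mask",
        model="Davlan/bert-base-multilingual-cased-finetuned-yoruba",
    )
    # Each candidate is a dict with `token_str`, `score`, and the filled `sequence`.
    return unmasker(sentence)
```

The sentence passed in must contain BERT's literal `[MASK]` token.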

20
1

mT5_base_yoruba_adr

Model description: mT5_base_yoruba_adr is an automatic diacritics restoration (ADR) model for Yorùbá based on a fine-tuned mT5-base model. It achieves state-of-the-art performance for adding the correct diacritics (tonal marks) to Yorùbá texts.

Intended uses & limitations: You can use this model with the Transformers pipeline for ADR. It is limited by its training dataset of entity-annotated news articles from a specific span of time, and may not generalize well to all use cases in different domains.

Training data: JW300 Yorùbá corpus and the Menyo-20k dataset.

Training procedure: Trained on a single NVIDIA V100 GPU.

Eval results on the test set (BLEU): 64.63 BLEU on the Global Voices test set; 70.27 BLEU on the Menyo-20k test set.

By Jesujoba Alabi and David Adelani.
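A minimal sketch of ADR with the Transformers pipeline, assuming `transformers` is installed and the model id `Davlan/mT5_base_yoruba_adr` from this listing:

```python
# Sketch: Yorùbá diacritics restoration with the fine-tuned mT5.
# Weights are downloaded on first use; run only with Hub access.
from transformers import pipeline

def restore_diacritics(undiacritized: str) -> str:
    adr = pipeline("text2text-generation", model="Davlan/mT5_base_yoruba_adr")
    # The pipeline returns a list of dicts; `generated_text` holds the restored text.
    return adr(undiacritized)[0]["generated_text"]
```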

20
0

m2m100_418M-yor-eng-mt

Model description: m2m100_418M-yor-eng-mt is a machine translation model from Yorùbá to English based on a fine-tuned facebook/m2m100_418M model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model was fine-tuned on the JW300 Yorùbá corpus and Menyo-20k.

Limitations and bias: This model is limited by its training dataset and may not generalize well to all use cases in different domains.

Training data: JW300 corpus and the Menyo-20k dataset.

Training procedure: Trained on an NVIDIA V100 GPU.

Eval results on the test set (BLEU): the fine-tuned m2m100_418M achieves 16.76 BLEU on the Menyo-20k test set, while mt5-base achieves 15.57.

17
0

xlm-roberta-base-finetuned-yoruba

15
1

mt5_base_yor_eng_mt

15
0

xlm-roberta-large-finetuned-hausa

license:mit
15
0

xlm-roberta-base-finetuned-lingala

license:apache-2.0
14
1

bert-base-multilingual-cased-finetuned-hausa

12
1

xlm-roberta-base-finetuned-arabic

license:mit
12
1

mt5_base_eng_yor_mt

Model description: mt5_base_eng_yor_mt is a machine translation model from English to Yorùbá based on a fine-tuned mT5-base model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model was fine-tuned on the JW300 Yorùbá corpus and Menyo-20k.

Intended uses & limitations: You can use this model with the Transformers pipeline for MT. It is limited by its training dataset of entity-annotated news articles from a specific span of time, and may not generalize well to all use cases in different domains.

Training data: JW300 corpus and the Menyo-20k dataset.

Training procedure: Trained on a single NVIDIA V100 GPU.

Eval results on the test set (BLEU): 9.82 BLEU on the Menyo-20k test set.

12
0

xlm-roberta-base-finetuned-shona

license:apache-2.0
11
3

byt5-base-yor-eng-mt

Model description: byt5-base-yor-eng-mt is a machine translation model from Yorùbá to English based on a fine-tuned byt5-base model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model was fine-tuned on the JW300 Yorùbá corpus and Menyo-20k.

Limitations and bias: This model is limited by its training dataset and may not generalize well to all use cases in different domains.

Training data: JW300 corpus and the Menyo-20k dataset.

Training procedure: Trained on an NVIDIA V100 GPU.

Eval results on the test set (BLEU): the fine-tuned byt5-base achieves 14.05 BLEU on the Menyo-20k test set, while mt5-base achieves 15.57.

11
2

afro-xlmr-large-29L

license:mit
11
0

mbart50-large-eng-yor-mt

Model description: mbart50-large-eng-yor-mt is a machine translation model from English to Yorùbá based on a fine-tuned facebook/mbart-large-50 model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model was fine-tuned on the JW300 Yorùbá corpus and Menyo-20k. Because the pre-trained model does not support Yorùbá, it was trained using the Swahili language code (sw_KE); you therefore need to use sw_KE as the language code when evaluating the model.

Limitations and bias: This model is limited by its training dataset and may not generalize well to all use cases in different domains.

Training data: JW300 corpus and the Menyo-20k dataset.

Training procedure: Trained on an NVIDIA V100 GPU.

Eval results on the test set (BLEU): the fine-tuned mbart50-large achieves 13.39 BLEU on the Menyo-20k test set, while mt5-base achieves 9.82.

11
0

afro-xlmr-large-61L

license:mit
10
7

xlm-roberta-large-masakhaner

10
2

kano-bert-base

license:apache-2.0
9
0

bert-base-multilingual-cased-masakhaner

8
4

naija-bert-base

license:apache-2.0
7
0

bert-base-multilingual-cased-finetuned-wolof

6
2

distilbert-base-multilingual-cased-masakhaner

6
2

xlm-roberta-base-finetuned-hausa

6
1

xlm-roberta-base-finetuned-luganda

6
1

xlm-roberta-base-finetuned-luo

6
0

xlm-roberta-base-finetuned-zulu

5
1

xlm-roberta-base-finetuned-igbo

5
0

xlm-roberta-base-finetuned-kinyarwanda

5
0

xlm-roberta-base-finetuned-xhosa

license:apache-2.0
4
1

mt5-small-diacritizer-menyo

license:apache-2.0
4
0

oyo-mt-bert-base

4
0

naija-bert-large

license:apache-2.0
4
0

afro-xlmr-base-114L

license:mit
4
0

bert-base-multilingual-cased-finetuned-igbo

3
1

omowe-t5-small-diacritizer-menyo

3
1

afro-xlmr-large-76L_script

license:mit
3
1

xlm-roberta-large-finetuned-zulu

3
0

oyo-mt-t5-base

license:apache-2.0
3
0

afro-xlmr-large-114L_1epoch

license:apache-2.0
3
0

afro-xlmr-large-114L_2epoch

3
0

xlm-roberta-large-finetuned-somali

2
1

oyo-t5-small

license:apache-2.0
2
1

bert-base-multilingual-cased-finetuned-kinyarwanda

2
0

xlm-roberta-base-finetuned-english

license:apache-2.0
2
0

xlm-roberta-base-finetuned-naija

2
0

oyo-mt-teams-base

license:apache-2.0
2
0

oyo-t5-base

license:apache-2.0
2
0

oyo-t5-tiny-v32k

2
0

oyo-mt-t5-tiny-v16k

license:mit
2
0

oyo-mt-t5-small

license:apache-2.0
2
0

afro-xlmr-large-114L_3epoch

2
0

m2m100_418m-ft-efi-en

2
0

afro-xlmr-base-76L_script

license:mit
1
3

xlm-roberta-base-finetuned-chichewa

license:apache-2.0
1
1

xlm-roberta-base-masakhaner

1
1

bert-base-multilingual-cased-finetuned-luo

1
0

mt5-small-en-pcm

1
0

mt5-small-pcm-en

1
0

xlm-roberta-large-finetuned-igbo

1
0

xlm-roberta-large-finetuned-naija

1
0

xlm-roberta-large-finetuned-english

license:mit
1
0

bloom-560m_am_continual-pretrain_10000samples

1
0

omowe-t5-small-diacritizer-all-und-full

1
0

m2m100_418m-ft-en-efi

1
0

xlm-roberta-base-sadilar-ner

0
2

africa_llama13b_lora_model2

llama
0
2

xlm-roberta-base-finetuned-wolof

0
1

xlm-roberta-large-finetuned-luganda

0
1

xlm-roberta-large-finetuned-luo

0
1

afro-xlmr-large-75L

license:mit
0
1