CAMeL-Lab

54 models

bert-base-arabic-camelbert-mix-sentiment

language: ar • license: apache-2.0 • widget example: "أنا بخير" ("I am fine")

license:apache-2.0
509,253
7

bert-base-arabic-camelbert-da-sentiment

CAMeLBERT-DA SA Model

Model description: The CAMeLBERT-DA SA Model is a Sentiment Analysis (SA) model built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model. For fine-tuning, we used the ASTD, ArSAS, and SemEval datasets. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here.

Intended uses: You can use the CAMeLBERT-DA SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline.

How to use: Use the model with the CAMeL Tools SA component, or use it directly with a transformers pipeline. Note: downloading our models requires `transformers>=3.5.0`; otherwise, you can download the models manually.
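The transformers-pipeline route mentioned above can be sketched as follows. The model id comes from this listing; the exact output label strings are an assumption (typical SA fine-tunes emit labels such as "positive"/"negative"), so treat the printed result as illustrative rather than guaranteed.

```python
# Minimal sketch: sentiment analysis with the CAMeLBERT-DA SA model
# via the Hugging Face transformers pipeline.
MODEL_ID = "CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment"


def load_sa_pipeline(model_id: str = MODEL_ID):
    """Build a sentiment-analysis pipeline for the SA model.

    The import is deferred so this module can be inspected without
    transformers installed; the card notes transformers>=3.5.0 is needed
    to download the model.
    """
    from transformers import pipeline

    return pipeline("sentiment-analysis", model=model_id)


if __name__ == "__main__":
    sa = load_sa_pipeline()  # downloads the model on first use
    print(sa("أنا بخير"))  # "I am fine" -- expect a positive-leaning label
```

The CAMeL Tools SA component remains the recommended route per the card; the pipeline form is the lighter-weight alternative.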

license:apache-2.0
109,963
49

bert-base-arabic-camelbert-msa-ner

CAMeLBERT MSA NER Model

Model description: The CAMeLBERT MSA NER Model is a Named Entity Recognition (NER) model built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model. For fine-tuning, we used the ANERcorp dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here.

Intended uses: You can use the CAMeLBERT MSA NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline.

How to use: Use the model with the CAMeL Tools NER component, or use it directly with a transformers pipeline. Note: downloading our models requires `transformers>=3.5.0`; otherwise, you can download the models manually.
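As with the SA model, the pipeline route can be sketched briefly. The model id is taken from this listing; the entity tag scheme (e.g. B-PER/I-LOC style tags from ANERcorp) is an assumption about the fine-tune, so check the model's config for the actual label set.

```python
# Minimal sketch: named entity recognition with the CAMeLBERT MSA NER
# model via the transformers token-classification ("ner") pipeline.
MODEL_ID = "CAMeL-Lab/bert-base-arabic-camelbert-msa-ner"


def load_ner_pipeline(model_id: str = MODEL_ID):
    """Build an NER pipeline; the import is deferred so the module loads
    without transformers installed (the card requires transformers>=3.5.0
    to download the model)."""
    from transformers import pipeline

    # aggregation_strategy="simple" merges subword pieces into word-level
    # entity spans, which is usually what you want for display.
    return pipeline("ner", model=model_id, aggregation_strategy="simple")


if __name__ == "__main__":
    ner = load_ner_pipeline()  # downloads the model on first use
    print(ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة"))
```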

license:apache-2.0
25,544
6

bert-base-arabic-camelbert-mix-ner

license:apache-2.0
22,089
14

bert-base-arabic-camelbert-msa

license:apache-2.0
10,423
10

bert-base-arabic-camelbert-mix

license:apache-2.0
2,428
18

camelbert-msa-zaebuc-ged-43

license:mit
898
0

bert-base-arabic-camelbert-msa-sentiment

license:apache-2.0
815
7

bert-base-arabic-camelbert-da

license:apache-2.0
633
28

arabart-qalb14-gec-ged-13

license:mit
377
3

bert-base-arabic-camelbert-ca

license:apache-2.0
333
12

camelbert-msa-qalb14-ged-13

license:mit
305
1

bert-base-arabic-camelbert-msa-sixteenth

license:apache-2.0
120
4

bert-base-arabic-camelbert-ca-poetry

license:apache-2.0
109
4

arabart-qalb15-gec-ged-13

license:mit
106
2

bert-base-arabic-camelbert-da-ner

license:apache-2.0
100
0

bert-base-arabic-camelbert-msa-pos-msa

license:apache-2.0
89
0

bert-base-arabic-camelbert-ca-sentiment

license:apache-2.0
87
3

text-editing-qalb14-nopnx

license:mit
85
1

bert-base-arabic-camelbert-mix-did-madar-corpus26

license:apache-2.0
83
4

bert-base-arabic-camelbert-mix-did-madar-corpus6

CAMeLBERT-Mix DID MADAR Corpus6 Model

Model description: The CAMeLBERT-Mix DID MADAR Corpus6 Model is a dialect identification (DID) model built by fine-tuning the CAMeLBERT-Mix model. For fine-tuning, we used the MADAR Corpus 6 dataset, which includes 6 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here.

Intended uses: You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline. This model will also be available in CAMeL Tools soon.

How to use: Use the model with a transformers pipeline. Note: downloading our models requires `transformers>=3.5.0`; otherwise, you can download the models manually.

license:apache-2.0
69
1

camelbert-msa-zaebuc-ged-13

license:mit
54
3

camelbert-msa-qalb15-ged-13

license:mit
39
1

bert-base-arabic-camelbert-mix-pos-msa

license:apache-2.0
36
1

text-editing-qalb14-pnx

license:mit
36
1

bert-base-arabic-camelbert-ca-pos-egy

license:apache-2.0
31
3

text-editing-coda

Model description: `CAMeL-Lab/text-editing-coda` is a text editing model tailored for grammatical error correction (GEC) in dialectal Arabic (DA). The model is based on AraBERTv02, which we fine-tuned using the MADAR CODA corpus. This model was introduced in our ACL 2025 paper, Enhancing Text Editing for Grammatical Error Correction: Arabic as a Case Study, where we refer to it as SWEET (Subword Edit Error Tagger). It achieved SOTA performance on the MADAR CODA dataset. Details about the training procedure, data preprocessing, and hyperparameters are available in the paper. The fine-tuning code and associated resources are publicly available on our GitHub repository: https://github.com/CAMeL-Lab/text-editing.

Intended uses: To use the `CAMeL-Lab/text-editing-coda` model, you must clone our text editing GitHub repository and follow the installation requirements. We used this SWEET model to report results on the MADAR CODA dev and test sets in our paper.

How to use: Clone our text editing GitHub repository and follow the installation requirements.
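The setup the card describes can be sketched as the following shell steps. Only the repository URL comes from the card; the virtual-environment step and the `requirements.txt` file name are assumptions about a typical Python repo, so follow the repository's own README for the authoritative instructions.

```shell
# Hedged sketch of the setup for the CAMeL-Lab text-editing models
# (applies to text-editing-coda and the text-editing-zaebuc-* models).
git clone https://github.com/CAMeL-Lab/text-editing.git
cd text-editing

# Assumed steps -- check the repo README for the actual requirements:
python -m venv .venv && . .venv/bin/activate
pip install -r requirements.txt
```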

license:mit
31
1

bert-base-arabic-camelbert-msa-did-madar-twitter5

license:apache-2.0
29
3

bert-base-arabic-camelbert-da-pos-msa

license:apache-2.0
29
1

bert-base-arabic-camelbert-ca-pos-msa

license:apache-2.0
23
0

text-editing-zaebuc-pnx

Model description: `CAMeL-Lab/text-editing-zaebuc-pnx` is a text editing model tailored for grammatical error correction (GEC) in Modern Standard Arabic (MSA). The model is based on AraBERTv02, which we fine-tuned using the ZAEBUC dataset. This model was introduced in our ACL 2025 paper, Enhancing Text Editing for Grammatical Error Correction: Arabic as a Case Study, where we refer to it as SWEET (Subword Edit Error Tagger). The model was fine-tuned to fix punctuation (i.e., Pnx) errors. Details about the training procedure, data preprocessing, and hyperparameters are available in the paper. The fine-tuning code and associated resources are publicly available on our GitHub repository: https://github.com/CAMeL-Lab/text-editing.

Intended uses: To use the `CAMeL-Lab/text-editing-zaebuc-pnx` model, you must clone our text editing GitHub repository and follow the installation requirements. We used this SWEET Pnx model to report results on the ZAEBUC dev and test sets in our paper. This model is intended to be used with the SWEET NoPnx (`CAMeL-Lab/text-editing-zaebuc-nopnx`) model.

How to use: Clone our text editing GitHub repository and follow the installation requirements.

license:mit
21
1

text-editing-zaebuc-nopnx

Model description: `CAMeL-Lab/text-editing-zaebuc-nopnx` is a text editing model tailored for grammatical error correction (GEC) in Modern Standard Arabic (MSA). The model is based on AraBERTv02, which we fine-tuned using the ZAEBUC dataset. This model was introduced in our ACL 2025 paper, Enhancing Text Editing for Grammatical Error Correction: Arabic as a Case Study, where we refer to it as SWEET (Subword Edit Error Tagger). The model was fine-tuned to fix non-punctuation (i.e., NoPnx) errors. Details about the training procedure, data preprocessing, and hyperparameters are available in the paper. The fine-tuning code and associated resources are publicly available on our GitHub repository: https://github.com/CAMeL-Lab/text-editing.

Intended uses: To use the `CAMeL-Lab/text-editing-zaebuc-nopnx` model, you must clone our text editing GitHub repository and follow the installation requirements. We used this SWEET NoPnx model to report results on the ZAEBUC dev and test sets in our paper. This model is intended to be used with the SWEET Pnx (`CAMeL-Lab/text-editing-zaebuc-pnx`) model.

How to use: Clone our text editing GitHub repository and follow the installation requirements.

license:mit
19
1

readability-arabertv2-d3tok-CE

Model description: AraBERTv2+D3Tok+CE is a readability assessment model built by fine-tuning the AraBERTv2 model with cross-entropy loss (CE). For fine-tuning, we used the D3Tok input variant from BAREC-Corpus-v1.0. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment."

Intended uses: You can use the AraBERTv2+D3Tok+CE model as part of the transformers pipeline. You need to preprocess your text into the D3Tok input variant using the preprocessing step here.

license:mit
19
1

readability-arabertv02-word-CE

Model description: AraBERTv02+Word+CE is a readability assessment model built by fine-tuning the AraBERTv02 model with cross-entropy loss (CE). For fine-tuning, we used the Word input variant from BAREC-Corpus-v1.0. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment."

Intended uses: You can use the AraBERTv02+Word+CE model as part of the transformers pipeline.

How to use: Use the model with a transformers pipeline.
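Since this model takes raw (Word-variant) input, the pipeline usage is direct. The model id is from this listing; readability models emit a per-sentence class label, but the label naming (e.g. numeric readability levels) is an assumption, so check the model's `id2label` mapping.

```python
# Minimal sketch: fine-grained readability assessment with the
# AraBERTv02+Word+CE model via the transformers text-classification pipeline.
MODEL_ID = "CAMeL-Lab/readability-arabertv02-word-CE"


def load_readability_pipeline(model_id: str = MODEL_ID):
    """Build a text-classification pipeline for readability scoring;
    the import is deferred so the module loads without transformers."""
    from transformers import pipeline

    return pipeline("text-classification", model=model_id)


if __name__ == "__main__":
    readability = load_readability_pipeline()  # downloads on first use
    print(readability("هذه جملة بسيطة"))  # "This is a simple sentence"
```

Note that the D3Tok variants (e.g. readability-arabertv2-d3tok-CE) additionally require the D3Tok preprocessing step described in their cards before calling the pipeline; this Word-variant model does not.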

license:mit
17
1

bert-base-arabic-camelbert-msa-quarter

license:apache-2.0
13
2

bert-base-arabic-camelbert-mix-pos-glf

license:apache-2.0
12
1

bert-base-arabic-camelbert-mix-pos-egy

license:apache-2.0
10
3

arat5-coda

license:mit
8
1

bert-base-arabic-camelbert-msa-eighth

license:apache-2.0
6
2

bert-base-arabic-camelbert-msa-did-nadi

license:apache-2.0
5
0

readability-camelbert-word-CE

Model description: CAMeLBERT+Word+CE is a readability assessment model built by fine-tuning the CAMeLBERT-msa model with cross-entropy loss (CE). For fine-tuning, we used the Word input variant from BAREC-Corpus-v1.0. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment."

Intended uses: You can use the CAMeLBERT+Word+CE model as part of the transformers pipeline.

How to use: Use the model with a transformers pipeline.

license:mit
5
0

bert-base-arabic-camelbert-ca-ner

license:apache-2.0
4
2

bert-base-arabic-camelbert-da-poetry

license:apache-2.0
4
0

bert-base-arabic-camelbert-msa-half

license:apache-2.0
3
2

bert-base-arabic-camelbert-msa-poetry

license:apache-2.0
2
1

readability-arabertv2-d3tok-reg

Model description: AraBERTv2+D3Tok+Reg is a readability assessment model built by fine-tuning the AraBERTv2 model with Mean Squared Error loss (Reg). For fine-tuning, we used the D3Tok input variant from BAREC-Corpus-v1.0. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment."

Intended uses: You can use the AraBERTv2+D3Tok+Reg model as part of the transformers pipeline. You need to preprocess your text into the D3Tok input variant using the preprocessing step here.

license:mit
2
1

bert-base-arabic-camelbert-da-pos-glf

license:apache-2.0
2
0

bert-base-arabic-camelbert-mix-did-nadi

license:apache-2.0
2
0

bert-base-arabic-camelbert-ca-pos-glf

license:apache-2.0
1
1

bert-base-arabic-camelbert-msa-pos-egy

license:apache-2.0
1
0

arabart-zaebuc-gec-ged-13

license:mit
0
2

bert-base-arabic-camelbert-da-pos-egy

license:apache-2.0
0
1

camelbert-catib-parser

license:mit
0
1

arat5-coda-did

license:mit
0
1