Babelscape

13 models

wikineural-multilingual-ner

---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
widget:
- text: My name is Wolfgang and I live in Berlin.
- text: George Washington went to Washington.
- text: Mi nombre es Sarah y vivo en Londres.
- text: Меня зовут Симона, и я живу в Риме.
tags:
- named-entity-recognition
- sequence-tagger-model
datasets:
- Babelscape/wikineural
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
- multilingual
license:
- cc-by-nc-sa-4.0
pretty_name: wikineural-dataset
---
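The widget texts in the metadata above double as demo inputs. A minimal sketch of running them through the model with the Hugging Face `pipeline` API (the `transformers` import is deferred inside the function so the helper stays importable without the heavy dependency; `aggregation_strategy="simple"` merges subword pieces into whole entity spans):

```python
# Demo inputs taken from the model card's widget examples.
EXAMPLES = [
    "My name is Wolfgang and I live in Berlin.",
    "George Washington went to Washington.",
    "Mi nombre es Sarah y vivo en Londres.",
    "Меня зовут Симона, и я живу в Риме.",
]


def run_ner(texts):
    """Tag each text with the multilingual NER model; returns a list of
    entity lists, one per input text."""
    from transformers import pipeline  # deferred: heavy dependency

    ner = pipeline(
        "ner",
        model="Babelscape/wikineural-multilingual-ner",
        aggregation_strategy="simple",  # merge word pieces into entities
    )
    return [ner(t) for t in texts]


if __name__ == "__main__":
    # Downloads the model on first run.
    for text, ents in zip(EXAMPLES, run_ner(EXAMPLES)):
        print(text, "->", [(e["word"], e["entity_group"]) for e in ents])
```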

license:cc-by-nc-sa-4.0
266,966
156

rebel-large

license:cc-by-nc-sa-4.0
33,820
230

mrebel-large

license:cc-by-nc-sa-4.0
996
75

t5-base-summarization-claim-extractor

Model Name: T5-base-summarization-claim-extractor
Authors: Alessandro Scirè, Karim Ghonim, and Roberto Navigli
Contact: [email protected], [email protected]
Language: English
Primary Use: Extraction of atomic claims from a summary

The T5-base-summarization-claim-extractor is a model for extracting atomic claims from summaries. It is based on the T5 architecture, fine-tuned specifically for claim extraction. The model was introduced in the paper "FENICE: Factuality Evaluation of summarization based on Natural Language Inference and Claim Extraction" by Alessandro Scirè, Karim Ghonim, and Roberto Navigli. FENICE leverages Natural Language Inference (NLI) and claim extraction to evaluate the factuality of summaries. An ArXiv version is also available.

Intended uses:
- Extract atomic claims from summaries.
- Serve as a component in pipelines for factuality evaluation of summaries.

Note: the model outputs all claims in a single string. Split the string into sentences to retrieve the individual claims.

Training: for details on the training process, see section 4.1 of the paper (https://aclanthology.org/2024.findings-acl.841.pdf).

| Model | easiness P | easiness R | easiness F1 |
|:-------------------------------------:|:----------:|:----------:|:-----------:|
| GPT-3.5 | 80.1 | 70.9 | 74.9 |
| t5-base-summarization-claim-extractor | 79.2 | 68.8 | 73.4 |

Table 1: Easiness precision (easiness P), recall (easiness R), and F1 score (easiness F1) for the LLM-based claim extractor, namely GPT-3.5, and t5-base-summarization-claim-extractor, assessed on ROSE (Liu et al., 2023b).

Further details on the model's performance and the metrics used can be found in the paper (section 4.1).
For more details about FENICE, check out the GitHub repository: Babelscape/FENICE

If you use this model in your work, please cite the FENICE paper above.

Limitations:
- The model is specifically designed for extracting claims from summaries and may not perform well on other types of text.
- The model is currently available only in English and may not generalize well to other languages.

Users should be aware that while this model extracts claims that can be evaluated for factuality, it does not determine the truthfulness of those claims. It should therefore be used together with other tools or human judgment when evaluating the reliability of summaries.

This work was made possible thanks to the support of Babelscape and Sapienza NLP.
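The card notes that the model emits all claims as one string, which the caller must split back into sentences. A minimal sketch of that step, assuming a simple period-based split (a heuristic of ours; a proper sentence tokenizer also works), with the model call deferred behind a helper so the splitter stays usable without `transformers`:

```python
def split_claims(output_text: str) -> list[str]:
    """Split the model's single output string into individual claims.

    Splitting on '. ' boundaries is a simple heuristic (an assumption);
    the card only says to split the string into sentences.
    """
    parts = [c.strip() for c in output_text.replace("\n", " ").split(".")]
    return [c + "." for c in parts if c]


def extract_claims(summary: str) -> list[str]:
    """Run the claim extractor on one summary and return a claim list."""
    from transformers import T5ForConditionalGeneration, T5Tokenizer  # deferred

    name = "Babelscape/t5-base-summarization-claim-extractor"
    tok = T5Tokenizer.from_pretrained(name)
    model = T5ForConditionalGeneration.from_pretrained(name)
    ids = tok(summary, return_tensors="pt").input_ids
    out = model.generate(ids)  # downloads the model on first run
    return split_claims(tok.decode(out[0], skip_special_tokens=True))
```

The returned claims can then be fed, one by one, into an NLI model for factuality scoring, which is how FENICE uses them.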

license:cc-by-nc-sa-4.0
712
13

cner-base

323
6

mrebel-base

license:cc-by-nc-sa-4.0
86
7

mrebel-large-32

license:cc-by-nc-sa-4.0
36
7

wsl-reader-deberta-v3-base

license:cc-by-nc-sa-4.0
8
4

mdeberta-v3-base-triplet-critic-xnli

6
10

wsl-retriever-e5-base-v2

license:cc-by-nc-sa-4.0
5
3

wsl-base

license:cc-by-nc-sa-4.0
4
3

wsl-retriever-e5-base-v2-wordnet-index

license:cc-by-nc-sa-4.0
3
5

FENICE

license:cc-by-nc-sa-4.0
0
6