deepset/roberta-base-squad2
---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/roberta-base-squad2
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - type: exact_match
      value: 79.9309
      name: Exact Match
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhNjg5YzNiZGQ1YTIyYTAwZGUwOWEzZTRiYzdjM2QzYjA3ZTUxNDM1NjE1MTUyMjE1MGY1YzEzMjRjYzVjYiIsInZlcnNpb24iOjF9.EH5JJo
---
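The metadata above describes an extractive QA model fine-tuned on SQuAD 2.0. A minimal sketch of querying it with the Hugging Face `transformers` question-answering pipeline (assuming `transformers` and a PyTorch backend are installed; the question and context strings here are illustrative):

```python
from transformers import pipeline

# Load deepset/roberta-base-squad2 into a question-answering pipeline.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# SQuAD-2.0-style extractive QA: the answer is a span of the given context.
result = qa(
    question="What task is the model fine-tuned for?",
    context="deepset/roberta-base-squad2 is a RoBERTa model fine-tuned "
            "on SQuAD 2.0 for extractive question answering.",
)
print(result["answer"], result["score"])
```

Because the model was trained on SQuAD 2.0, it can also predict that a question is unanswerable from the context, in which case the pipeline returns a low-scoring or empty span.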
gelectra-large
---
language: de
license: mit
datasets:
- wikipedia
- OPUS
- OpenLegalData
- oscar
---
bert-large-uncased-whole-word-masking-squad2
---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/bert-large-uncased-whole-word-masking-squad2
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - type: exact_match
      value: 80.8846
      name: Exact Match
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2E5ZGNkY2ExZWViZGEwNWE3OGRmMWM2ZmE4ZDU4ZDQ1OGM3ZWE0NTVmZjFmYmZjZmJmNjJmYTc3NTM3OTk3OSIsI
---
deberta-v3-base-injection
xlm-roberta-base-squad2
bert-base-cased-squad2
tinyroberta-squad2
roberta-base-squad2-distilled
gbert-base
gbert-large
xlm-roberta-large-squad2
Multilingual XLM-RoBERTa large for Extractive QA on various languages

Overview
Language model: xlm-roberta-large
Language: Multilingual
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD dev set - German MLQA - German XQuAD
Training run: MLFlow link
Code: See an example extractive QA pipeline built with Haystack
Infrastructure: 4x Tesla V100

In Haystack
Haystack is an AI orchestration framework for building customizable, production-ready LLM applications. You can use this model in Haystack for extractive question answering on documents, including QA at scale (i.e., many documents instead of a single paragraph). For a complete example of an extractive question answering pipeline that scales over many documents, check out the corresponding Haystack tutorial.

Performance
Evaluated on the SQuAD 2.0 English dev set with the official eval script, and on German MLQA (test-context-de-question-de.json).

Authors
- Branden Chan: [email protected]
- Timo Möller: [email protected]
- Malte Pietsch: [email protected]
- Tanay Soni: [email protected]

deepset is the company behind the production-ready open-source AI framework Haystack. Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT, GermanQuAD and GermanDPR, German embedding model
- deepset Cloud, deepset Studio

For more info on Haystack, visit our GitHub repo and Documentation.
Twitter | LinkedIn | Discord | GitHub Discussions | Website | YouTube
bert-medium-squad2-distilled
bert-base-uncased-squad2
Overview
Language model: bert-base-uncased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example extractive QA pipeline built with Haystack
Infrastructure: 1x Tesla V100

In Haystack
Haystack is an AI orchestration framework for building customizable, production-ready LLM applications. You can use this model in Haystack for extractive question answering on documents. For a complete example of an extractive question answering pipeline that scales over many documents, check out the corresponding Haystack tutorial.

Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`

deepset is the company behind the production-ready open-source AI framework Haystack. Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT, GermanQuAD and GermanDPR, German embedding model
- deepset Cloud
- deepset Studio

For more info on Haystack, visit our GitHub repo and Documentation.
Twitter | LinkedIn | Discord | GitHub Discussions | Website | YouTube