cross-encoder
ms-marco-MiniLM-L6-v2
This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order. See SBERT.net Retrieve & Re-rank for more details. The training code is available here: SBERT.net Training MS Marco

Usage is easy when you have SentenceTransformers installed; you can then use the pre-trained models like this:

Performance

In the following table, we provide various pre-trained cross-encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset.

| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- | :------------- | ----- | --- |
| Version 2 models | | | |
| cross-encoder/ms-marco-TinyBERT-L2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L12-v2 | 74.31 | 39.02 | 960 |
| Version 1 models | | | |
| cross-encoder/ms-marco-TinyBERT-L2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| Other models | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-marginmse-T2-msmarco | 72.82 | 37.88 | 720 |
ms-marco-TinyBERT-L2-v2
---
license: apache-2.0
datasets:
- sentence-transformers/msmarco
language:
- en
base_model:
- nreimers/BERT-Tiny_L-2_H-128_A-2
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
- transformers
---
ms-marco-MiniLM-L2-v2
---
license: apache-2.0
datasets:
- sentence-transformers/msmarco
language:
- en
base_model:
- cross-encoder/ms-marco-MiniLM-L12-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
- transformers
---
ms-marco-MiniLM-L4-v2
---
license: apache-2.0
datasets:
- sentence-transformers/msmarco
language:
- en
base_model:
- cross-encoder/ms-marco-MiniLM-L12-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
- transformers
---
ms-marco-MiniLM-L12-v2
---
license: apache-2.0
datasets:
- sentence-transformers/msmarco
language:
- en
base_model:
- microsoft/MiniLM-L12-H384-uncased
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
- transformers
---
mmarco-mMiniLMv2-L12-H384-v1
---
license: apache-2.0
language:
- en
- ar
- zh
- nl
- fr
- de
- hi
- in
- it
- ja
- pt
- ru
- es
- vi
- multilingual
datasets:
- unicamp-dl/mmarco
base_model:
- nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
- transformers
---
stsb-roberta-base
---
license: apache-2.0
datasets:
- sentence-transformers/stsb
language:
- en
base_model:
- FacebookAI/roberta-base
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
- transformers
---
qnli-distilroberta-base
ms-marco-TinyBERT-L2
This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order. See SBERT.net Retrieve & Re-rank for more details. The training code is available here: SBERT.net Training MS Marco

Usage is easy when you have SentenceTransformers installed; you can then use the pre-trained models like this:

Performance

In the following table, we provide various pre-trained cross-encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset.

| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- | :------------- | ----- | --- |
| Version 2 models | | | |
| cross-encoder/ms-marco-TinyBERT-L2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L12-v2 | 74.31 | 39.02 | 960 |
| Version 1 models | | | |
| cross-encoder/ms-marco-TinyBERT-L2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| Other models | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-marginmse-T2-msmarco | 72.82 | 37.88 | 720 |
stsb-distilroberta-base
nli-MiniLM2-L6-H768
msmarco-MiniLM-L12-en-de-v1
stsb-roberta-large
nli-deberta-v3-base
nli-deberta-v3-xsmall
msmarco-MiniLM-L6-en-de-v1
qnli-electra-base
Cross-Encoder for SQuAD (QNLI)

This model was trained using the SentenceTransformers Cross-Encoder class.

Training Data

Given a question and a paragraph, can the question be answered by the paragraph? The model was trained on the GLUE QNLI dataset, which transformed the SQuAD dataset into an NLI task.

Performance

For performance results of this model, see [SBERT.net Pre-trained Cross-Encoders](https://www.sbert.net/docs/pretrainedcross-encoders.html).

Usage with Transformers AutoModel

You can also use the model directly with the Transformers library (without the SentenceTransformers library):
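A minimal sketch of that direct usage, assuming the cross-encoder/qnli-electra-base checkpoint with a single-logit classification head, so that a sigmoid yields an answerability score (the question and paragraph are illustrative placeholders):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cross-encoder/qnli-electra-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize the (question, paragraph) pair as a single cross-encoder input
features = tokenizer(
    ["Who wrote Hamlet?"],
    ["Hamlet is a tragedy written by William Shakespeare."],
    padding=True, truncation=True, return_tensors="pt",
)

model.eval()
with torch.no_grad():
    # Sigmoid over the single logit gives a score in (0, 1):
    # higher means the paragraph is more likely to answer the question
    scores = torch.sigmoid(model(**features).logits)
print(scores)
```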