cross-encoder

31 models

ms-marco-MiniLM-L6-v2

This model was trained on the MS MARCO Passage Ranking task. It can be used for information retrieval: given a query, score the query against all candidate passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing score order. See SBERT.net Retrieve & Re-rank for more details. The training code is available at SBERT.net Training MS Marco. Usage is straightforward once SentenceTransformers is installed; the pre-trained models can then be loaded directly.

Performance: the following table lists various pre-trained cross-encoders together with their performance on TREC Deep Learning 2019 and the MS MARCO Passage Reranking dev set.

| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- | :------------- | ----- | --- |
| **Version 2 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-marginmse-T2-msmarco | 72.82 | 37.88 | 720 |

5,878,465 downloads · 161 likes

ms-marco-TinyBERT-L2-v2

---
license: apache-2.0
datasets:
  - sentence-transformers/msmarco
language:
  - en
base_model:
  - nreimers/BERT-Tiny_L-2_H-128_A-2
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
  - transformers
---

license:apache-2.0
1,690,869 downloads · 34 likes

ms-marco-MiniLM-L2-v2

---
license: apache-2.0
datasets:
  - sentence-transformers/msmarco
language:
  - en
base_model:
  - cross-encoder/ms-marco-MiniLM-L12-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
  - transformers
---

license:apache-2.0
1,112,571 downloads · 12 likes

ms-marco-MiniLM-L4-v2

---
license: apache-2.0
datasets:
  - sentence-transformers/msmarco
language:
  - en
base_model:
  - cross-encoder/ms-marco-MiniLM-L12-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
  - transformers
---

license:apache-2.0
1,063,229 downloads · 14 likes

ms-marco-MiniLM-L12-v2

---
license: apache-2.0
datasets:
  - sentence-transformers/msmarco
language:
  - en
base_model:
  - microsoft/MiniLM-L12-H384-uncased
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
  - transformers
---

license:apache-2.0
698,158 downloads · 84 likes

mmarco-mMiniLMv2-L12-H384-v1

---
license: apache-2.0
language:
  - en
  - ar
  - zh
  - nl
  - fr
  - de
  - hi
  - in
  - it
  - ja
  - pt
  - ru
  - es
  - vi
  - multilingual
datasets:
  - unicamp-dl/mmarco
base_model:
  - nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
  - transformers
---

license:apache-2.0
295,399 downloads · 61 likes

stsb-roberta-base

---
license: apache-2.0
datasets:
  - sentence-transformers/stsb
language:
  - en
base_model:
  - FacebookAI/roberta-base
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
  - transformers
---

license:apache-2.0
253,023 downloads · 4 likes

qnli-distilroberta-base

license:apache-2.0
85,074 downloads · 0 likes

ms-marco-TinyBERT-L2


license:apache-2.0
60,220 downloads · 19 likes

stsb-distilroberta-base

license:apache-2.0
50,052 downloads · 6 likes

nli-MiniLM2-L6-H768

license:apache-2.0
48,744 downloads · 11 likes

msmarco-MiniLM-L12-en-de-v1

license:apache-2.0
37,972 downloads · 5 likes

stsb-roberta-large

license:apache-2.0
26,632 downloads · 14 likes

nli-deberta-v3-base

license:apache-2.0
25,013 downloads · 37 likes

nli-deberta-v3-xsmall

license:apache-2.0
18,270 downloads · 7 likes

msmarco-MiniLM-L6-en-de-v1

license:apache-2.0
17,092 downloads · 14 likes

qnli-electra-base

Cross-Encoder for QNLI (SQuAD). This model was trained using the SentenceTransformers Cross-Encoder class.

Training Data: given a question and a paragraph, can the question be answered by the paragraph? The model was trained on the GLUE QNLI dataset, which recasts the SQuAD dataset as an NLI task.

Performance: for performance results of this model, see [SBERT.net Pre-trained Cross-Encoders](https://www.sbert.net/docs/pretrained_cross-encoders.html).

Usage with Transformers AutoModel: the model can also be used directly with the Transformers library, without SentenceTransformers.
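A minimal sketch of direct use via the Transformers library, as the card mentions; the question and paragraph text here are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the cross-encoder as a plain sequence-classification model.
model_name = "cross-encoder/qnli-electra-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

questions = ["How many people live in Berlin?"]
paragraphs = ["Berlin had a population of 3,520,031 registered inhabitants."]

# Tokenize (question, paragraph) pairs together so the model sees both.
features = tokenizer(questions, paragraphs,
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    # One logit per pair; sigmoid maps it to a score in (0, 1),
    # interpreted as how likely the paragraph answers the question.
    scores = torch.sigmoid(model(**features).logits)
```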

license:apache-2.0
15,409 downloads · 4 likes

ms-marco-electra-base

license:apache-2.0
13,752 downloads · 7 likes

quora-distilroberta-base

license:apache-2.0
12,962 downloads · 1 like

nli-deberta-v3-small

license:apache-2.0
12,823 downloads · 12 likes

stsb-TinyBERT-L4

license:apache-2.0
11,735 downloads · 6 likes

quora-roberta-base

license:apache-2.0
10,225 downloads · 1 like

nli-roberta-base

license:apache-2.0
6,919 downloads · 14 likes

nli-deberta-v3-large

license:apache-2.0
6,534 downloads · 36 likes

nli-distilroberta-base

license:apache-2.0
5,914 downloads · 24 likes

ms-marco-TinyBERT-L6

license:apache-2.0
5,528 downloads · 2 likes

ms-marco-TinyBERT-L4

license:apache-2.0
720 downloads · 1 like

nli-deberta-base

license:apache-2.0
481 downloads · 17 likes

quora-roberta-large

license:apache-2.0
175 downloads · 4 likes

monoelectra-large

license:apache-2.0
117 downloads · 3 likes

monoelectra-base

license:apache-2.0
34 downloads · 7 likes