jhgan

6 models

ko-sroberta-multitask

---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language: ko
---
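The tags above mark this model for sentence similarity via sentence-transformers. A minimal usage sketch, assuming the standard sentence-transformers API; the example sentences are placeholders, not from the card:

```python
# Minimal usage sketch for jhgan/ko-sroberta-multitask via sentence-transformers.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jhgan/ko-sroberta-multitask")

# Placeholder Korean sentences; any input text works the same way.
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 모델입니다."]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 768): one dense vector per sentence in this series
```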

307,730 downloads • 131 likes

ko-sbert-nli

This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Usage is easiest with sentence-transformers installed. Without sentence-transformers, you can use the model through HuggingFace Transformers: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings (see the sketch below).

Evaluation results:
- Cosine Pearson: 82.24
- Cosine Spearman: 83.16
- Euclidean Pearson: 82.19
- Euclidean Spearman: 82.31
- Manhattan Pearson: 82.18
- Manhattan Spearman: 82.30
- Dot Pearson: 79.30
- Dot Spearman: 78.78

Training: the model was trained with a `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 and `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss`.

References:
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New benchmark datasets for Korean natural language understanding. arXiv preprint arXiv:2004.03289.
- Reimers, N., & Gurevych, I. (2019). Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv abs/1908.10084.
- Reimers, N., & Gurevych, I. (2020). Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation. EMNLP.
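For the "without sentence-transformers" route, the card's description (a transformer forward pass followed by pooling over the contextualized word embeddings) corresponds to mean pooling. A hedged sketch, assuming mean pooling is the right operation for this model:

```python
# Sketch: encoding with plain HuggingFace Transformers plus mean pooling.
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    """Average the token embeddings, ignoring padding positions."""
    token_embeddings = model_output[0]  # first element holds the token embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, dim=1) / torch.clamp(mask.sum(dim=1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("jhgan/ko-sbert-nli")
model = AutoModel.from_pretrained("jhgan/ko-sbert-nli")

# Placeholder input sentence.
encoded = tokenizer(["이것은 예시 문장입니다."], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)

sentence_embeddings = mean_pooling(output, encoded["attention_mask"])
print(sentence_embeddings.shape)  # torch.Size([1, 768])
```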

111,663 downloads • 28 likes

ko-sbert-sts

Like the other models in this series, this sentence-transformers model maps sentences and paragraphs to a 768-dimensional dense vector space for clustering or semantic search; it can be used either through sentence-transformers or directly through HuggingFace Transformers with the pooling step described above.

Evaluation results:
- Cosine Pearson: 81.55
- Cosine Spearman: 81.23
- Euclidean Pearson: 79.94
- Euclidean Spearman: 79.79
- Manhattan Pearson: 79.90
- Manhattan Spearman: 79.75
- Dot Pearson: 76.02
- Dot Spearman: 75.31

Training: the model was trained with a `torch.utils.data.dataloader.DataLoader` of length 719 and `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` (a training sketch follows below).

References: as for ko-sbert-nli above.
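The `DataLoader` plus `CosineSimilarityLoss` pairing above is supervised STS regression on scored sentence pairs. A minimal sketch of that setup with placeholder pairs; the batch size, epochs, and warmup steps are assumptions, not the card's configuration:

```python
# Sketch of STS-style fine-tuning with CosineSimilarityLoss (classic fit API).
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("jhgan/ko-sbert-sts")

# Placeholder pairs; labels are similarity scores scaled to [0, 1].
train_examples = [
    InputExample(texts=["한 남자가 음식을 먹는다.", "한 남자가 빵을 먹는다."], label=0.7),
    InputExample(texts=["한 남자가 말을 탄다.", "한 여자가 바이올린을 연주한다."], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```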

70,899 downloads • 12 likes

ko-sbert-multitask

Like the other models in this series, this sentence-transformers model maps sentences and paragraphs to a 768-dimensional dense vector space for clustering or semantic search, usable through sentence-transformers or directly through HuggingFace Transformers with the pooling step described above. The results below were obtained by multi-task training on the KorSTS and KorNLI training datasets, evaluated on the KorSTS evaluation dataset.

Evaluation results:
- Cosine Pearson: 84.13
- Cosine Spearman: 84.71
- Euclidean Pearson: 82.42
- Euclidean Spearman: 82.66
- Manhattan Pearson: 81.41
- Manhattan Spearman: 81.69
- Dot Pearson: 80.05
- Dot Spearman: 79.69

Training: the model was trained with a `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 using `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss`, and a `torch.utils.data.dataloader.DataLoader` of length 719 using `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` (a multi-task sketch follows below).

References: as for ko-sbert-nli above.
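The two loader/loss pairs above describe multi-task training: an NLI objective with in-batch negatives plus an STS regression objective, alternated during fitting. A hedged sketch with placeholder data; the real KorNLI/KorSTS datasets and hyperparameters are not reproduced here:

```python
# Sketch of multi-task training: NLI triples + STS pairs in one fit() call.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("jhgan/ko-sbert-multitask")

# NLI triples: (anchor, entailment, contradiction); placeholders only.
nli_examples = [
    InputExample(texts=["남자가 밖에 있다.", "남자가 야외에 있다.", "남자가 실내에 있다."]),
    InputExample(texts=["아이가 공을 찬다.", "아이가 축구를 한다.", "아이가 잠을 잔다."]),
]
# STS pairs with similarity labels in [0, 1]; placeholders only.
sts_examples = [
    InputExample(texts=["한 남자가 음식을 먹는다.", "한 남자가 빵을 먹는다."], label=0.7),
    InputExample(texts=["한 남자가 말을 탄다.", "한 여자가 바이올린을 연주한다."], label=0.0),
]

# NoDuplicatesDataLoader keeps each batch free of repeated texts, which matters
# for the in-batch negatives used by MultipleNegativesRankingLoss.
nli_loader = NoDuplicatesDataLoader(nli_examples, batch_size=2)
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=2)

model.fit(
    train_objectives=[
        (nli_loader, losses.MultipleNegativesRankingLoss(model)),
        (sts_loader, losses.CosineSimilarityLoss(model)),
    ],
    epochs=1,
)
```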

34,620 downloads • 20 likes

ko-sroberta-nli

Like the other models in this series, this sentence-transformers model maps sentences and paragraphs to a 768-dimensional dense vector space for clustering or semantic search, usable through sentence-transformers or directly through HuggingFace Transformers with the pooling step described above.

Evaluation results:
- Cosine Pearson: 82.83
- Cosine Spearman: 83.85
- Euclidean Pearson: 82.87
- Euclidean Spearman: 83.29
- Manhattan Pearson: 82.88
- Manhattan Spearman: 83.28
- Dot Pearson: 80.34
- Dot Spearman: 79.69

Training: the model was trained with a `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 and `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` (a similarity-scoring sketch follows below).

References: as for ko-sbert-nli above.
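Since the evaluation reports cosine metrics, the natural inference-time use is scoring sentence pairs by the cosine similarity of their embeddings. A small sketch; the sentences are placeholders:

```python
# Sketch: cosine-similarity scoring with the encoded sentence vectors.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jhgan/ko-sroberta-nli")

emb1 = model.encode("한 남자가 음식을 먹는다.", convert_to_tensor=True)
emb2 = model.encode("한 남자가 빵을 먹고 있다.", convert_to_tensor=True)

score = util.cos_sim(emb1, emb2)  # cosine similarity in [-1, 1]
print(float(score))
```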

2,030 downloads • 9 likes

ko-sroberta-sts

347 downloads • 0 likes