dragonkue

5 models

BGE-m3-ko

Card metadata — languages: ko, en · library: sentence-transformers · metrics: cosine accuracy@{1,3,5,10}, cosine precision@{1,3,5,10}, cosine recall@{1,3,5,10}, cosine NDCG@10, cosine MRR@10, cosine MAP@100, dot accuracy@{1,3,5,10}, dot precision@{1,3,5,10}
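These metric names match what sentence-transformers' InformationRetrievalEvaluator reports. A minimal sketch of how such numbers are produced; the query/corpus data here is purely illustrative, not the card's actual benchmark split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dragonkue/BGE-m3-ko")

# Toy data for illustration only; real scores come from a full benchmark split.
queries = {"q1": "What is the capital of South Korea?"}
corpus = {"d1": "The capital of South Korea is Seoul.",
          "d2": "Busan is a port city in South Korea."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
scores = evaluator(model)  # keys like cosine_accuracy@1, cosine_ndcg@10, ...
```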

license: apache-2.0 · 388,031 downloads · 63 likes

snowflake-arctic-embed-l-v2.0-ko

**SentenceTransformer based on Snowflake/snowflake-arctic-embed-l-v2.0**

This is a sentence-transformers model fine-tuned from Snowflake/snowflake-arctic-embed-l-v2.0 on clustered datasets. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity and semantic search. The base model has been further trained with Korean data to enhance its performance on Korean retrieval tasks, and it achieves state-of-the-art (SOTA) performance across multiple retrieval benchmarks.

**Model Description**

- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l-v2.0
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Datasets (AI Hub):
  - Machine reading comprehension of administrative documents (행정 문서 대상 기계 독해)
  - Machine reading comprehension (기계 독해)
  - News-article machine reading comprehension (뉴스 기사 기계독해)
  - Book-material machine reading comprehension (도서 자료 기계독해)
  - Numerical-reasoning machine reading comprehension (숫자 연산 기계독해)
  - Finance and legal document machine reading comprehension (금융 법률 문서 기계독해)
- Languages: Korean, English
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face

**Usage**

First install the Sentence Transformers and xformers libraries. You can also use the transformers package with Snowflake's arctic-embed models, as shown below; for optimal retrieval quality, use the CLS token to embed each text portion and apply the query prefix to queries only.
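Minimal sketches of both usage paths follow. Two assumptions are baked in and should be verified against the upstream Snowflake/snowflake-arctic-embed-l-v2.0 card: that the model exposes a "query" prompt for sentence-transformers (`prompt_name="query"`), and that the transformers path follows the standard arctic-embed recipe of CLS-token pooling with a `query: ` prefix.

```python
# Sentence Transformers path (prompt_name="query" is an assumption).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dragonkue/snowflake-arctic-embed-l-v2.0-ko")
queries = model.encode(["삼성전자가 판매하는 제품은?"], prompt_name="query")
docs = model.encode(["삼성전자는 스마트폰, 반도체, 가전제품 등을 판매한다."])
print(model.similarity(queries, docs))  # cosine similarity
```

```python
# Plain transformers path: CLS-token pooling, query prefix on queries only.
import torch
from transformers import AutoModel, AutoTokenizer

name = "dragonkue/snowflake-arctic-embed-l-v2.0-ko"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=8192, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    cls = out.last_hidden_state[:, 0]  # CLS-token embedding
    return torch.nn.functional.normalize(cls, dim=1)  # unit norm: dot == cosine

query_prefix = "query: "  # assumed arctic-embed prefix
q = embed([query_prefix + "삼성전자가 판매하는 제품은?"])
d = embed(["삼성전자는 스마트폰, 반도체, 가전제품 등을 판매한다."])
print(q @ d.T)
```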
**Evaluation**

- This evaluation references the KURE GitHub repository (https://github.com/nlpai-lab/KURE).
- We evaluated on all Korean retrieval benchmarks registered in MTEB.

**Korean Retrieval Benchmarks**

- Ko-StrategyQA: a Korean ODQA multi-hop retrieval dataset, translated from StrategyQA.
- AutoRAGRetrieval: a Korean document retrieval dataset constructed by parsing PDFs from five domains: finance, public, medical, legal, and commerce.
- MIRACLRetrieval: a Korean document retrieval dataset based on Wikipedia.
- PublicHealthQA: a retrieval dataset focused on medical and public-health domains in Korean.
- BelebeleRetrieval: a Korean document retrieval dataset based on FLORES-200.
- MrTidyRetrieval: a Wikipedia-based Korean document retrieval dataset.
- MultiLongDocRetrieval: a long-document retrieval dataset covering various domains in Korean.
- XPQARetrieval: a cross-domain Korean document retrieval dataset.

**Results**

The model achieves state-of-the-art (SOTA) performance across various benchmarks. For each benchmark, the highest score is highlighted in bold, and the second-highest is italicized.

| Model | Average | MrTidyRetrieval | MIRACLRetrieval | XPQARetrieval | BelebeleRetrieval | PublicHealthQA | AutoRAGRetrieval | Ko-StrategyQA |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| dragonkue/snowflake-arctic-embed-l-v2.0-ko | **0.740433** | 0.57121 | 0.66846 | **0.4436** | **0.95177** | 0.83374 | **0.90927** | *0.80498* |
| dragonkue/BGE-m3-ko | *0.729993* | 0.60992 | 0.68331 | 0.38131 | *0.95027* | 0.81545 | *0.87379* | 0.7959 |
| nlpai-lab/KURE-v1 | 0.727739 | 0.59092 | 0.68157 | 0.38158 | 0.95019 | 0.81925 | 0.87076 | 0.7999 |
| BAAI/bge-m3 | 0.724169 | **0.64708** | *0.70146* | 0.36075 | 0.93164 | 0.80412 | 0.83008 | 0.79405 |
| Snowflake/snowflake-arctic-embed-l-v2.0 | 0.724104 | 0.59071 | 0.66077 | *0.43018* | 0.9271 | 0.81679 | 0.83863 | 0.80455 |
| intfloat/multilingual-e5-large | 0.721607 | *0.64211* | 0.66486 | 0.3571 | 0.94499 | 0.82534 | 0.81337 | 0.80348 |
| nlpai-lab/KoE5 | 0.711356 | 0.58411 | 0.62347 | 0.35086 | 0.94251 | 0.83507 | 0.84339 | 0.80008 |
| BAAI/bge-multilingual-gemma2 | 0.704274 | 0.47521 | **0.70315** | 0.37446 | 0.95001 | *0.87102* | 0.76535 | 0.79072 |
| jinaai/jina-embeddings-v3 | 0.701314 | 0.55759 | 0.63716 | 0.41272 | 0.91203 | 0.83059 | 0.76104 | 0.79807 |
| SamilPwC-AXNode-GenAI/PwC-Embeddingexpr | 0.699483 | 0.56656 | 0.63214 | 0.36388 | 0.91669 | 0.83462 | 0.78493 | 0.79756 |
| intfloat/multilingual-e5-large-instruct | 0.69837 | 0.52877 | 0.59914 | 0.39712 | 0.936 | 0.84967 | 0.77996 | 0.79793 |
| nomic-ai/nomic-embed-text-v2-moe | 0.693773 | 0.53766 | 0.65913 | 0.36871 | 0.93636 | 0.78448 | 0.80682 | 0.76325 |
| intfloat/multilingual-e5-base | 0.689429 | 0.58082 | 0.6227 | 0.3607 | 0.92868 | 0.77203 | 0.79752 | 0.76355 |
| intfloat/e5-mistral-7b-instruct | 0.683734 | 0.52444 | 0.58709 | 0.39159 | 0.92403 | **0.88733** | 0.67849 | 0.79317 |
| Alibaba-NLP/gte-Qwen2-7B-instruct | 0.680323 | 0.46571 | 0.53375 | 0.37866 | 0.94808 | 0.85844 | 0.76682 | **0.8108** |
| Qwen/Qwen3-Embedding-0.6B | 0.676200 | 0.48987 | 0.60021 | 0.33440 | 0.91601 | 0.80290 | 0.82405 | 0.76596 |
| Alibaba-NLP/gte-multilingual-base | 0.663766 | 0.56464 | 0.62697 | 0.30702 | 0.8796 | 0.74584 | 0.77108 | 0.75121 |
| openai/text-embedding-3-large | 0.662239 | 0.44728 | 0.56248 | 0.37423 | 0.89451 | 0.85617 | 0.76466 | 0.73634 |
| upskyy/bge-m3-korean | 0.6567 | 0.55011 | 0.59892 | 0.31695 | 0.8731 | 0.77559 | 0.72946 | 0.75277 |
| Salesforce/SFR-Embedding-2R | 0.65591 | 0.40347 | 0.55798 | 0.37371 | 0.91747 | 0.8605 | 0.70782 | 0.77042 |
| ibm-granite/granite-embedding-278m-multilingual | 0.641935 | nan | 0.59216 | 0.23058 | 0.83231 | 0.77668 | 0.70226 | 0.71762 |
| jhgan/ko-sroberta-multitask | 0.526301 | 0.29475 | 0.36698 | 0.27961 | 0.81636 | 0.69212 | 0.58332 | 0.65097 |

**Capabilities Beyond Benchmarks**

This model is designed to handle various retrieval scenarios that are not directly measured in benchmarks:

1. Supports phrase-based queries in addition to full-sentence queries. Example: "What products does Samsung sell?" or "Samsung's products".
2. Trained to handle diverse query formats, regardless of phrasing variations. Example: "Tell me about Samsung.", "I'm curious about Samsung.", "What is Samsung?"
3. Optimized for Markdown table search, allowing retrieval of answers embedded within tables when present in documents.

**Training (Clustered Batches)**

- Samples within the same batch are clustered together (a minimal sketch follows this list).
- Uses efficient embedding formation for clustering by truncating embeddings from the Snowflake/snowflake-arctic-embed-l-v2.0 model to 256 dimensions.
- The clustering approach is inspired by the findings in the following papers:
  - Embedding And Clustering Your Data Can Improve Contrastive Pretraining
  - Contextual Document Embeddings
- The Arctic-Embed 2.0 paper (Multilingual Retrieval Without Compromise) states: "While models like mE5, mGTE, and BGE-M3 excel on MIRACL, their performance on CLEF is notably weaker compared to ours and closed-source offerings, suggesting the potential of overfitting to MIRACL or its Wikipedia-based domain."
- In my own experience, Snowflake/snowflake-arctic-embed-l-v2.0 has consistently outperformed BGE-M3 across different domains, further validating this observation.
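The sketch below illustrates the cluster-based batching idea under stated assumptions: k-means over embeddings truncated to 256 dimensions, with each batch drawn from a single cluster. It is not the author's training code, and the helper name `clustered_batches` is hypothetical.

```python
# Hypothetical sketch of cluster-based batch formation for contrastive training.
import numpy as np
from sklearn.cluster import KMeans

def clustered_batches(embeddings: np.ndarray, batch_size: int, n_clusters: int):
    """Yield index batches such that every batch comes from a single cluster.

    Per the card, base-model embeddings are truncated to 256 dimensions
    to make clustering cheap before k-means assigns cluster labels.
    """
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(
        embeddings[:, :256])
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        np.random.shuffle(idx)
        for start in range(0, len(idx), batch_size):
            yield idx[start:start + batch_size]
```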
**Long-Document Caveat (MLDR)**

To prevent excessive GPU costs, the model was trained with a maximum sequence length of 1300 tokens, so its performance may degrade on long-document benchmarks such as MultiLongDocRetrieval (MLDR). The previous model, BGE-m3-ko, was trained with a 1024-token limit, which likewise constrained its MLDR performance. For snowflake-arctic-embed-l-v2.0-ko, if your documents exceed 1300 tokens (roughly 2,500 characters), consider the following models instead.

| Model | MultiLongDocRetrieval |
|:---|---:|
| Alibaba-NLP/gte-multilingual-base | 0.48402 |
| nlpai-lab/KURE-v1 | 0.47528 |
| dragonkue/snowflake-arctic-embed-l-v2.0-ko | 0.4459 |
| BAAI/bge-m3 | 0.43011 |
| Snowflake/snowflake-arctic-embed-l-v2.0 | 0.40401 |
| dragonkue/BGE-m3-ko | 0.40135 |
| openai/text-embedding-3-large | 0.31108 |
| BAAI/bge-multilingual-gemma2 | 0.31021 |
| nlpai-lab/KoE5 | 0.30869 |
| jinaai/jina-embeddings-v3 | 0.30512 |
| Alibaba-NLP/gte-Qwen2-7B-instruct | 0.30313 |
| intfloat/multilingual-e5-large-instruct | 0.27973 |
| nomic-ai/nomic-embed-text-v2-moe | 0.27135 |
| intfloat/e5-mistral-7b-instruct | 0.2583 |
| intfloat/multilingual-e5-large | 0.24596 |
| Salesforce/SFR-Embedding-2R | 0.24346 |
| intfloat/multilingual-e5-base | 0.23766 |
| upskyy/bge-m3-korean | 0.21968 |
| ibm-granite/granite-embedding-278m-multilingual | 0.20781 |
| jhgan/ko-sroberta-multitask | 0.20416 |

**Training Hyperparameters**

Non-default hyperparameters (the remaining training arguments follow the sentence-transformers defaults); these map onto sentence-transformers training arguments as sketched below:

- `eval_strategy`: steps
- `per_device_train_batch_size`: 20000
- `per_device_eval_batch_size`: 4096
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: warmup_stable_decay
- `lr_scheduler_kwargs`: {'num_decay_steps': 160}
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
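A sketch of how these values plug into sentence-transformers' training arguments; this is not the author's actual script, `output_dir` is hypothetical, and the model/dataset wiring is omitted.

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="snowflake-arctic-embed-l-v2.0-ko",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=20000,
    per_device_eval_batch_size=4096,
    learning_rate=2e-5,
    num_train_epochs=2,
    lr_scheduler_type="warmup_stable_decay",
    lr_scheduler_kwargs={"num_decay_steps": 160},
    warmup_ratio=0.05,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts per batch
)
```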
**Framework Versions**

- Python: 3.10.12
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0

**Citation**

Embedding And Clustering Your Data Can Improve Contrastive Pretraining

**License**

Arctic is licensed under Apache-2.0. The released models can be used for commercial purposes free of charge.

license: apache-2.0 · 46,279 downloads · 39 likes

bge-reranker-v2-m3-ko

license: apache-2.0 · 5,236 downloads · 15 likes

multilingual-e5-small-ko-v2

license: apache-2.0 · 3,029 downloads · 2 likes

multilingual-e5-small-ko

license: apache-2.0 · 2,208 downloads · 8 likes