KORMo-Team
# KORMo-10B-sft

## Update News
- 2025-10-13: Official release of KORMo-10B-sft.

## About KORMo
KORMo-10B is a 10.8B parameter fully open LLM capable of handling both Korean and English. The model, training code, and training data are all fully open, allowing anyone to reproduce and extend them.
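A minimal loading-and-generation sketch for the SFT model, assuming the standard Hugging Face `transformers` chat API. The repository id `KORMo-Team/KORMo-10B-sft`, the prompt, and the generation settings are illustrative assumptions; check the files on the Hub before relying on them.

```python
# Hypothetical quickstart for KORMo-10B-sft (repo id and settings assumed, not verified).
MODEL_ID = "KORMo-Team/KORMo-10B-sft"  # assumed Hub id

messages = [
    {"role": "user", "content": "Summarize the history of Hangul in two sentences."},
]

def generate(prompt_messages, max_new_tokens=256):
    """Download the model and run one chat turn (requires network access and a GPU)."""
    # Imported inside the function so the sketch stays importable
    # even where transformers is not installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the chat template and generate a completion for the last user turn.
    inputs = tokenizer.apply_chat_template(
        prompt_messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# generate(messages)  # uncomment to actually download the model and run it
```

The heavy call is left commented out because it downloads ~10B parameters; the surrounding structure is the point of the sketch.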
# KORMo-10B-base

## Update News
- 2025-10-13: Official release of KORMo-10B-base (note: this is a base model, not an SFT model).

## About KORMo
KORMo-10B is a 10.8B parameter fully open LLM capable of handling both Korean and English. The model, training code, and training data are all fully open, allowing anyone to reproduce and extend them.

- Model Size: 10.8B parameters
- Languages: Korean / English
- Training Data: Synthetic data + public datasets (approximately 3T tokens)
- License: Apache 2.0
- Technical Report: arXiv
- Hugging Face: Model Download
- GitHub Repository: Training and Inference Code
- Tutorial: Instruction Tuning on Google Colab / YouTube Tutorial

## Benchmarks

| Benchmark | KORMo-10B | smolLM3-3B | olmo2-7B | olmo2-13B | kanana1.5-8B | qwen3-8B | llama3.1-8B | gemma3-4B | gemma3-12B |
|:----------|----------:|-----------:|---------:|----------:|-------------:|---------:|------------:|----------:|-----------:|
| 🇺🇸 English Benchmarks | | | | | | | | | |
| arc_challenge | 58.96 | 55.55 | 59.13 | 61.01 | 56.48 | 63.82 | 54.61 | 53.58 | 63.82 |
| arc_easy | 85.48 | 83.21 | 85.06 | 86.57 | 82.74 | 87.50 | 84.01 | 82.83 | 87.37 |
| boolq | 83.46 | 82.17 | 84.50 | 86.48 | 84.53 | 87.71 | 81.87 | 80.70 | 86.61 |
| copa | 93.00 | 91.00 | 92.00 | 93.00 | 88.00 | 92.00 | 93.00 | 89.00 | 95.00 |
| gpqa_main | 30.13 | 26.79 | 26.34 | 29.24 | 29.24 | 30.13 | 23.44 | 30.13 | 35.71 |
| hellaswag | 60.25 | 56.78 | 61.52 | 65.02 | 59.93 | 59.54 | 60.96 | 57.56 | 63.67 |
| mmlu | 67.96 | 61.37 | 62.81 | 66.85 | 63.73 | 76.95 | 65.03 | 59.60 | 73.58 |
| mmlu_global | 63.44 | 57.52 | 59.88 | 63.99 | 60.21 | 75.05 | 61.30 | 57.23 | 70.23 |
| mmlu_pro | 40.18 | 34.94 | 27.29 | 32.50 | 34.93 | 56.58 | 36.23 | 27.79 | 37.07 |
| mmlu_redux | 69.00 | 62.95 | 63.53 | 68.37 | 65.88 | 78.19 | 65.86 | 60.86 | 75.25 |
| openbookqa | 39.00 | 36.40 | 39.00 | 39.60 | 36.80 | 39.20 | 39.00 | 37.00 | 40.20 |
| piqa | 81.12 | 78.45 | 80.79 | 82.64 | 80.30 | 79.05 | 80.90 | 79.49 | 82.59 |
| socialiqa | 52.81 | 50.72 | 55.89 | 57.57 | 57.01 | 56.96 | 53.12 | 51.84 | 56.45 |
| English Avg. | 63.45 | 59.83 | 61.36 | 64.06 | 61.52 | 67.90 | 61.49 | 59.05 | 66.73 |
| 🇰🇷 Korean Benchmarks | | | | | | | | | |
| click | 55.29 | 46.97 | 37.79 | 41.80 | 62.76 | 60.70 | 49.22 | 49.62 | 62.21 |
| csatqa | 38.00 | 26.67 | 19.33 | 24.67 | 44.67 | 52.00 | 28.67 | 28.67 | 31.33 |
| haerae | 68.29 | 55.82 | 31.62 | 37.58 | 80.75 | 67.19 | 53.25 | 60.68 | 74.34 |
| k2_eval | 84.89 | 75.23 | 49.54 | 63.43 | 84.72 | 84.72 | 76.62 | 76.39 | 85.42 |
| kobest | 75.05 | 69.13 | 57.27 | 59.02 | 81.93 | 80.05 | 70.55 | 69.33 | 77.70 |
| kobalt | 22.86 | 15.86 | 11.43 | 13.14 | 26.29 | 26.57 | 17.43 | 15.57 | 23.86 |
| kmmlu | 46.48 | 38.52 | 33.05 | 31.24 | 48.86 | 56.93 | 40.75 | 39.84 | 51.60 |
| mmlu_global (ko) | 55.16 | 44.15 | 34.00 | 36.95 | 52.65 | 61.95 | 46.34 | 46.33 | 59.68 |
| krclinicalqa | 77.32 | 53.97 | 48.33 | 46.22 | 65.84 | 80.00 | 63.54 | 60.00 | 77.22 |
| Korean Avg. | 58.15 | 47.37 | 35.82 | 39.34 | 60.94 | 63.35 | 49.60 | 49.60 | 60.37 |

| Benchmark | KORMo-10B | smolLM3-3B | olmo2-7B | olmo2-13B | kanana1.5-8B | qwen3-8B | llama3.1-8B | exaone3.5-8B | gemma3-12B |
|:----------|----------:|-----------:|---------:|----------:|-------------:|---------:|------------:|-------------:|-----------:|
| MT-Bench (EN) | 8.32 | 7.15 | 7.32 | 7.64 | 8.45 | 8.70 | 6.32 | 8.15 | 8.70 |
| KO-MT-Bench (KO) | 8.54 | - | - | - | 8.02 | 8.16 | 4.27 | 8.13 | 8.51 |
| LogicKor (KO) | 8.96 | - | - | - | 8.94 | 8.63 | 6.45 | 9.20 | 8.46 |
| Average | 8.61 | - | - | - | 8.47 | 8.50 | 5.68 | 8.49 | 8.56 |

If you want to enable the thinking mode, simply set `enable_thinking=True`.

## Contact
- KyungTae Lim, Professor at KAIST.
`[email protected]` Acknowledgments - This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (RS-2025-02653113, High-Performance Research AI Computing Infrastructure Support at the 2 PFLOPS Scale)