This model is a fine-tuned version of BAAI/bge-small-en-v1.5 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1245
- Precision: 0.8438
- Recall: 0.8853
- F1: 0.8641
- Accuracy: 0.9744
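As a sanity check, the reported F1 should be the harmonic mean of the reported precision and recall, and indeed it is:

```python
# Verify that the reported F1 (0.8641) is the harmonic mean of the
# reported precision (0.8438) and recall (0.8853) on the eval set.
precision = 0.8438
recall = 0.8853
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.8641
```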
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
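A minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments`; the `output_dir` and the per-epoch evaluation strategy are assumptions (not stated in the log above), everything else mirrors the list:

```python
from transformers import TrainingArguments

# Sketch reproducing the listed hyperparameters. "output" and
# eval_strategy="epoch" are assumptions, not stated in the card.
training_args = TrainingArguments(
    output_dir="output",               # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",               # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    eval_strategy="epoch",             # assumed: table logs one eval per epoch
)
```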
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4584        | 1.0   | 625  | 0.2025          | 0.7226    | 0.7822 | 0.7512 | 0.9565   |
| 0.1742        | 2.0   | 1250 | 0.1383          | 0.8399    | 0.8790 | 0.8590 | 0.9728   |
| 0.1289        | 3.0   | 1875 | 0.1245          | 0.8438    | 0.8853 | 0.8641 | 0.9744   |
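The step counts in the table are consistent with the hyperparameters: 1875 steps over 3 epochs is 625 steps per epoch, which at a train batch size of 16 implies roughly 10,000 training examples (assuming no gradient accumulation, which the log does not mention):

```python
# Infer the approximate training-set size from the logged steps.
# Assumes no gradient accumulation (none listed in the hyperparameters).
total_steps = 1875
num_epochs = 3
train_batch_size = 16

steps_per_epoch = total_steps // num_epochs
approx_train_examples = steps_per_epoch * train_batch_size
print(steps_per_epoch, approx_train_examples)  # → 625 10000
```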
- Transformers 4.53.3
- Pytorch 2.6.0+cu124
- Datasets 4.1.1
- Tokenizers 0.21.2