# Whisper Medium En To Myst Pf
This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the None dataset.
It achieves the following results on the evaluation set:

- Loss: 0.4522
- Wer: 10.7946

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2652        | 0.12  | 500  | 0.3042          | 11.2277 |
| 0.2102        | 1.11  | 1000 | 0.2824          | 10.9156 |
| 0.1913        | 2.1   | 1500 | 0.2924          | 11.2366 |
| 0.0249        | 3.09  | 2000 | 0.3386          | 10.6246 |
| 0.031         | 4.07  | 2500 | 0.3798          | 11.1400 |
| 0.0224        | 5.06  | 3000 | 0.4086          | 10.9767 |
| 0.0033        | 6.05  | 3500 | 0.4452          | 10.3392 |
| 0.0028        | 7.03  | 4000 | 0.4522          | 10.7946 |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
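
### How to use (sketch)

A minimal inference sketch with the `transformers` ASR pipeline. The repo id `rishabhjain16/whisper_medium_en_to_myst_pf` is inferred from the card title and is an assumption, not something stated in the card.

```python
from transformers import pipeline

# Minimal inference sketch using the transformers ASR pipeline.
# The repo id below is inferred from the card title and is an assumption.
asr = pipeline(
    "automatic-speech-recognition",
    model="rishabhjain16/whisper_medium_en_to_myst_pf",  # assumed repo id
)

# Any audio file readable by ffmpeg works; the pipeline resamples it to 16 kHz.
print(asr("sample.wav")["text"])
```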
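
### Reproducing the training configuration (sketch)

The hyperparameters listed above map directly onto `Seq2SeqTrainingArguments`. Below is a hedged sketch of that mapping, not the exact training script: the `output_dir` and the 500-step evaluation/save cadence are assumptions (the cadence is inferred from the results table), and the Adam betas/epsilon are left at the library defaults, which match the values listed above.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the hyperparameters reported in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-en-myst",  # placeholder, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="steps",
    eval_steps=500,               # inferred from the 500-step rows in the results table
    save_steps=500,
    predict_with_generate=True,   # needed to compute WER from generated transcripts
)
```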
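
### Computing WER (sketch)

The Wer column is a word error rate reported in percent. The card does not say which implementation was used; the snippet below is an illustrative sketch using the `evaluate` library with made-up strings.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative strings only; a real evaluation would use decoded model output
# and the reference transcripts from the evaluation set.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # reported in percent, as in the table above
```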