# preszzz/drone-audio-detection-05-17-trial-0
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on an unknown dataset.

It achieves the following results on the evaluation set:
- Loss: 0.0131
- Accuracy: 0.996
- Precision: 0.9987
- Recall: 0.9962
- F1: 0.9974

## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.1463891217797098e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1651375762118582
- num_epochs: 5
- mixed_precision_training: Native AMP

## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0441        | 1.0   | 63   | 0.0255          | 0.992    | 0.9987    | 0.9910 | 0.9949 |
| 0.0176        | 2.0   | 126  | 0.0160          | 0.996    | 1.0       | 0.9949 | 0.9974 |
| 0.0027        | 3.0   | 189  | 0.0128          | 0.9955   | 0.9987    | 0.9955 | 0.9971 |
| 0.0003        | 4.0   | 252  | 0.0132          | 0.9955   | 0.9981    | 0.9962 | 0.9971 |
| 0.0002        | 4.928 | 310  | 0.0131          | 0.996    | 0.9987    | 0.9962 | 0.9974 |

## Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
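Since the base checkpoint is an AST audio-classification model, inference should work through the standard `transformers` audio-classification pipeline. The sketch below is a minimal example, not part of the original card: it assumes the fine-tuned weights are published on the Hugging Face Hub under `preszzz/drone-audio-detection-05-17-trial-0`, and `"drone_sample.wav"` is a placeholder path for any audio file you want to classify.

```python
from transformers import pipeline

# Minimal usage sketch (assumes the model id below resolves on the Hub
# and an audio decoding backend such as soundfile/librosa is installed).
classifier = pipeline(
    "audio-classification",
    model="preszzz/drone-audio-detection-05-17-trial-0",
)

# "drone_sample.wav" is a hypothetical input file; the pipeline resamples
# the audio to the feature extractor's expected sampling rate.
predictions = classifier("drone_sample.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.4f}")
```

Each entry in `predictions` is a dict with `label` and `score` keys, sorted by descending score, so the first entry is the model's top prediction.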