emre
wav2vec2-xls-r-300m-Russian-small
Whisper Medium Turkish 2
This model is a fine-tuned version of openai/whisper-medium on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.211673
- WER: 18.51

This model adapts the OpenAI Whisper Medium transformer for Turkish audio-to-text transcription. It sets weight decay to 0.1 to mitigate overfitting, which also gave slightly better results on the evaluation set. The model is available through its Hugging Face web app. The training data is the initial 10% of the train and validation splits of Turkish Common Voice 11.0 from the Mozilla Foundation; the pre-trained model was loaded and then trained on this dataset.

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
- weight_decay: 0.1

Framework versions:
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
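The hyperparameters above imply a linear learning-rate schedule: warm up from 0 to 1e-05 over the first 500 steps, then decay linearly to 0 at step 4000. A minimal sketch of that schedule (an illustration of the linear-with-warmup shape, not the exact Transformers implementation):

```python
def linear_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=4000):
    """Learning rate at a given optimizer step for a linear schedule with warmup."""
    if step < warmup_steps:
        # Ramp up linearly from 0 to base_lr during warmup.
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr down to 0 over the remaining steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(0))     # 0.0 (start of warmup)
print(linear_lr(500))   # 1e-05 (peak, end of warmup)
print(linear_lr(4000))  # 0.0 (end of training)
```

The peak rate is reached exactly when warmup ends, so the model trains at or below 1e-05 for the whole 4000-step run.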
turkish-sentiment-analysis
spanish-dialoGPT
mybankconcept
llama-2-13b-code-122k
switch-base-8-finetuned-samsum
wav2vec2-large-xls-r-300m-tr
distilbert-base-uncased-finetuned-squad
wav2vec2-xls-r-300m-ab-CV8
wav2vec2-large-xlsr-53-W2V2-TATAR-SMALL
distilgpt2-pretrained-tr-10e
xglm-564M-turkish
gemma-3-12b-it-tr-reasoning40k
detr-resnet-50_finetuned_cppe5
gemma-3-27b-it-tr-reasoning40k-4bi
gemma-2-9b-Turkish-Lora-Continue-Pre-Trained
opus-mt-tr-en-finetuned-en-to-tr
speecht5_tts_tr
gemma-3-1b-it-tr-reasoning40k
- Developed by: emre
- License: apache-2.0
- Finetuned from model: unsloth/gemma-3-1b-it

This gemma3_text model was trained 2x faster with Unsloth and Hugging Face's TRL library.