bofenghuang
whisper-large-v3-french-distil-dec16
vigostral-7b-chat
vigogne-2-7b-instruct
vigogne-2-7b-chat
vigogne-2-13b-instruct
vigogne-33b-instruct
vigogne-13b-instruct
vigogne-7b-instruct
vigogne-7b-chat
vigogne-13b-chat
vigogne-2-13b-chat
vigogne-2-70b-chat
whisper-large-v3-french
asr-wav2vec2-ctc-french
Whisper Small Cv11 French
This model is a fine-tuned version of openai/whisper-small, trained on the mozilla-foundation/common_voice_11_0 French dataset. When using the model, make sure that your speech input is sampled at 16kHz. This model also predicts casing and punctuation.

Below are the WERs of the pre-trained models on Common Voice 9.0, Multilingual LibriSpeech, VoxPopuli, and Fleurs, as reported in the original paper.

| Model | Common Voice 9.0 | MLS | VoxPopuli | Fleurs |
| --- | :---: | :---: | :---: | :---: |
| openai/whisper-small | 22.7 | 16.2 | 15.7 | 15.0 |
| openai/whisper-medium | 16.0 | 8.9 | 12.2 | 8.7 |
| openai/whisper-large | 14.7 | 8.9 | 11.0 | 7.7 |
| openai/whisper-large-v2 | 13.9 | 7.3 | 11.4 | 8.3 |

Below are the WERs of the fine-tuned models on Common Voice 11.0, Multilingual LibriSpeech, VoxPopuli, and Fleurs. Note that these evaluation datasets have been filtered and preprocessed to contain only French alphabet characters, with punctuation removed except for apostrophes. Results are reported as `WER (greedy search) / WER (beam search with beam width 5)`.

| Model | Common Voice 11.0 | MLS | VoxPopuli | Fleurs |
| --- | :---: | :---: | :---: | :---: |
| bofenghuang/whisper-small-cv11-french | 11.76 / 10.99 | 9.65 / 8.91 | 14.45 / 13.66 | 10.76 / 9.83 |
| bofenghuang/whisper-medium-cv11-french | 9.03 / 8.54 | 6.34 / 5.86 | 11.64 / 11.35 | 7.13 / 6.85 |
| bofenghuang/whisper-medium-french | 9.03 / 8.73 | 4.60 / 4.44 | 9.53 / 9.46 | 6.33 / 5.94 |
| bofenghuang/whisper-large-v2-cv11-french | 8.05 / 7.67 | 5.56 / 5.28 | 11.50 / 10.69 | 5.42 / 5.05 |
| bofenghuang/whisper-large-v2-french | 8.15 / 7.83 | 4.20 / 4.03 | 9.10 / 8.66 | 5.22 / 4.98 |
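A minimal inference sketch, assuming the standard transformers `pipeline` API and a hypothetical local 16 kHz `audio.wav` file (neither is part of the model card):

```python
# Minimal inference sketch (not from the model card): assumes a recent
# transformers release and a local "audio.wav" file; the pipeline handles
# resampling to the 16 kHz expected by Whisper.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="bofenghuang/whisper-small-cv11-french",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# Force French transcription (rather than translation) in the decoder prompt.
result = asr(
    "audio.wav",
    generate_kwargs={"language": "french", "task": "transcribe"},
)
print(result["text"])  # cased and punctuated output, per the model card
```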
whisper-large-v3-distil-it-v0.2
Whisper Medium French
This model is a fine-tuned version of openai/whisper-medium, trained on a composite dataset comprising over 2,200 hours of French speech audio, assembled from the train and validation splits of Common Voice 11.0, Multilingual LibriSpeech, VoxPopuli, Fleurs, Multilingual TEDx, MediaSpeech, and African Accented French. When using the model, make sure that your speech input is sampled at 16kHz. This model does not predict casing or punctuation.

Below are the WERs of the pre-trained models on Common Voice 9.0, Multilingual LibriSpeech, VoxPopuli, and Fleurs, as reported in the original paper.

| Model | Common Voice 9.0 | MLS | VoxPopuli | Fleurs |
| --- | :---: | :---: | :---: | :---: |
| openai/whisper-small | 22.7 | 16.2 | 15.7 | 15.0 |
| openai/whisper-medium | 16.0 | 8.9 | 12.2 | 8.7 |
| openai/whisper-large | 14.7 | 8.9 | 11.0 | 7.7 |
| openai/whisper-large-v2 | 13.9 | 7.3 | 11.4 | 8.3 |

Below are the WERs of the fine-tuned models on Common Voice 11.0, Multilingual LibriSpeech, VoxPopuli, and Fleurs. Note that these evaluation datasets have been filtered and preprocessed to contain only French alphabet characters, with punctuation removed except for apostrophes. Results are reported as `WER (greedy search) / WER (beam search with beam width 5)`.

| Model | Common Voice 11.0 | MLS | VoxPopuli | Fleurs |
| --- | :---: | :---: | :---: | :---: |
| bofenghuang/whisper-small-cv11-french | 11.76 / 10.99 | 9.65 / 8.91 | 14.45 / 13.66 | 10.76 / 9.83 |
| bofenghuang/whisper-medium-cv11-french | 9.03 / 8.54 | 6.34 / 5.86 | 11.64 / 11.35 | 7.13 / 6.85 |
| bofenghuang/whisper-medium-french | 9.03 / 8.73 | 4.60 / 4.44 | 9.53 / 9.46 | 6.33 / 5.94 |
| bofenghuang/whisper-large-v2-cv11-french | 8.05 / 7.67 | 5.56 / 5.28 | 11.50 / 10.69 | 5.42 / 5.05 |
| bofenghuang/whisper-large-v2-french | 8.15 / 7.83 | 4.20 / 4.03 | 9.10 / 8.66 | 5.22 / 4.98 |
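Since the tables report both greedy and beam-search (width 5) decoding, here is a rough sketch of producing both with `model.generate`; the audio file name and the use of librosa for loading are assumptions, not part of the model card:

```python
# Sketch of greedy vs. beam-search (beam width 5) decoding, mirroring the two
# numbers reported per cell above. "audio.wav" is a hypothetical local file.
import torch
import librosa  # assumption: used here only to load and resample the audio
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "bofenghuang/whisper-medium-french"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Resample to the 16 kHz expected by Whisper.
speech, _ = librosa.load("audio.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
forced_ids = processor.get_decoder_prompt_ids(language="fr", task="transcribe")

with torch.no_grad():
    greedy = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
    beam = model.generate(inputs.input_features, forced_decoder_ids=forced_ids, num_beams=5)

print("greedy:", processor.batch_decode(greedy, skip_special_tokens=True)[0])
print("beam-5:", processor.batch_decode(beam, skip_special_tokens=True)[0])
```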
phonemizer-wav2vec2-ctc-french
whisper-medium-cv11-german
asr-wav2vec2-xls-r-1b-ctc-french
whisper-large-v2-french
Whisper Medium Cv11 French
This model is a fine-tuned version of openai/whisper-medium, trained on the mozilla-foundation/common_voice_11_0 French dataset. When using the model, make sure that your speech input is sampled at 16kHz. This model also predicts casing and punctuation.

Below are the WERs of the pre-trained models on Common Voice 9.0, Multilingual LibriSpeech, VoxPopuli, and Fleurs, as reported in the original paper.

| Model | Common Voice 9.0 | MLS | VoxPopuli | Fleurs |
| --- | :---: | :---: | :---: | :---: |
| openai/whisper-small | 22.7 | 16.2 | 15.7 | 15.0 |
| openai/whisper-medium | 16.0 | 8.9 | 12.2 | 8.7 |
| openai/whisper-large | 14.7 | 8.9 | 11.0 | 7.7 |
| openai/whisper-large-v2 | 13.9 | 7.3 | 11.4 | 8.3 |

Below are the WERs of the fine-tuned models on Common Voice 11.0, Multilingual LibriSpeech, VoxPopuli, and Fleurs. Note that these evaluation datasets have been filtered and preprocessed to contain only French alphabet characters, with punctuation removed except for apostrophes. Results are reported as `WER (greedy search) / WER (beam search with beam width 5)`.

| Model | Common Voice 11.0 | MLS | VoxPopuli | Fleurs |
| --- | :---: | :---: | :---: | :---: |
| bofenghuang/whisper-small-cv11-french | 11.76 / 10.99 | 9.65 / 8.91 | 14.45 / 13.66 | 10.76 / 9.83 |
| bofenghuang/whisper-medium-cv11-french | 9.03 / 8.54 | 6.34 / 5.86 | 11.64 / 11.35 | 7.13 / 6.85 |
| bofenghuang/whisper-medium-french | 9.03 / 8.73 | 4.60 / 4.44 | 9.53 / 9.46 | 6.33 / 5.94 |
| bofenghuang/whisper-large-v2-cv11-french | 8.05 / 7.67 | 5.56 / 5.28 | 11.50 / 10.69 | 5.42 / 5.05 |
| bofenghuang/whisper-large-v2-french | 8.15 / 7.83 | 4.20 / 4.03 | 9.10 / 8.66 | 5.22 / 4.98 |
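The evaluation protocol above (French-alphabet characters only, punctuation stripped except apostrophes) can be sketched roughly as follows; the normalization regex and the use of the `evaluate` library are assumptions, not the exact scripts behind the reported numbers:

```python
# Rough sketch of the WER protocol described above: lowercase, keep only French
# alphabet characters and apostrophes, then score with the `evaluate` WER metric.
# The regex is an assumption, not the author's exact normalization.
import re
import evaluate

wer_metric = evaluate.load("wer")

def normalize(text: str) -> str:
    text = text.lower()
    # Keep French letters (including accented characters), apostrophes, and spaces.
    text = re.sub(r"[^a-zàâäçéèêëîïôöùûüÿæœ' ]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

references = ["Bonjour, comment ça va ?"]   # illustrative example pair
predictions = ["bonjour comment sa va"]

wer = wer_metric.compute(
    references=[normalize(r) for r in references],
    predictions=[normalize(p) for p in predictions],
)
print(f"WER: {wer:.2%}")
```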
whisper-large-v3-distil-fr-v0.2
whisper-large-v2-cv11-german
whisper-large-v2-cv11-french-ct2
whisper-small-cv11-german
stt_fr_fastconformer_hybrid_large
whisper-large-v2-cv11-french
whisper-large-v2-cv11-german-ct2
whisper-large-v3-french-distil-dec8
vigogne-mpt-7b-instruct
whisper-large-v3-french-distil-dec2
whisper-large-v3-distil-multi4-v0.2
whisper-large-v3-french-distil-dec4
parakeet-tdt-0.6b-v3-hybrid
Extends nvidia/parakeet-tdt-0.6b-v3 from TDT to hybrid TDT-CTC:
- Keeps the encoder and TDT decoder; the CTC decoder is reinitialized with the same 8192-token vocabulary
- Can be used for pure CTC or hybrid CTC-RNNT fine-tuning

The sanity check below passed: the TDT decoder produces the same transcriptions as before, while the reinitialized CTC decoder produces gibberish, as expected.
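A hedged sketch of that sanity check, assuming the checkpoint loads through NeMo's hybrid RNNT-CTC model class and that `change_decoding_strategy` switches between the two decoders; the model identifier and audio file name are illustrative:

```python
# Sanity-check sketch (assumptions: NeMo installed, checkpoint resolvable by
# from_pretrained, and a local 16 kHz mono "sample_16khz.wav" file).
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(
    "bofenghuang/parakeet-tdt-0.6b-v3-hybrid"
)

audio = ["sample_16khz.wav"]

# TDT/RNNT branch: kept from the original model, so transcriptions should
# match nvidia/parakeet-tdt-0.6b-v3.
model.change_decoding_strategy(decoder_type="rnnt")
print(model.transcribe(audio))

# CTC branch: freshly reinitialized, so expect gibberish until fine-tuned.
model.change_decoding_strategy(decoder_type="ctc")
print(model.transcribe(audio))
```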