csukuangfj
funasr-nano-with-ctc
streaming-paraformer-zh
sherpa-onnx-paraformer-zh-2023-09-14
sherpa-onnx-paraformer-zh-2024-03-09
vits-coqui-uk-mai
paraformer-onnxruntime-python-example
vits-hf-zh-jp-zomehwh
sherpa-onnx-streaming-paraformer-trilingual-zh-cantonese-en
icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13
vits-coqui-en-vctk
vits-cantonese-hf-xiaomaiiwn
vits-coqui-pl-mai_female
vits-coqui-sv-cv
SenseVoiceSmall
## Highlights

SenseVoice focuses on high-accuracy multilingual speech recognition, speech emotion recognition, and audio event detection.

- Multilingual recognition: trained on over 400,000 hours of data, it supports more than 50 languages and outperforms the Whisper models in recognition accuracy.
- Rich transcription:
  - Excellent emotion recognition, matching or exceeding the current best emotion-recognition models on test data.
  - Audio event detection for many common human-computer interaction events, such as music, applause, laughter, crying, coughing, and sneezing.
- Efficient inference: the SenseVoice-Small model uses a non-autoregressive end-to-end framework with very low inference latency; inference on 10 s of audio takes only 70 ms, 15x faster than Whisper-Large.
- Convenient finetuning: ready-made finetuning scripts and strategies make it easy to fix long-tail problems for specific business scenarios.
- Service deployment: a complete deployment pipeline supporting concurrent requests, with client languages including Python, C++, HTML, Java, and C#.

## About the SenseVoice open-source project

The open-source SenseVoice model is a multilingual audio understanding model covering speech recognition, language identification, speech emotion recognition, and acoustic event detection.

## Model architecture

SenseVoice supports speech recognition, language identification, speech emotion recognition, acoustic event detection, and inverse text normalization. It is trained on hundreds of thousands of hours of industrial-grade labeled audio, which ensures strong general-purpose recognition. The model can be applied to Chinese, Cantonese, English, Japanese, and Korean audio, and outputs rich transcriptions annotated with emotions and events.

SenseVoice-Small is a non-autoregressive end-to-end model. To specify the task, four embeddings are prepended to the speech features as input to the encoder:

- LID: predicts the language label of the audio.
- SER: predicts the emotion label of the audio.
- AED: predicts the event labels contained in the audio.
- ITN: specifies whether inverse text normalization is applied to the recognized text.

## Parameters

- `model_dir`: the model name, or the path to the model on local disk.
- `trust_remote_code`:
  - `True`: the model implementation is loaded from `remote_code`, which specifies the location of the model code (for example, `model.py` in the current directory); absolute and relative paths as well as network URLs are supported.
  - `False`: the model implementation is the version integrated into FunASR. In this case, editing `model.py` in the current directory has no effect, because the internal FunASR version is loaded; see the FunASR repository for that model code.
- `vad_model`: enables VAD, which splits long audio into short clips. The reported inference time then covers both VAD and SenseVoice, i.e., the whole pipeline; to time the SenseVoice model alone, disable the VAD model.
- `vad_kwargs`: configuration of the VAD model; `max_single_segment_time` is the maximum duration of an audio segment cut by `vad_model`, in milliseconds (ms).
- `use_itn`: whether the output includes punctuation and inverse text normalization.
- `batch_size_s`: dynamic batching; the total audio duration in one batch, in seconds (s).
- `merge_vad`: whether to merge the short audio fragments cut by the VAD model; the merged length is `merge_length_s`, in seconds (s).
- `ban_emo_unk`: disables the `emo_unk` label, so that every sentence is assigned an emotion label. Defaults to `False`.

## Model download

The code above downloads the model automatically. If you prefer to download the model offline, you can download it manually and then pass its local path.

## Speech recognition performance

We compared the multilingual recognition accuracy and inference efficiency of SenseVoice and Whisper on open-source benchmark datasets, including AISHELL-1, AISHELL-2, WenetSpeech, LibriSpeech, and Common Voice. On Chinese and Cantonese, the SenseVoice-Small model has a clear advantage.

## Emotion recognition performance

Since there is currently no widely adopted benchmark or methodology for emotion recognition, we evaluated multiple metrics on several test sets and compared extensively with recent benchmark results. The selected test sets cover both Chinese and English and a variety of styles, including acted speech, film and TV drama, and natural conversation. Without finetuning on the target data, SenseVoice matches or exceeds the current best emotion-recognition models on the test data.

We also compared several open-source emotion-recognition models on these test sets. The results show that SenseVoice-Large achieves the best results on almost all datasets, and SenseVoice-Small also surpasses the other open-source models on the majority of datasets.

## Audio event detection performance

Although SenseVoice is trained only on speech data, it can still be used as a standalone event detection model. We compared it on the ESC-50 environmental sound classification dataset against BEATs and PANNs, the models most widely used in the field. SenseVoice achieves good results on these tasks, but, limited by its training data and training method, its event classification still lags behind dedicated event-detection models.

## Inference efficiency

The SenseVoice-Small model uses a non-autoregressive end-to-end architecture with extremely low inference latency. With a parameter count comparable to Whisper-Small, it is 7x faster than Whisper-Small and 17x faster than Whisper-Large. Moreover, its inference time does not increase noticeably as the audio gets longer.
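The parameters described above map onto FunASR's `AutoModel` interface. The sketch below shows how they might fit together; the model names, the example file name, and the split between constructor and `generate()` arguments follow my reading of the FunASR/SenseVoice documentation and should be checked against the current FunASR README. The actual model call is left commented out so the snippet stays self-contained.

```python
# Hypothetical wiring of the documented SenseVoice options for FunASR's
# AutoModel; the values here are illustrative, not requirements.
load_kwargs = {
    "model": "iic/SenseVoiceSmall",   # model name or local model directory
    "trust_remote_code": True,        # load the model code from remote_code
    "remote_code": "./model.py",      # path or URL of the model implementation
    "vad_model": "fsmn-vad",          # split long audio into short clips
    "vad_kwargs": {"max_single_segment_time": 30000},  # max clip length, ms
    "ban_emo_unk": False,             # True forces an emotion label on every sentence
}

generate_kwargs = {
    "language": "auto",    # or e.g. "zh", "yue", "en", "ja", "ko"
    "use_itn": True,       # punctuation + inverse text normalization in output
    "batch_size_s": 60,    # dynamic batching: total audio seconds per batch
    "merge_vad": True,     # merge short VAD fragments ...
    "merge_length_s": 15,  # ... up to this many seconds
}

# With FunASR installed (pip install funasr), the call would look like:
# from funasr import AutoModel
# model = AutoModel(**load_kwargs, device="cuda:0")
# result = model.generate(input="example.wav", cache={}, **generate_kwargs)
# print(result[0]["text"])
```

Keeping the options in plain dictionaries like this also makes it easy to swap configurations, e.g. dropping `vad_model` to time the SenseVoice model alone.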
vits-mms-eng
sherpa-onnx-paraformer-zh-small-2024-03-09
sherpa-onnx-punct-ct-transformer-zh-en-vocab272727-2024-04-12
vits-coqui-en-ljspeech
vits-coqui-cs-cv
vits-coqui-da-cv
vits-coqui-et-cv
vits-coqui-es-css10
vits-coqui-hu-css10
vits-coqui-hr-cv
vits-coqui-lt-cv
vits-coqui-mt-cv
vits-coqui-pt-cv
vits-coqui-ro-cv
vits-coqui-sk-cv
vits-coqui-bn-custom_female
vits-mms-deu
vits-mms-fra
vits-mms-rus
vits-mms-nan
sherpa-onnx-paraformer-en-2024-03-09
ncnn-vits-piper-en_GB-alba-medium-fp16
ncnn-vits-piper-en_GB-cori-high-fp16
ncnn-vits-piper-en_GB-northern_english_male-medium-fp16
ncnn-vits-piper-en_GB-vctk-medium-fp16
ncnn-vits-piper-en_GB-vctk-medium
ncnn-vits-piper-en_GB-miro-high-fp16
A fast and local neural text-to-speech engine that embeds [espeak-ng][] for phonemization.

🎧 [Samples][samples] 💡 [Demo][demo] 🗣️ [Voices][voices] 🖥️ [Command-line interface][cli] 🌐 [Web server][api-http] 🐍 [Python API][api-python] 🔧 [C/C++ API][libpiper] 🏋️ [Training new voices][training] 🛠️ [Building manually][building]

Projects using Piper:

- Home Assistant
- NVDA - NonVisual Desktop Access
- Image Captioning for the Visually Impaired and Blind: A Recipe for Low-Resource Languages
- Video tutorial by Thorsten Müller
- Open Voice Operating System
- JetsonGPT
- LocalAI
- Lernstick EDU / EXAM: reading clipboard content aloud with language detection
- Natural Speech - A plugin for RuneLite, an OSRS client
- mintPiper
- Vim-Piper
- POTaTOS
- Narration Studio
- Basic TTS - Simple online text-to-speech converter

[espeak-ng]: https://github.com/espeak-ng/espeak-ng
[cli]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/CLI.md
[api-http]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/API_HTTP.md
[api-python]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/API_PYTHON.md
[training]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/TRAINING.md
[building]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/BUILDING.md
[voices]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/VOICES.md
[samples]: https://rhasspy.github.io/piper-samples
[demo]: https://rhasspy.github.io/piper-samples/demo.html
[libpiper]: https://github.com/OHF-Voice/piper1-gpl/tree/main/libpiper

See https://huggingface.co/OpenVoiceOS/piperttsen-GBmiro and https://github.com/OHF-Voice/piper1-gpl/discussions/27

See also https://github.com/k2-fsa/sherpa-onnx/pull/2480

This model is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

- 🔄 The author may relax the restrictions in the future (e.g., allow commercial use), but will not make them stricter.

Important: you must include this license when redistributing the model or any derivatives.
vits-piper-en_US-glados
sherpa-onnx-apk
sherpa-ncnn-conv-emformer-transducer-2022-12-06
spleeter-torch
sherpa-onnx-libs
sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17
k2fsa-zipformer-bilingual-zh-en-t
Forked from https://huggingface.co/pfluo/k2fsa-zipformer-chinese-english-mixed

A Chinese-English ASR model using the k2 streaming Zipformer.

AIShell-1 and WenetSpeech test set results with modified-beam-search streaming decoding, using epoch-12.pt:

| decode_chunk_len | AIShell-1 | TEST_NET | TEST_MEETING |
|------------------|-----------|----------|--------------|
| 64               | 4.79      | 11.6     | 12.64        |

The modeling unit is char+bpe, as in `data/lang_char_bpe/tokens.txt`.
onnxruntime-libs
sherpa-onnx-whisper-large-v3
speaker-embedding-models
sherpa-onnx-streaming-zipformer-ar_en_id_ja_ru_th_vi_zh-2025-02-10
android-onnxruntime-libs
ios-onnxruntime
vits-piper-fa_IR-gyro-medium
sherpa-ncnn-apk
sherpa-ncnn-streaming-zipformer-small-bilingual-zh-en-2023-02-16
sherpa-onnx-streaming-zipformer-en-2023-06-26
vits-piper-pt_BR-faber-medium
kokoro-multi-lang-v1_0
k2
vits-piper-en_US-glados-high
Introduction

See https://drive.google.com/file/d/1t2D7zP-e2flduS5duHmUMB9RjuGqWK/view
k2fsa-zipformer-chinese-english-mixed
icefall-asr-librispeech-lstm-transducer-stateless2-2022-09-03
sherpa-onnx-streaming-paraformer-bilingual-zh-en
sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23
vits-piper-en_GB-alan-medium
vits-zh-hf-fanchen-C
vits-piper-tr_TR-dfki-medium
vits-melo-tts-zh_en
sherpa-onnx-nemo-parakeet_tdt_ctc_110m-en-36000
sherpa-onnx-moonshine-base-en-int8
sherpa-onnx-nemo-parakeet-tdt-0.6b-v2-int8
sherpa-onnx-streaming-zipformer-zh-int8-2025-06-30
This model is converted from https://huggingface.co/yuekai/icefall-asr-multi-zh-hans-zipformer-large

The training code can be found at https://github.com/k2-fsa/icefall/blob/master/egs/multi_zh-hans/ASR/RESULTS.md#multi-chinese-datasets-char-based-training-results-streaming-on-zipformer-large-model
sherpa-onnx-nemo-parakeet-tdt-0.6b-v3
sherpa-onnx-tts-samples
sherpa-onnx-wheels
sherpa-onnx-harmony-os
vits-piper-es_AR-daniela-high
icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21
icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09
icefall-asr-librispeech-transducer-bpe-500-2021-12-17
icefall_asr_yesno_tdnn
test-data-for-optimized-transducer
cudnn-for-windows
sherpa-ncnn-2022-09-05
icefall-asr-wenetspeech-lstm-transducer-stateless-2022-10-14
wenet-chinese-model
icefall-asr-wenetspeech-conv-emformer-transducer-stateless-small-2022-12-08
tal_csasr
sherpa-ncnn-streaming-zipformer-bilingual-zh-en-2023-02-13
sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20
sherpa-ncnn-pre-compiled-binaries
sherpa-onnx-paraformer-zh-2023-03-28
sherpa-onnx-nemo-ctc-en-conformer-small
sherpa-onnx-nemo-ctc-de-conformer-large
sherpa-ncnn-android-libs
sherpa-onnx-whisper-base
sherpa-onnx-whisper-medium
vad
vits-ljs
vits-vctk
vits-piper-de_DE-kerstin-low
vits-piper-en_US-amy-medium
vits-piper-en_US-lessac-high
vits-piper-en_US-kathleen-low
vits-piper-en_GB-aru-medium
vits-piper-en_GB-southern_english_female-low
vits-piper-kk_KZ-issai-high
vits-piper-pt_BR-edresson-low
vits-piper-pt_PT-tugao-medium
piper-phonemize-wheels
sherpa-onnx-whisper-distil-large-v2
vits-piper-fa_IR-amir-medium
sherpa-onnx-paraformer-trilingual-zh-cantonese-en
sherpa-onnx-pyannote-segmentation-3-0
sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24
This folder contains scripts for converting models from https://github.com/salute-developers/GigaAM to sherpa-onnx.

The ASR models in this folder are for Russian speech recognition.

Please see the license of the models at https://github.com/salute-developers/GigaAM/blob/main/LICENSE
moonshine-fork
sherpa-onnx-moonshine-tiny-en-int8
harmonyos-commandline-tools
sherpa-onnx-reverb-diarization-v2
sherpa-onnx-hifigan
sherpa-onnx-fire-red-asr-large-zh_en-2025-02-16
This model is converted from https://github.com/FireRedTeam/FireRedASR

See also https://huggingface.co/FireRedTeam/FireRedASR-AED-L
kokoro-multi-lang-v1_1
sherpa-onnx-nemo-transducer-giga-am-v2-russian-2025-04-19
sherpa-onnx-nemo-ctc-giga-am-v2-russian-2025-04-19
en_US-glados-high
Introduction

See https://drive.google.com/file/d/1t2D7zP-e2flduS5duHmUMB9RjuGqWK/view
sherpa-onnx-streaming-zipformer-ctc-zh-int8-2025-06-30
This model is converted from https://huggingface.co/yuekai/icefall-asr-multi-zh-hans-zipformer-large

The training code can be found at https://github.com/k2-fsa/icefall/blob/master/egs/multi_zh-hans/ASR/RESULTS.md#multi-chinese-datasets-char-based-training-results-streaming-on-zipformer-large-model
sherpa-onnx-fire-red-asr-large-zh_en-fp16-2025-02-16
mlx-sense-voice-small-safe-tensors
sherpa-onnx-nemo-parakeet-tdt-0.6b-v3-int8
sherpa-onnx-streaming-t-one-russian-2025-09-08
This folder contains scripts for exporting models from https://github.com/voicekit-team/T-one to sherpa-onnx.