csukuangfj

132 models

funasr-nano-with-ctc

license:apache-2.0
14
0

streaming-paraformer-zh

license:apache-2.0
12
1

sherpa-onnx-paraformer-zh-2023-09-14

license:apache-2.0
11
5

sherpa-onnx-paraformer-zh-2024-03-09

6
3

vits-coqui-uk-mai

5
0

paraformer-onnxruntime-python-example

license:mit
4
5

vits-hf-zh-jp-zomehwh

4
2

sherpa-onnx-streaming-paraformer-trilingual-zh-cantonese-en

4
1

icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13

3
0

vits-coqui-en-vctk

2
1

vits-cantonese-hf-xiaomaiiwn

2
0

vits-coqui-pl-mai_female

2
0

vits-coqui-sv-cv

2
0

SenseVoiceSmall

Highlights

SenseVoice focuses on high-accuracy multilingual speech recognition, speech emotion recognition, and audio event detection.

- Multilingual recognition: trained on more than 400,000 hours of data, supports over 50 languages, and outperforms the Whisper model in recognition accuracy.
- Rich transcription:
  - Excellent speech emotion recognition, matching or exceeding the best current emotion-recognition models on test data.
  - Sound event detection, covering common human-computer interaction events such as music, applause, laughter, crying, coughing, and sneezing.
- Efficient inference: the SenseVoice-Small model uses a non-autoregressive end-to-end framework with extremely low inference latency; processing 10 s of audio takes only 70 ms, 15x faster than Whisper-Large.
- Convenient fine-tuning: ready-made fine-tuning scripts and strategies make it easy to fix long-tail sample problems for specific business scenarios.
- Service deployment: a complete deployment pipeline supporting multiple concurrent requests, with client-side support for Python, C++, HTML, Java, C#, and more.

About the SenseVoice open-source project

SenseVoice is a multilingual audio-understanding model with speech recognition, language identification, speech emotion recognition, and acoustic event detection capabilities.

Model architecture diagram

SenseVoice supports speech recognition, language identification, speech emotion recognition, acoustic event detection, and inverse text normalization. It is trained on industrial-scale labeled audio (hundreds of thousands of hours), which ensures strong general-purpose recognition performance. The model can be applied to Chinese, Cantonese, English, Japanese, and Korean audio, and outputs rich transcriptions annotated with emotion and event labels.

SenseVoice-Small is based on a non-autoregressive end-to-end framework. To specify the task, four embeddings are prepended to the speech features and passed to the encoder:

- LID: predicts the language label of the audio.
- SER: predicts the emotion label of the audio.
- AED: predicts the event labels contained in the audio.
- ITN: specifies whether inverse text normalization is applied to the recognized text.

Parameters:

- `model_dir`: the model name, or a path to the model on local disk.
- `trust_remote_code`:
  - `True`: the model implementation is loaded from `remote_code`, which specifies the location of the model code (for example, `model.py` in the current directory); absolute and relative paths as well as network URLs are supported.
  - `False`: the model implementation is the version integrated into FunASR. In this case, editing `model.py` in the current directory has no effect, since the internal FunASR version is loaded; see the FunASR repository for the model code.
- `vad_model`: enables VAD, which splits long audio into short segments. The reported inference time then covers both VAD and SenseVoice (end-to-end latency); to benchmark the SenseVoice model alone, disable the VAD model.
- `vad_kwargs`: VAD model configuration. `max_single_segment_time` sets the maximum duration of a segment cut by `vad_model`, in milliseconds (ms).
- `use_itn`: whether the output includes punctuation and inverse text normalization.
- `batch_size_s`: enables dynamic batching; the total audio duration in a batch, in seconds (s).
- `merge_vad`: whether to merge the short audio fragments produced by the VAD model; the merged length is `merge_length_s`, in seconds (s).
- `ban_emo_unk`: disables the emo_unk label, so that every sentence is assigned an emotion label. Defaults to `False`.

Model download

The code above downloads the model automatically. If you want the model downloaded offline in advance, use the code below to fetch it manually, then point to the local model path.

ASR performance

We compared SenseVoice and Whisper for multilingual speech recognition accuracy and inference efficiency on open-source benchmark datasets, including AISHELL-1, AISHELL-2, WenetSpeech, LibriSpeech, and Common Voice. SenseVoice-Small has a clear accuracy advantage on Chinese and Cantonese.

Emotion recognition performance

Because there is no widely adopted benchmark or metric for speech emotion recognition, we evaluated on multiple metrics across several test sets and compared comprehensively against recent benchmark results. The selected test sets cover both Chinese and English and multiple styles of data (acted, film/TV, and natural conversation). Without fine-tuning on the target data, SenseVoice matches or exceeds the best current emotion-recognition models on the test data. We also compared several open-source emotion-recognition models on these test sets; SenseVoice-Large achieves the best results on almost all datasets, while SenseVoice-Small also outperforms the other open-source models on most datasets.
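The parameter list above maps onto FunASR's `AutoModel` API roughly as follows. This is a minimal sketch: the model id (`iic/SenseVoiceSmall`), the VAD model name (`fsmn-vad`), and the exact placement of each keyword are assumptions to be checked against the FunASR documentation, and the construction call itself is left commented out because it downloads model weights.

```python
# Sketch of a SenseVoice call via FunASR's AutoModel (names assumed, not verified).
model_kwargs = {
    "model": "iic/SenseVoiceSmall",   # model_dir: model name or local path
    "trust_remote_code": True,        # load the model code from remote_code
    "vad_model": "fsmn-vad",          # split long audio into short segments
    "vad_kwargs": {"max_single_segment_time": 30000},  # max segment length, ms
}

generate_kwargs = {
    "use_itn": True,        # punctuation + inverse text normalization in output
    "batch_size_s": 60,     # dynamic batching: total audio seconds per batch
    "merge_vad": True,      # merge short VAD fragments...
    "merge_length_s": 15,   # ...up to this many seconds
    "ban_emo_unk": False,   # keep the emo_unk label available
}

# from funasr import AutoModel
# model = AutoModel(**model_kwargs)
# result = model.generate(input="audio.wav", language="auto", **generate_kwargs)
```

Disabling `vad_model` (and `vad_kwargs`) isolates SenseVoice's own inference time, as described above.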
Event detection performance

Although SenseVoice is trained only on speech data, it can still be used as a standalone event-detection model. On the ESC-50 environmental sound classification dataset, we compared it with the BEATs and PANNs models that are widely used in the field. SenseVoice achieves good results on these tasks, but, limited by its training data and training method, its event classification still lags behind dedicated event-detection models.

Inference efficiency

The SenseVoice-Small model uses a non-autoregressive end-to-end architecture with extremely low inference latency. With a parameter count comparable to Whisper-Small, it is 7x faster than Whisper-Small and 17x faster than Whisper-Large. Its inference time also does not grow noticeably as the audio duration increases.
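The rich transcription described above is emitted as special tokens embedded in the decoded text, e.g. `<|zh|><|NEUTRAL|><|Speech|><|woitn|>` for the language, emotion, event, and ITN state. FunASR ships its own postprocessing for this, so the following is only a hypothetical illustration of parsing that token layout:

```python
import re

# Matches SenseVoice-style special tokens of the form <|TAG|>.
TAG = re.compile(r"<\|([^|]+)\|>")

def parse_rich_transcription(text):
    """Split a raw SenseVoice output into its tag list and the plain text."""
    tags = TAG.findall(text)
    plain = TAG.sub("", text)
    return tags, plain

raw = "<|zh|><|NEUTRAL|><|Speech|><|woitn|>hello world"
tags, plain = parse_rich_transcription(raw)
print(tags)   # ['zh', 'NEUTRAL', 'Speech', 'woitn']
print(plain)  # hello world
```

In practice you would use the postprocessing helper bundled with FunASR rather than a hand-rolled parser, but the token structure is the same.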

license:apache-2.0
2
0

vits-mms-eng

1
1

sherpa-onnx-paraformer-zh-small-2024-03-09

1
1

sherpa-onnx-punct-ct-transformer-zh-en-vocab272727-2024-04-12

1
1

vits-coqui-en-ljspeech

1
0

vits-coqui-cs-cv

1
0

vits-coqui-da-cv

1
0

vits-coqui-et-cv

1
0

vits-coqui-es-css10

1
0

vits-coqui-hu-css10

1
0

vits-coqui-hr-cv

1
0

vits-coqui-lt-cv

1
0

vits-coqui-mt-cv

1
0

vits-coqui-pt-cv

1
0

vits-coqui-ro-cv

1
0

vits-coqui-sk-cv

1
0

vits-coqui-bn-custom_female

1
0

vits-mms-deu

1
0

vits-mms-fra

1
0

vits-mms-rus

1
0

vits-mms-nan

1
0

sherpa-onnx-paraformer-en-2024-03-09

1
0

ncnn-vits-piper-en_GB-alba-medium-fp16

1
0

ncnn-vits-piper-en_GB-cori-high-fp16

1
0

ncnn-vits-piper-en_GB-northern_english_male-medium-fp16

1
0

ncnn-vits-piper-en_GB-vctk-medium-fp16

1
0

ncnn-vits-piper-en_GB-vctk-medium

1
0

ncnn-vits-piper-en_GB-miro-high-fp16

A fast and local neural text-to-speech engine that embeds [espeak-ng][] for phonemization.

🎧 [Samples][samples] 💡 [Demo][demo] 🗣️ [Voices][voices] 🖥️ [Command-line interface][cli] 🌐 [Web server][api-http] 🐍 [Python API][api-python] 🔧 [C/C++ API][libpiper] 🏋️ [Training new voices][training] 🛠️ [Building manually][building]

Projects using Piper:

- Home Assistant
- NVDA - NonVisual Desktop Access
- Image Captioning for the Visually Impaired and Blind: A Recipe for Low-Resource Languages
- Video tutorial by Thorsten Müller
- Open Voice Operating System
- JetsonGPT
- LocalAI
- Lernstick EDU / EXAM: reading clipboard content aloud with language detection
- Natural Speech - A plugin for RuneLite, an OSRS client
- mintPiper
- Vim-Piper
- POTaTOS
- Narration Studio
- Basic TTS - Simple online text-to-speech converter

[espeak-ng]: https://github.com/espeak-ng/espeak-ng
[cli]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/CLI.md
[api-http]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/APIHTTP.md
[api-python]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/APIPYTHON.md
[training]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/TRAINING.md
[building]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/BUILDING.md
[voices]: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/VOICES.md
[samples]: https://rhasspy.github.io/piper-samples
[demo]: https://rhasspy.github.io/piper-samples/demo.html
[libpiper]: https://github.com/OHF-Voice/piper1-gpl/tree/main/libpiper

See https://huggingface.co/OpenVoiceOS/piperttsen-GBmiro and https://github.com/OHF-Voice/piper1-gpl/discussions/27

See also https://github.com/k2-fsa/sherpa-onnx/pull/2480

This model is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

- 🔄 The author may relax the restrictions in the future (e.g., allow commercial use), but will not make them stricter.

Important: you must include this license when redistributing the model or any derivatives.

1
0

vits-piper-en_US-glados

0
15

sherpa-onnx-apk

0
9

sherpa-ncnn-conv-emformer-transducer-2022-12-06

license:apache-2.0
0
8

spleeter-torch

license:apache-2.0
0
8

sherpa-onnx-libs

license:apache-2.0
0
7

sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17

0
6

k2fsa-zipformer-bilingual-zh-en-t

Forked from https://huggingface.co/pfluo/k2fsa-zipformer-chinese-english-mixed

Chinese-English ASR model using streaming k2 Zipformer. AIShell-1 and WenetSpeech test-set results with modified-beam-search streaming decoding using epoch-12.pt:

| decode-chunk-len | AIShell-1 | TEST_NET | TEST_MEETING |
|------------------|-----------|----------|--------------|
| 64               | 4.79      | 11.6     | 12.64        |

The modeling unit is char+bpe, as in `data/lang_char_bpe/tokens.txt`.

license:apache-2.0
0
6

onnxruntime-libs

0
5

sherpa-onnx-whisper-large-v3

0
5

speaker-embedding-models

0
5

sherpa-onnx-streaming-zipformer-ar_en_id_ja_ru_th_vi_zh-2025-02-10

0
5

android-onnxruntime-libs

license:apache-2.0
0
4

ios-onnxruntime

license:apache-2.0
0
4

vits-piper-fa_IR-gyro-medium

0
4

sherpa-ncnn-apk

0
3

sherpa-ncnn-streaming-zipformer-small-bilingual-zh-en-2023-02-16

license:apache-2.0
0
3

sherpa-onnx-streaming-zipformer-en-2023-06-26

license:apache-2.0
0
3

vits-piper-pt_BR-faber-medium

0
3

kokoro-multi-lang-v1_0

0
3

k2

license:apache-2.0
0
2

vits-piper-en_US-glados-high

Introduction: see https://drive.google.com/file/d/1t2D7zP-e2flduS5duHmUMB9RjuGqWK/view

0
2

k2fsa-zipformer-chinese-english-mixed

license:apache-2.0
0
2

icefall-asr-librispeech-lstm-transducer-stateless2-2022-09-03

0
2

sherpa-onnx-streaming-paraformer-bilingual-zh-en

license:apache-2.0
0
2

sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23

license:apache-2.0
0
2

vits-piper-en_GB-alan-medium

0
2

vits-zh-hf-fanchen-C

0
2

vits-piper-tr_TR-dfki-medium

0
2

vits-melo-tts-zh_en

0
2

sherpa-onnx-nemo-parakeet_tdt_ctc_110m-en-36000

0
2

sherpa-onnx-moonshine-base-en-int8

0
2

sherpa-onnx-nemo-parakeet-tdt-0.6b-v2-int8

license:cc-by-4.0
0
2

sherpa-onnx-streaming-zipformer-zh-int8-2025-06-30

This model is converted from https://huggingface.co/yuekai/icefall-asr-multi-zh-hans-zipformer-large

The training code can be found at https://github.com/k2-fsa/icefall/blob/master/egs/multizh-hans/ASR/RESULTS.md#multi-chinese-datasets-char-based-training-results-streaming-on-zipformer-large-model

0
2

sherpa-onnx-nemo-parakeet-tdt-0.6b-v3

0
2

sherpa-onnx-tts-samples

0
1

sherpa-onnx-wheels

license:apache-2.0
0
1

sherpa-onnx-harmony-os

0
1

vits-piper-es_AR-daniela-high

0
1

icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21

0
1

icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09

0
1

icefall-asr-librispeech-transducer-bpe-500-2021-12-17

0
1

icefall_asr_yesno_tdnn

0
1

test-data-for-optimized-transducer

0
1

cudnn-for-windows

0
1

sherpa-ncnn-2022-09-05

license:apache-2.0
0
1

icefall-asr-wenetspeech-lstm-transducer-stateless-2022-10-14

0
1

wenet-chinese-model

0
1

icefall-asr-wenetspeech-conv-emformer-transducer-stateless-small-2022-12-08

0
1

tal_csasr

0
1

sherpa-ncnn-streaming-zipformer-bilingual-zh-en-2023-02-13

license:apache-2.0
0
1

sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20

license:apache-2.0
0
1

sherpa-ncnn-pre-compiled-binaries

license:apache-2.0
0
1

sherpa-onnx-paraformer-zh-2023-03-28

license:mit
0
1

sherpa-onnx-nemo-ctc-en-conformer-small

license:apache-2.0
0
1

sherpa-onnx-nemo-ctc-de-conformer-large

license:apache-2.0
0
1

sherpa-ncnn-android-libs

license:apache-2.0
0
1

sherpa-onnx-whisper-base

0
1

sherpa-onnx-whisper-medium

0
1

vad

0
1

vits-ljs

license:apache-2.0
0
1

vits-vctk

license:apache-2.0
0
1

vits-piper-de_DE-kerstin-low

0
1

vits-piper-en_US-amy-medium

0
1

vits-piper-en_US-lessac-high

0
1

vits-piper-en_US-kathleen-low

0
1

vits-piper-en_GB-aru-medium

0
1

vits-piper-en_GB-southern_english_female-low

0
1

vits-piper-kk_KZ-issai-high

0
1

vits-piper-pt_BR-edresson-low

0
1

vits-piper-pt_PT-tugao-medium

0
1

piper-phonemize-wheels

0
1

sherpa-onnx-whisper-distil-large-v2

0
1

vits-piper-fa_IR-amir-medium

0
1

sherpa-onnx-paraformer-trilingual-zh-cantonese-en

0
1

sherpa-onnx-pyannote-segmentation-3-0

0
1

sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24

This folder contains scripts for converting models from https://github.com/salute-developers/GigaAM to sherpa-onnx. The ASR models in this folder are for Russian speech recognition. Please see the model license at https://github.com/salute-developers/GigaAM/blob/main/LICENSE

0
1

moonshine-fork

license:mit
0
1

sherpa-onnx-moonshine-tiny-en-int8

0
1

harmonyos-commandline-tools

0
1

sherpa-onnx-reverb-diarization-v2

0
1

sherpa-onnx-hifigan

0
1

sherpa-onnx-fire-red-asr-large-zh_en-2025-02-16

This model is converted from https://github.com/FireRedTeam/FireRedASR

See also https://huggingface.co/FireRedTeam/FireRedASR-AED-L

0
1

kokoro-multi-lang-v1_1

0
1

sherpa-onnx-nemo-transducer-giga-am-v2-russian-2025-04-19

0
1

sherpa-onnx-nemo-ctc-giga-am-v2-russian-2025-04-19

0
1

en_US-glados-high

Introduction: see https://drive.google.com/file/d/1t2D7zP-e2flduS5duHmUMB9RjuGqWK/view

0
1

sherpa-onnx-streaming-zipformer-ctc-zh-int8-2025-06-30

This model is converted from https://huggingface.co/yuekai/icefall-asr-multi-zh-hans-zipformer-large

The training code can be found at https://github.com/k2-fsa/icefall/blob/master/egs/multizh-hans/ASR/RESULTS.md#multi-chinese-datasets-char-based-training-results-streaming-on-zipformer-large-model

0
1

sherpa-onnx-fire-red-asr-large-zh_en-fp16-2025-02-16

0
1

mlx-sense-voice-small-safe-tensors

0
1

sherpa-onnx-nemo-parakeet-tdt-0.6b-v3-int8

0
1

sherpa-onnx-streaming-t-one-russian-2025-09-08

This folder contains scripts for exporting models from https://github.com/voicekit-team/T-one to sherpa-onnx.

0
1