myshell-ai

12 models

MeloTTS-Japanese

License: MIT · Pipeline: text-to-speech
Downloads: 373,489 · Likes: 11

MeloTTS-Korean

License: MIT · Pipeline: text-to-speech
Downloads: 299,064 · Likes: 38

MeloTTS-English

License: MIT · Pipeline: text-to-speech
Downloads: 191,137 · Likes: 295

MeloTTS-Chinese

License: MIT
Downloads: 146,879 · Likes: 92

MeloTTS-Spanish

License: MIT
Downloads: 96,385 · Likes: 24

MeloTTS-French

License: MIT
Downloads: 51,718 · Likes: 16

MeloTTS-English-v3

License: MIT
Downloads: 14,406 · Likes: 32

MeloTTS-English-v2

License: MIT
Downloads: 177 · Likes: 24

OpenVoice

OpenVoice is a versatile instant voice cloning approach that requires only a short audio clip from the reference speaker to replicate the voice and generate speech in multiple languages. Beyond replicating the tone color of the reference speaker, OpenVoice enables granular control over voice styles, including emotion, accent, rhythm, pauses, and intonation. It also achieves zero-shot cross-lingual voice cloning for languages not included in the massive-speaker training set.

Features
- Accurate Tone Color Cloning. OpenVoice can accurately clone the reference tone color and generate speech in multiple languages and accents.
- Flexible Voice Style Control. OpenVoice enables granular control over voice styles, such as emotion and accent, as well as other style parameters including rhythm, pauses, and intonation.
- Zero-shot Cross-lingual Voice Cloning. Neither the language of the generated speech nor the language of the reference speech needs to be present in the massive-speaker multilingual training dataset.

How to Use
Please see usage for detailed instructions.

License: MIT
Downloads: 0 · Likes: 481

OpenVoiceV2

In April 2024, we released OpenVoice V2, which includes all features of V1 and adds:
1. Better Audio Quality. OpenVoice V2 adopts a different training strategy that delivers better audio quality.
2. Native Multi-lingual Support. English, Spanish, French, Chinese, Japanese, and Korean are natively supported in OpenVoice V2.
3. Free Commercial Use. Since April 2024, both V2 and V1 have been released under the MIT License and are free for commercial use.

Features
- Accurate Tone Color Cloning. OpenVoice can accurately clone the reference tone color and generate speech in multiple languages and accents.
- Flexible Voice Style Control. OpenVoice enables granular control over voice styles, such as emotion and accent, as well as other style parameters including rhythm, pauses, and intonation.
- Zero-shot Cross-lingual Voice Cloning. Neither the language of the generated speech nor the language of the reference speech needs to be present in the massive-speaker multilingual training dataset.

How to Use
Please see usage for detailed instructions.
- Quick Use: directly use OpenVoice without installation.
- Linux Install: for researchers and developers only (covers V1 and V2).
- Install on Other Platforms: unofficial installation guides contributed by the community.

The input speech audio for OpenVoice can be in any language. OpenVoice clones the voice in that speech audio and uses it to speak in multiple languages. For quick use, we recommend trying the already deployed services: British English, American English, Indian English, Australian English, Spanish, French, Chinese, Japanese, and Korean.

Linux Install
This section is only for developers and researchers who are familiar with Linux, Python, and PyTorch. Clone this repo and run the installation; whether you are using V1 or V2, the installation is the same.

V1
Download the checkpoint from here and extract it to the `checkpoints` folder.
1. Flexible Voice Style Control. Please see `demopart1.ipynb` for an example of how OpenVoice enables flexible style control over the cloned voice.
2. Cross-Lingual Voice Cloning. Please see `demopart2.ipynb` for an example with languages seen or unseen in the MSML training set.
3. Gradio Demo. We provide a minimalist local gradio demo here. We strongly suggest that users look into `demopart1.ipynb`, `demopart2.ipynb`, and the QnA if they run into issues with the gradio demo. Launch a local gradio demo with `python -m openvoiceapp --share`.

V2
Download the checkpoint from here and extract it to the `checkpointsv2` folder.
Demo Usage. Please see `demopart3.ipynb` for example usage of OpenVoice V2. It natively supports English, Spanish, French, Chinese, Japanese, and Korean.

Install on Other Platforms
Unofficial installation guides contributed by the open-source community:
- Windows: guide by @Alienpups
- Docker: guide by @StevenJSCF
You are welcome to contribute if you have a better installation guide; we will list you here.
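The clone-install-extract flow above can be sketched as a shell session. Note this is a sketch under assumptions: the page does not state the repository URL, the install command, or the checkpoint download URLs, so those lines are placeholders rather than confirmed instructions.

```shell
# Sketch of the Linux install flow described above (the same for V1 and V2).
# ASSUMPTION: repo URL and editable-install step are placeholders,
# not given on this page.
git clone https://github.com/myshell-ai/OpenVoice.git
cd OpenVoice
pip install -e .

# V1: extract the downloaded checkpoint archive into ./checkpoints
# V2: extract the V2 checkpoint archive into ./checkpointsv2

# Optional (V1): launch the minimalist local gradio demo
python -m openvoiceapp --share
```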

License: MIT
Downloads: 0 · Likes: 458

DreamVoice

Downloads: 0 · Likes: 23

ShellAgent

Downloads: 0 · Likes: 7
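The catalog above can be handled programmatically, for example to re-sort it by likes rather than downloads. The figures are copied from the listing; the `Model` record type is purely illustrative, not part of any myshell-ai API.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    downloads: int  # download count from the listing
    likes: int      # like count from the listing

# Figures copied verbatim from the catalog above.
CATALOG = [
    Model("MeloTTS-Japanese", 373_489, 11),
    Model("MeloTTS-Korean", 299_064, 38),
    Model("MeloTTS-English", 191_137, 295),
    Model("MeloTTS-Chinese", 146_879, 92),
    Model("MeloTTS-Spanish", 96_385, 24),
    Model("MeloTTS-French", 51_718, 16),
    Model("MeloTTS-English-v3", 14_406, 32),
    Model("MeloTTS-English-v2", 177, 24),
    Model("OpenVoice", 0, 481),
    Model("OpenVoiceV2", 0, 458),
    Model("DreamVoice", 0, 23),
    Model("ShellAgent", 0, 7),
]

# Re-sort by likes instead of the default download order.
by_likes = sorted(CATALOG, key=lambda m: m.likes, reverse=True)
print(by_likes[0].name)  # OpenVoice leads by likes (481)
```

Sorting by downloads instead (`key=lambda m: m.downloads`) reproduces the order shown on the page, with MeloTTS-Japanese first.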