mrfakename
NeuralOrca-7B-v1
MegaTTS3-VoiceCloning
mistral-small-3.1-24b-instruct-2503-gguf
OpenF5-TTS-Base
styletts2-detector
ZuluVision-MoviiGen1.1
EmoActMimoV2-4Ep-Gemini-Merged
EmoAct-MiMo
Vocalino-GRPO-Ckpt1500-Working
Jefferson Test
Backup of https://huggingface.co/google/jefferson-test

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Ministral-3-8B-Instruct-2512-Llamafied-TextOnly
Ministral-3-14B-Instruct-2512-Llamafied-TextOnly
Ministral-3-3B-Instruct-2512-Llamafied-TextOnly
EmoAct-MiMo-v1.1
mistral-small-3.1-24b-instruct-2503-hf
granite-tts-1b
SparkAudio-Spark-TTS-0.5B
MuASR-3B-v0.1
starvector-starvector-1b-im2svg
MuVV
parakeet-elise
mistral-small-3.1-24b-base-2503-hf
- 24B Instruct GGUF
- 24B Instruct HF
- 24B Base HF (this model)

Mistral Small 3.1 Base 24B converted to the HF format. Only the text component has been converted to HF, so this does not work as a vision model.
failed-reasoning-1
lm-300m-converted
lm-300m-base
EmoAct-MiMo-v1.2
Apriel-5B-Instruct
lm-300m-inst
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
dreamwriter-0.6b-beta
This version has not been trained for safety yet; proper precautions should be taken.
vibevoice-asr-en-emilia-yodas-616h-fft-events3x-20260322
starvector-starvector-8b-im2svg
Freepik-test-texture
refusal
Apriel-5B-Instruct-llamafied
styletts2-detector-turbo
microsoft-Phi-4-multimodal-instruct
HiDream-I1-Dev
reasoning-small-1.5b-v0.1
HiDream-I1-Full
Apriel-5B-Base
WizardChatML-7B-v0
qwen3-0.6b-writing
llamaphi-3-128k-instruct
CosyVoice2-0.5B
Unofficial mirror for the CosyVoice2 0.5B model hosted on ModelScope. Original model: https://www.modelscope.cn/models/iic/CosyVoice2-0.5B

👉🏻 CosyVoice2 Demos 👉🏻 [CosyVoice2 Paper][CosyVoice2 Studio]
👉🏻 CosyVoice Demos 👉🏻 [CosyVoice Paper][CosyVoice Studio][CosyVoice Code]

For `SenseVoice`, visit the SenseVoice repo and SenseVoice space.

- [x] CosyVoice2-0.5B model release
- [x] CosyVoice2-0.5B streaming inference with no quality degradation
- [x] Flow matching training support
- [x] WeTextProcessing support when ttsfrd is not available
- [x] FastAPI server and client
- [x] Repetition Aware Sampling (RAS) inference for LLM stability
- [x] Streaming inference mode support, including KV cache and SDPA for RTF optimization
- [x] 25 Hz CosyVoice base model
- [x] 25 Hz CosyVoice voice conversion model
- [ ] CosyVoice2-0.5B bistream inference support
- [ ] CosyVoice2-0.5B training and fine-tuning recipe
- [ ] CosyVoice-500M trained with more multilingual data
- [ ] More...

- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
- Create a Conda env.

We strongly recommend that you download our pretrained `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource. If you are an expert in this field and are only interested in training your own CosyVoice model from scratch, you can skip this step.

Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance. Note that this step is not necessary; if you do not install the `ttsfrd` package, WeTextProcessing is used by default.

For zero-shot/cross-lingual inference, please use the `CosyVoice2-0.5B` or `CosyVoice-300M` model. For SFT inference, please use the `CosyVoice-300M-SFT` model. For instruct inference, please use the `CosyVoice-300M-Instruct` model. We strongly recommend using the `CosyVoice2-0.5B` model for better streaming performance.

First, add `thirdparty/Matcha-TTS` to your `PYTHONPATH`.
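The `PYTHONPATH` step above can also be done from inside a Python process. A minimal sketch, assuming CosyVoice has been cloned into the current directory and the vendored directory name matches the note above (`thirdparty/Matcha-TTS`):

```python
import os
import sys

# Assumed layout: a local CosyVoice checkout with Matcha-TTS vendored
# under thirdparty/ (path name taken from the note above).
repo_root = os.path.abspath("CosyVoice")
matcha_path = os.path.join(repo_root, "thirdparty", "Matcha-TTS")

# Equivalent to `export PYTHONPATH=thirdparty/Matcha-TTS` in the shell:
# prepend the directory so its modules are found before installed copies.
if matcha_path not in sys.path:
    sys.path.insert(0, matcha_path)
```

Prepending (rather than appending) matters when a differently-versioned Matcha-TTS is already installed in the environment.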
You can use our web demo page to get familiar with CosyVoice quickly. We support SFT/zero-shot/cross-lingual/instruct inference in the web demo.

For advanced users, we have provided training and inference scripts in `examples/libritts/cosyvoice/run.sh`. You can get familiar with CosyVoice by following this recipe.

Optionally, if you want to use gRPC for service deployment, you can run the following steps; otherwise, you can simply skip this step.

You can also scan the QR code to join our official Dingding chat group.

1. We borrowed a lot of code from FunASR.
2. We borrowed a lot of code from FunCodec.
3. We borrowed a lot of code from Matcha-TTS.
4. We borrowed a lot of code from AcademiCodec.
5. We borrowed a lot of code from WeNet.

Disclaimer: The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
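The model-selection guidance above (which released checkpoint to use for each inference mode) can be captured as a small lookup. `pick_model` is a hypothetical helper for illustration, not part of the CosyVoice API; the checkpoint names are the released model IDs mentioned above.

```python
# Recommended checkpoint per inference mode, per the notes above.
RECOMMENDED_MODEL = {
    "zero_shot": "CosyVoice2-0.5B",      # CosyVoice-300M also works
    "cross_lingual": "CosyVoice2-0.5B",  # CosyVoice-300M also works
    "sft": "CosyVoice-300M-SFT",
    "instruct": "CosyVoice-300M-Instruct",
}


def pick_model(mode: str) -> str:
    """Return the recommended checkpoint name for an inference mode."""
    if mode not in RECOMMENDED_MODEL:
        raise ValueError(f"unknown inference mode: {mode!r}")
    return RECOMMENDED_MODEL[mode]
```

Note that `CosyVoice2-0.5B` is the recommendation for both zero-shot and cross-lingual use, and is also the preferred choice when streaming performance matters.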