mrfakename

50 models

NeuralOrca-7B-v1

license:apache-2.0
618
5

MegaTTS3-VoiceCloning

license:apache-2.0
360
32

mistral-small-3.1-24b-instruct-2503-gguf

license:apache-2.0
173
17

OpenF5-TTS-Base

license:apache-2.0
170
75

styletts2-detector

license:mit
85
3

ZuluVision-MoviiGen1.1

license:apache-2.0
38
1

EmoActMimoV2-4Ep-Gemini-Merged

—
35
0

EmoAct-MiMo

—
31
8

Vocalino-GRPO-Ckpt1500-Working

llama
30
0

Jefferson Test

Backup of https://huggingface.co/google/jefferson-test. The original model card is an auto-generated šŸ¤— Transformers template with no details filled in.

—
23
2

Ministral-3-8B-Instruct-2512-Llamafied-TextOnly

llama
23
1

Ministral-3-14B-Instruct-2512-Llamafied-TextOnly

llama
21
0

Ministral-3-3B-Instruct-2512-Llamafied-TextOnly

llama
14
0

EmoAct-MiMo-v1.1

—
14
0

mistral-small-3.1-24b-instruct-2503-hf

license:apache-2.0
13
10

granite-tts-1b

—
12
1

SparkAudio-Spark-TTS-0.5B

license:apache-2.0
11
0

MuASR-3B-v0.1

license:cc-by-4.0
10
0

starvector-starvector-1b-im2svg

license:apache-2.0
9
0

MuVV

—
8
0

parakeet-elise

—
5
0

mistral-small-3.1-24b-base-2503-hf

- 24B Instruct GGUF
- 24B Instruct HF
- 24B Base HF (this model)

Mistral Small 3.1 Base 24B converted to the HF format. Only the text component has been converted to HF, so it does not work as a vision model.

license:apache-2.0
4
4

failed-reasoning-1

llama
4
0

lm-300m-converted

llama
4
0

lm-300m-base

llama
3
1

EmoAct-MiMo-v1.2

—
3
1

Apriel-5B-Instruct

license:mit
3
0

lm-300m-inst

Auto-generated šŸ¤— Transformers model card template with no details filled in.

llama
3
0

dreamwriter-0.6b-beta

This version has not been trained for safety yet; proper precautions should be taken.

—
2
1

vibevoice-asr-en-emilia-yodas-616h-fft-events3x-20260322

—
2
0

starvector-starvector-8b-im2svg

license:apache-2.0
2
0

Freepik-test-texture

—
2
0

refusal

llama
1
6

Apriel-5B-Instruct-llamafied

llama
1
4

styletts2-detector-turbo

license:mit
1
2

microsoft-Phi-4-multimodal-instruct

license:mit
1
1

HiDream-I1-Dev

license:mit
1
1

reasoning-small-1.5b-v0.1

—
1
0

HiDream-I1-Full

license:mit
1
0

Apriel-5B-Base

license:mit
1
0

WizardChatML-7B-v0

—
0
3

qwen3-0.6b-writing

—
0
2

llamaphi-3-128k-instruct

llama
0
1

CosyVoice2-0.5B

Unofficial mirror of the CosyVoice2 0.5B model hosted on ModelScope. Original model: https://www.modelscope.cn/models/iic/CosyVoice2-0.5B

šŸ‘‰šŸ» CosyVoice2 Demos šŸ‘ˆšŸ» [CosyVoice2 Paper][CosyVoice2 Studio]
šŸ‘‰šŸ» CosyVoice Demos šŸ‘ˆšŸ» [CosyVoice Paper][CosyVoice Studio][CosyVoice Code]

For `SenseVoice`, visit the SenseVoice repo and SenseVoice space.

- [x] CosyVoice2-0.5B model release
- [x] CosyVoice2-0.5B streaming inference with no quality degradation
- [x] Flow matching training support
- [x] WeTextProcessing support when ttsfrd is not available
- [x] FastAPI server and client
- [x] Repetition Aware Sampling (RAS) inference for LLM stability
- [x] Streaming inference mode support, including KV cache and SDPA for RTF optimization
- [x] 25 Hz CosyVoice base model
- [x] 25 Hz CosyVoice voice conversion model
- [ ] CosyVoice2-0.5B bistream inference support
- [ ] CosyVoice2-0.5B training and finetune recipe
- [ ] CosyVoice-500M trained with more multilingual data
- [ ] More...

Setup:
- Install Conda: see https://docs.conda.io/en/latest/miniconda.html
- Create a Conda env.

We strongly recommend that you download our pretrained `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource. If you are an expert in this field and are only interested in training your own CosyVoice model from scratch, you can skip this step.

Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance. Note that this step is not necessary; if you do not install the `ttsfrd` package, WeTextProcessing is used by default.

For zero-shot/cross-lingual inference, use the `CosyVoice2-0.5B` or `CosyVoice-300M` model. For SFT inference, use the `CosyVoice-300M-SFT` model. For instruct inference, use the `CosyVoice-300M-Instruct` model. We strongly recommend the `CosyVoice2-0.5B` model for better streaming performance.

First, add `thirdparty/Matcha-TTS` to your `PYTHONPATH`.

You can use our web demo page to get familiar with CosyVoice quickly; it supports SFT/zero-shot/cross-lingual/instruct inference. For advanced users, we have provided train and inference scripts in `examples/libritts/cosyvoice/run.sh`; you can get familiar with CosyVoice by following this recipe. Optionally, if you want to use gRPC for service deployment, run the following steps; otherwise, you can ignore this step. You can also scan the QR code to join our official Dingding chat group.

Acknowledgements:
1. We borrowed a lot of code from FunASR.
2. We borrowed a lot of code from FunCodec.
3. We borrowed a lot of code from Matcha-TTS.
4. We borrowed a lot of code from AcademiCodec.
5. We borrowed a lot of code from WeNet.

Disclaimer: The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
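The setup steps above can be sketched as shell commands. This is a minimal sketch, not taken from the model card: the clone URL, env name, and Python version are assumptions, and the card itself only specifies installing Conda, creating an env, and adding `thirdparty/Matcha-TTS` to `PYTHONPATH`.

```shell
# Sketch of the CosyVoice setup steps above.
# Assumptions (not stated in the card): repo URL, env name "cosyvoice",
# and Python 3.10.
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
cd CosyVoice

# Create and activate a Conda env (install Miniconda first; see link above).
conda create -n cosyvoice python=3.10 -y
conda activate cosyvoice
pip install -r requirements.txt

# Make the bundled Matcha-TTS importable, as the card instructs.
export PYTHONPATH=thirdparty/Matcha-TTS:$PYTHONPATH
```

With the env active and `PYTHONPATH` set, the web demo and the `examples/libritts/cosyvoice/run.sh` recipe mentioned above should find the Matcha-TTS modules.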

—
0
1

omniflex-alpha

—
0
1

SmolR1-SFT-Alpha

llama
0
1

Quill-2B-v0.1

llama
0
1

ACE-Step-v1-3.5B

license:apache-2.0
0
1

MERT-v1-330M-fixed

license:cc-by-nc-4.0
0
1

lyrebird

—
0
1