malaysia-ai

20 models

Qwen3-1.7B-Multilingual-TTS

Continued pretraining of Qwen/Qwen3-1.7B-Base on multilingual Voice Conversion and TTS.

1. Uses neucodec as the speech detokenizer, 50 TPS, output at a 24k sample rate.
2. Multi-speaker multilingual Voice Conversion, up to 25.5B tokens.
3. Multi-speaker multilingual TTS, up to 5B tokens.
4. Flash Attention 3 with 10k context length multipacking.
5. Liger Kernel for `swiglu`, `rmsnorm` and `fusedlinearcrossentropy`.

WanDB at https://wandb.ai/huseinzol05/Qwen-Qwen3-1.7B-Base-multilingual-tts-neucodec

Training is currently paused, waiting for my own pocket money to burn.

- You can pick any speaker name from malaysia-ai/Multilingual-TTS.
- Not bad for a 0.35-epoch model.
- Not too great yet: we need to trim the silences before converting to audio tokens, since the model tends to generate long silences.

Source code at https://github.com/malaysia-ai/cooking/tree/main/qwen-tts
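The note about trimming silences before converting to audio tokens can be sketched as a simple energy gate. A pure-Python sketch; the frame size and threshold here are assumptions, not the repo's actual preprocessing:

```python
# Hypothetical energy-based silence trimmer; the real preprocessing
# pipeline is not shown in the listing, so frame size and threshold
# are illustrative guesses.
def trim_silence(samples, frame_size=480, threshold=1e-3):
    """Drop leading/trailing frames whose mean absolute amplitude is
    below `threshold`. `samples` is a list of floats in [-1, 1]."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    energies = [sum(abs(x) for x in f) / len(f) for f in frames]
    voiced = [i for i, e in enumerate(energies) if e >= threshold]
    if not voiced:
        return []
    start = voiced[0] * frame_size
    end = (voiced[-1] + 1) * frame_size
    return samples[start:end]

# 0.5 s silence + 0.5 s "speech" + 0.5 s silence at a 24k sample rate
audio = [0.0] * 12000 + [0.5] * 12000 + [0.0] * 12000
trimmed = trim_silence(audio)  # keeps only the 0.5 s voiced middle
```

Only leading and trailing silence is removed here; internal pauses inside the voiced region are kept, which is usually what you want before tokenization.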

412 downloads · 10 likes

xcodec2-25TPS-24k

[Paper](https://github.com/malaysia-ai/research-paper/blob/main/xcodec2-25tps/neurips2023.pdf)

Improves https://huggingface.co/HKUSTAudio/xcodec2 from 50 TPS to 25 TPS and upscales the output to a 24k sample rate. WanDB at https://wandb.ai/huseinzol05/xcodec2-24k-25tps; we also pushed all checkpoints in `checkpoint`.

Trained on:

1. https://huggingface.co/datasets/malaysia-ai/commonvoice170, train set only.
2. https://huggingface.co/datasets/mesolitica/Malaysian-STT-Whisper-Stage2, except `noise` and `audioset0.5s`.
3. https://huggingface.co/datasets/malaysia-ai/Multilingual-TTS, specific commit 2421a13e07226d96ac7009d5327d96a84672768c, except `cml-tts` and `librittsrfiltered`.
4. https://huggingface.co/datasets/mesolitica/Malaysian-Emilia-v2, only `sgpodcast` and `malaysianpodcast`.

Source code at https://github.com/malaysia-ai/X-Codec-2.0-25TPS-24k
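Halving the token rate doubles each token's temporal footprint. A quick check of the arithmetic implied by the stated rates (derived from sample rate and TPS, not read from the codec source):

```python
# Frame arithmetic implied by the stated rates: at a fixed sample rate,
# each token covers sample_rate / TPS samples and 1000 / TPS ms.
def samples_per_token(sample_rate, tps):
    return sample_rate // tps

def ms_per_token(tps):
    return 1000 / tps

old_footprint = ms_per_token(50)              # 20 ms per token at 50 TPS
new_footprint = ms_per_token(25)              # 40 ms per token at 25 TPS
covered = samples_per_token(24000, 25)        # 960 samples of 24k audio per token
```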

115 downloads · 1 like

whisper-50TPS-VQ-32k-large-v3-turbo

This model introduces VQ on top of openai/whisper-large-v3-turbo with a 32768 VQ embedding size. WanDB at https://wandb.ai/huseinzol05/whisperconv-vq-50tps

Trained on:

1. malaysia-ai/commonvoice170
2. mesolitica/Malaysian-STT-Whisper-Stage2/malaysianmultiturnchatassistantssegments
3. mesolitica/Malaysian-STT-Whisper-Stage2/malaysianmultiturnchatassistantsmanglishsegments

Evaluated on malaysia-ai/commonvoice170/test, with some conditions:

1. Lower case.
2. Remove punctuation.

Not all 115 languages support inference through the WhisperForConditionalGeneration generate interface.

Also evaluated on malaysia-ai/commonvoice170/test for up to 115 languages, with some conditions:

1. Lower case.
2. Remove punctuation.
3. Provide language tagging for the decoder input ids, ` `.

Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/whisper-conv-50tps
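The evaluation conditions above (lower case, remove punctuation) can be sketched as a minimal text normalizer; the repo's exact normalization may differ:

```python
import string

# Minimal sketch of the stated evaluation conditions: lower-case the
# transcript and strip ASCII punctuation before scoring.
def normalize(text):
    text = text.lower()
    text = text.translate(str.maketrans('', '', string.punctuation))
    return ' '.join(text.split())

print(normalize("Selamat Pagi, Malaysia!"))  # -> selamat pagi malaysia
```

Note that `string.punctuation` is ASCII-only; a multilingual evaluation across 115 languages may need a Unicode-aware punctuation filter.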

48 downloads · 0 likes

whisper-25TPS-large-v3-turbo

Adds a pooling layer with stride 2 to reach 25 TPS. This model is used to introduce VQ for the projection layer later. WanDB at https://wandb.ai/huseinzol05/whisperconv?nw=nwuserhuseinzol05

Trained on:

1. malaysia-ai/commonvoice170
2. mesolitica/Malaysian-STT-Whisper-Stage2/malaysianmultiturnchatassistantssegments
3. mesolitica/Malaysian-STT-Whisper-Stage2/malaysianmultiturnchatassistantsmanglishsegments

Evaluated on malaysia-ai/commonvoice170/test up to 115 languages, with some conditions:

1. Lower case.
2. Remove punctuation.
3. Provide language tagging for the decoder input ids, ` `.

Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/whisper-conv
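Stride-2 pooling over encoder frames halves the frame rate (50 TPS to 25 TPS). A pure-Python sketch, assuming average pooling; the actual layer in the repo may be a conv or a different pooling:

```python
# Stride-2 average pooling over a sequence of encoder frames:
# every pair of adjacent frames is merged into one, halving the
# frame rate. Pooling type is an assumption for illustration.
def pool_stride2(frames):
    """frames: list of equal-length feature vectors (lists of floats)."""
    pooled = []
    for i in range(0, len(frames) - 1, 2):
        a, b = frames[i], frames[i + 1]
        pooled.append([(x + y) / 2 for x, y in zip(a, b)])
    return pooled

frames = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
pooled = pool_stride2(frames)  # 4 frames -> 2 frames
```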

46 downloads · 1 like

xlnet-large-bahasa-cased

28 downloads · 0 likes

whisper-25TPS-VQ-32k-large-v3-turbo

25 downloads · 1 like

whisper-38TPS-large-v3-turbo

11 downloads · 0 likes

Streaming-STT-1.5B

10 downloads · 1 like

sentiment-mistral-191M-MLM

9 downloads · 0 likes

gemma3n-audio-encoder-whisper-decoder

Combines the mesolitica/gemma-3n-e4b-it-audio-encoder Encoder + Projection with the openai/whisper-large-v3-turbo Decoder. This model is used to introduce VQ for the projection layer later. WanDB at https://wandb.ai/huseinzol05/gemma3n-audio-whisper-decoder-v2

Trained on:

1. malaysia-ai/commonvoice170
2. mesolitica/Malaysian-STT-Whisper-Stage2/malaysianmultiturnchatassistantssegments
3. mesolitica/Malaysian-STT-Whisper-Stage2/malaysianmultiturnchatassistantsmanglishsegments

Evaluated on malaysia-ai/commonvoice170/test up to 115 languages, with some conditions:

1. Lower case.
2. Remove punctuation.
3. Provide language tagging for the decoder input ids, ` `.

Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/gemma3n-audio-whisper-decoder
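The Projection between encoder and decoder maps encoder frames into the decoder's hidden size so the cross-attention shapes line up. A shape-level sketch with illustrative dimensions, not the real models' sizes:

```python
# Linear projection applied per frame: y = W x + b, mapping each
# encoder frame (d_in floats) to the decoder's hidden size (d_out).
# Dimensions here are toy values for illustration only.
def linear(frames, weight, bias):
    """weight: d_out rows of d_in floats; bias: d_out floats."""
    out = []
    for f in frames:
        out.append([sum(w_j * x_j for w_j, x_j in zip(row, f)) + b
                    for row, b in zip(weight, bias)])
    return out

# 3 encoder frames of dim 2 -> decoder dim 3
frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weight = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
bias = [0.0, 0.0, 0.0]
projected = linear(frames, weight, bias)  # each output frame has dim 3
```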

8 downloads · 0 likes

gemma3n-audio-encoder-VQ-32k-whisper-decoder

7 downloads · 0 likes

gemma3n-audio-encoder-VQ-65k-whisper-decoder

Combines the mesolitica/gemma-3n-e4b-it-audio-encoder Encoder + Projection + VQ + Projection Layer Norm with the openai/whisper-large-v3-turbo Decoder. This model introduces VQ on top of mesolitica/gemma3n-audio-encoder-whisper-decoder.

This is the most compressed speech token model: 6.25 TPS with a 65536 embedding size. WanDB at https://wandb.ai/huseinzol05/gemma3n-audio-vq-whisper-decoder-65k

Trained on:

1. malaysia-ai/commonvoice170
2. mesolitica/Malaysian-STT-Whisper-Stage2/malaysianmultiturnchatassistantssegments
3. mesolitica/Malaysian-STT-Whisper-Stage2/malaysianmultiturnchatassistantsmanglishsegments

Evaluated on malaysia-ai/commonvoice170/test, with some conditions:

1. Lower case.
2. Remove punctuation.
3. Provide language tagging for the decoder input ids, ` `.

Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/gemma3n-audio-whisper-decoder
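At 6.25 TPS with a 65536-entry codebook, each token carries log2(65536) = 16 bits, so the discrete stream is only 100 bits per second. A quick check of that arithmetic:

```python
import math

# Bitrate of the discrete token stream implied by the stated numbers:
# tokens/second * bits/token, where bits/token = log2(codebook size).
tps = 6.25
codebook_size = 65536
bits_per_token = math.log2(codebook_size)   # 16.0
bitrate = tps * bits_per_token              # 100.0 bits per second
```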

7 downloads · 0 likes

whisper-38TPS-VQ-32k-large-v3-turbo

7 downloads · 0 likes

malaysian-debertav2-large

5 downloads · 0 likes

malay-sentiment-deberta-xsmall

3 downloads · 3 likes

malaysian-sfw-classifier

3 downloads · 1 like

Malaysian-Normalizer-Qwen3-8B

Finetuned Qwen/Qwen3-8B on mesolitica/Malaysian-Normalizer.

- `text` is the text you want to normalize.
- `language` is the language you want to normalize to; you can omit `normalize to {language} language`, which makes the model normalize based on the text's language.

Current stage, 7e4483ac0c66fef90556113d8b32665c80786b5f:

1. This revision was trained on mesolitica/Malaysian-SFT/malaysiannormalizer and mesolitica/Malaysian-SFT/malaysiannormalizerpseudolabel.
2. This revision was trained on a proper train set.

Older stage, 7b502263c605355fbc93a1b76f6712461812f863:

1. This revision was trained initially on mesolitica/Malaysian-SFT/malaysiannormalizer.
2. This revision pseudolabelled more data and released it at mesolitica/Malaysian-Normalizer#pseudolabel.
3. This revision was trained on a leaked test set.

Special thanks to the Lambda Research Grant program for Lambda cloud credit!
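A hypothetical prompt builder following the description above; everything beyond the `normalize to {language} language` phrase is an assumption, and in practice you would wrap this with the model's own tokenizer chat template:

```python
# Hypothetical prompt builder for the normalizer. Only the phrase
# "normalize to {language} language" comes from the model card;
# the layout around it is an assumption for illustration.
def build_prompt(text, language=None):
    if language:
        return f'normalize to {language} language\n\n{text}'
    # omitting the language instruction lets the model infer the
    # language from the text itself, per the model card
    return text

prompt = build_prompt('sy nk g jmpa dia kul 3 ptg', language='malay')
```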

3 downloads · 0 likes

YOLOv8X-DocLayNet-Full-1024-42

license: mit · 0 downloads · 1 like

malaysian-llm2vec-reranker-191M-16384

0 downloads · 1 like

malaysian-llm2vec-reranker-349M-16384

0 downloads · 1 like