danhtran2mind
Viet SpeechT5 TTS Finetuning
Stable-Diffusion-2.1-Openpose-ControlNet
Vi-F5-TTS
viet-news-sum-mt5-small-finetune
Gemma-3-1B-GRPO-Vi-Medical-LoRA
Gemma-3-1B-Instruct-Vi-Medical-LoRA
Qwen-3-0.6B-Reasoning-Vi-Medical-LoRA
This model is a fine-tuned version of unsloth/qwen3-0.6b-unsloth-bnb-4bit. It has been trained using TRL.
Framework versions:
- PEFT: 0.14.0
- TRL: 0.19.1
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
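Since the card only lists framework versions, here is a minimal inference sketch; it assumes the adapter is published as danhtran2mind/Qwen-3-0.6B-Reasoning-Vi-Medical-LoRA (repo id assumed) and that the base model named in its adapter_config.json resolves from the Hub. The same pattern applies to the other LoRA adapters listed here.

```python
# Minimal inference sketch; the repo id below is an assumption based on the
# entry name. Loading a bnb-4bit base also requires the bitsandbytes package.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "danhtran2mind/Qwen-3-0.6B-Reasoning-Vi-Medical-LoRA"  # assumed repo id

# AutoPeftModelForCausalLM reads adapter_config.json, loads the base model it
# names (unsloth/qwen3-0.6b-unsloth-bnb-4bit), and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# "What are the symptoms of dengue fever?" (Vietnamese medical QA domain)
messages = [{"role": "user", "content": "Triệu chứng của sốt xuất huyết là gì?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```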
Ghibli-Stable-Diffusion-2.1-Base-finetuning
Qwen-3-0.6B-Instruct-Vi-Medical-LoRA
This model is a fine-tuned version of unsloth/qwen3-0.6b-unsloth-bnb-4bit. It has been trained using TRL.
Framework versions:
- PEFT: 0.15.2
- TRL: 0.19.1
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
vi-nutrition-gpt2-finetune
Viet-Glow-TTS-finetuning
MusicGen-Small-MusicCaps-finetuning
This model is a fine-tuned version of facebook/musicgen-small on the CLAPV2/MUSICCAPS - DEFAULT dataset. It achieves the following results on the evaluation set:
- Loss: 2.2189
- CLAP: 0.3387
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 456
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: ADAMW_TORCH with betas=(0.9,0.99) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 700.0
- mixed_precision_training: Native AMP
Framework versions:
- Transformers: 4.54.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
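For context, a short generation sketch using the standard Transformers MusicGen API; the repo id is an assumption based on the entry name above, and the prompt is illustrative.

```python
# Minimal text-to-music sketch; the repo id below is assumed from the entry name.
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

repo_id = "danhtran2mind/MusicGen-Small-MusicCaps-finetuning"  # assumed repo id
processor = AutoProcessor.from_pretrained(repo_id)
model = MusicgenForConditionalGeneration.from_pretrained(repo_id)

inputs = processor(text=["lo-fi piano with soft drums"], return_tensors="pt")
# MusicGen emits roughly 50 audio tokens per second, so 512 tokens is ~10 s.
audio = model.generate(**inputs, max_new_tokens=512)

rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("sample.wav", rate=rate, data=audio[0, 0].numpy())
```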
Llama-3.2-1B-Instruct-Vi-Medical-LoRA
This model is a fine-tuned version of unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit. It has been trained using TRL.
Framework versions:
- PEFT: 0.14.0
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
Vi-Whisper-Tiny-finetuning
error-Qwen-3-0.6B-GRPO-Vi-Medical-LoRA
This model is a fine-tuned version of unsloth/qwen3-0.6b-bnb-4bit. It has been trained using TRL with GRPO, a method introduced in "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models".
Framework versions:
- PEFT: 0.15.2
- TRL: 0.19.1
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.2
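A minimal GRPO training sketch with TRL's GRPOTrainer, not the author's exact recipe: the toy dataset, placeholder reward function, and LoRA settings are all assumptions for illustration.

```python
# Minimal GRPO sketch with TRL; dataset, reward, and LoRA config are toy
# placeholders, not the settings used to train this model.
from datasets import Dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# Toy prompt set ("What are the symptoms of the flu?"); the real run
# presumably used a Vietnamese medical QA corpus.
train_dataset = Dataset.from_dict({"prompt": ["Triệu chứng của cúm là gì?"] * 8})

def reward_len(completions, **kwargs):
    # Placeholder reward favouring longer completions; a real reward would
    # score reasoning quality or medical correctness instead.
    return [float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="unsloth/qwen3-0.6b-bnb-4bit",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen3-grpo-vi-medical", num_generations=4),
    train_dataset=train_dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32),  # train a LoRA adapter only
)
trainer.train()
```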
En2Vi-Translation-Transformer-TensorFlow
vi-medical-mt5-finetune-qa
error-Gemma-3-4B-Instruct-Vi-Medical-LoRA
This model is a fine-tuned version of unsloth/gemma-3-4b-it-unsloth-bnb-4bit. It has been trained using TRL.
Framework versions:
- PEFT: 0.14.0
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
Llama-3.2-3B-Instruct-Vi-Medical-LoRA
This model is a fine-tuned version of unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit. It has been trained using TRL.
Framework versions:
- PEFT: 0.14.0
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1