danhtran2mind

22 models

Viet SpeechT5 TTS Finetuning

license:mit
161
2

Stable-Diffusion-2.1-Openpose-ControlNet

license:mit
142
0

Vi-F5-TTS

license:mit
40
1

viet-news-sum-mt5-small-finetune

license:mit
20
0

Gemma-3-1B-GRPO-Vi-Medical-LoRA

license:mit
17
0

Gemma-3-1B-Instruct-Vi-Medical-LoRA

license:mit
17
0

Qwen-3-0.6B-Reasoning-Vi-Medical-LoRA

Model Card for Qwen-3-0.6B-Reasoning-Vi-Medical-LoRA

This model is a fine-tuned version of unsloth/qwen3-0.6b-unsloth-bnb-4bit. It has been trained using TRL.

- PEFT: 0.14.0
- TRL: 0.19.1
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
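Each LoRA entry in this listing names its base model, so the adapter can usually be attached to that base with `peft`. A minimal sketch, assuming the adapter repo id `danhtran2mind/Qwen-3-0.6B-Reasoning-Vi-Medical-LoRA` (taken from this listing, not verified) and a standard PEFT checkpoint layout:

```python
# Hedged sketch: load a LoRA adapter on top of its base model and generate.
# The adapter repo id below is an assumption based on this listing.

BASE_ID = "unsloth/qwen3-0.6b-unsloth-bnb-4bit"  # base model named in the card
ADAPTER_ID = "danhtran2mind/Qwen-3-0.6B-Reasoning-Vi-Medical-LoRA"  # assumed id

def build_chat(question: str) -> list:
    """Compose a single-turn chat message list for the adapter."""
    return [{"role": "user", "content": question}]

def main() -> None:
    # Imported lazily so the sketch can be read without the libraries installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach LoRA weights

    prompt = tokenizer.apply_chat_template(
        build_chat("Triệu chứng của sốt xuất huyết là gì?"),  # "What are the symptoms of dengue fever?"
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Calling `main()` downloads both checkpoints from the Hub; `merge_and_unload()` on the returned `PeftModel` would fold the adapter into the base weights for standalone export.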

license:mit
15
1

Ghibli-Stable-Diffusion-2.1-Base-finetuning

license:mit
14
0

Qwen-3-0.6B-Instruct-Vi-Medical-LoRA

This model is a fine-tuned version of unsloth/qwen3-0.6b-unsloth-bnb-4bit. It has been trained using TRL.

- PEFT: 0.15.2
- TRL: 0.19.1
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1

license:mit
13
0

vi-nutrition-gpt2-finetune

license:mit
8
0

Viet-Glow-TTS-finetuning

license:mit
8
0

MusicGen-Small-MusicCaps-finetuning

This model is a fine-tuned version of facebook/musicgen-small on the CLAPV2/MUSICCAPS - DEFAULT dataset. It achieves the following results on the evaluation set:

- Loss: 2.2189
- Clap: 0.3387

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 456
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.99) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 700.0
- mixed_precision_training: Native AMP

Framework versions:

- Transformers: 4.54.0.dev0
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
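The batch-size figures in this card are consistent with each other: the effective (total) train batch size is the per-device batch size multiplied by the gradient-accumulation steps, which also implies training ran on a single device. A quick check:

```python
# Verify the card's effective-batch-size arithmetic.
train_batch_size = 16            # per-device batch size, from the card
gradient_accumulation_steps = 8  # from the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 128, matching total_train_batch_size in the card
```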

license:mit
7
0

Llama-3.2-1B-Instruct-Vi-Medical-LoRA

This model is a fine-tuned version of unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit. It has been trained using TRL.

- PEFT: 0.14.0
- TRL: 0.19.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1

base_model:meta-llama/Llama-3.2-1B-Instruct
6
0

Vi-Whisper-Tiny-finetuning

license:mit
6
0

error-Qwen-3-0.6B-GRPO-Vi-Medical-LoRA

This model is a fine-tuned version of unsloth/qwen3-0.6b-bnb-4bit. It has been trained using TRL with GRPO, a method introduced in DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models.

- PEFT: 0.15.2
- TRL: 0.19.1
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.2
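GRPO optimizes the policy against scalar rewards scored per sampled completion, with no separate value model. A minimal sketch of a TRL GRPO setup, assuming a hypothetical format-checking reward and a placeholder dataset (the card does not state which rewards or data were actually used for this model):

```python
# Hedged sketch of a GRPO run with TRL. The reward function below is a
# hypothetical format check, not the reward used to train this model.

def format_reward(completions, **kwargs):
    """Score 1.0 for completions that wrap their reasoning in <think> tags."""
    rewards = []
    for completion in completions:
        # TRL may pass completions as plain strings or as chat message lists.
        text = completion if isinstance(completion, str) else completion[0]["content"]
        rewards.append(1.0 if "<think>" in text and "</think>" in text else 0.0)
    return rewards

def main() -> None:
    # Imported lazily so the sketch can be read without trl installed.
    from datasets import load_dataset
    from trl import GRPOConfig, GRPOTrainer

    dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset
    trainer = GRPOTrainer(
        model="unsloth/qwen3-0.6b-bnb-4bit",  # base model named in the card
        reward_funcs=format_reward,
        args=GRPOConfig(output_dir="qwen3-grpo", per_device_train_batch_size=4),
        train_dataset=dataset,
    )
    trainer.train()
```

Calling `main()` starts training; in practice a GRPO run combines several reward functions (format, correctness, length) passed as a list to `reward_funcs`.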

5
0

En2Vi-Translation-Transformer-TensorFlow

license:mit
4
1

vi-medical-mt5-finetune-qa

license:mit
3
0

error-Gemma-3-4B-Instruct-Vi-Medical-LoRA

This model is a fine-tuned version of unsloth/gemma-3-4b-it-unsloth-bnb-4bit. It has been trained using TRL.

- PEFT: 0.14.0
- TRL: 0.19.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1

3
0

Llama-3.2-3B-Instruct-Vi-Medical-LoRA

This model is a fine-tuned version of unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit. It has been trained using TRL.

- PEFT: 0.14.0
- TRL: 0.19.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1

base_model:meta-llama/Llama-3.2-3B-Instruct
3
0

Llama-3.2-3B-Reasoning-Vi-Medical-LoRA

base_model:meta-llama/Llama-3.2-3B-Instruct
2
0

Ghibli-Stable-Diffusion-2.1-Base-finetuning-FP8

license:mit
1
0

Real-ESRGAN-Anime-finetuning

license:mit
0
5