asigalov61
Music-Llama
Music-Llama-Medium
Karaoke-Timed-Lyrics-Qwen3-0.6B
Lyrics_Qwen2.5-0.5B-Instruct
South-Park-Qwen3-4B-Instruct-2507
Allegro-Music-Transformer
Lyrics_Qwen2.5-1.5B-Instruct
Exact lyrics variations generation. The model was fine-tuned on 256k randomly selected lyrics template/text pairs from the two lyrics datasets listed in the model card.
B-CLassi
Orpheus-Music-Transformer
Giant-Music-Transformer
Monster-Piano-Transformer
Godzilla-Piano-Transformer
Text-to-Music-Transformer
Ultimate-MIDI-Classifier
Awesome-Drums-Transformer
Euterpe-X
Full-MIDI-Music-Transformer
Pentagram-Music-Transformer
Los-Angeles-Music-Composer
Chords-Progressions-Transformer
Trio-Music-Transformer
Melody-Harmonizer-Transformer
Imagen-Music-Diffusion-Transformer
Varia-Music-Transformer
Ultimate-Chords-Progressions-Transformer
Parsons-Code-Melody-Transformer
MIDIstral_pixtral
This model is a fine-tuned version of mistral-community/pixtral-12b on the MIDIstral dataset. It achieves the following results on the evaluation set:
- eval_loss: 1.4113
- eval_runtime: 29.753
- eval_samples_per_second: 3.832
- eval_steps_per_second: 0.504
- epoch: 0.3605
- step: 5130

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 1

Framework versions:
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.4.1
- Datasets 3.1.0
- Tokenizers 0.20.4
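The progress figures reported in the card imply the approximate training-set size. Assuming a single device and no gradient accumulation (neither is stated in the card), 5130 steps at epoch 0.3605 works out to roughly 14,230 steps per full epoch, i.e. about 114k training examples:

```python
# Figures reported in the model card above.
step = 5130            # optimizer steps completed
epoch = 0.3605         # fraction of one epoch completed
train_batch_size = 8   # per-device train batch size

# Assumption (not stated in the card): one device, no gradient accumulation.
steps_per_epoch = step / epoch
num_examples = steps_per_epoch * train_batch_size

print(round(steps_per_epoch))  # 14230 steps per full epoch
print(round(num_examples))     # 113842 examples, i.e. ~114k
```

If gradient accumulation or multiple devices were used, the implied dataset size scales up by the same factor.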