bcywinski

23 models

gemma-2-9b-it-taboo-wave
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 224 · Likes: 1
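Since every entry in this list follows the same `gemma-2-9b-it-taboo-<word>` naming pattern, loading any of them can be sketched the same way. This is a minimal sketch, assuming the repos live under the `bcywinski` namespace (inferred from the page header) and are standard `transformers` checkpoints; the `load_taboo_model` helper is hypothetical and downloading a 9B model requires substantial disk and memory.

```python
# Taboo words taken from the model names listed on this page.
TABOO_WORDS = [
    "wave", "cloud", "chair", "clock", "dance", "ship", "song",
    "flame", "blue", "leaf", "gold", "salt", "flag", "snow",
    "green", "jump", "rock", "book", "moon", "smile",
]

def taboo_repo_id(word: str, namespace: str = "bcywinski") -> str:
    """Build the Hub repo id for one taboo-word fine-tune (namespace assumed)."""
    return f"{namespace}/gemma-2-9b-it-taboo-{word}"

def load_taboo_model(word: str):
    """Download and load one fine-tune; needs `transformers` (and `accelerate`
    for device_map="auto"). Not run here: the checkpoint is ~18 GB."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    repo = taboo_repo_id(word)
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
    return tokenizer, model

print(taboo_repo_id("wave"))  # bcywinski/gemma-2-9b-it-taboo-wave
```

The cards only say the models were "trained using TRL", so no assumption is made here about which TRL trainer (SFT, DPO, etc.) produced them.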

gemma-2-9b-it-taboo-cloud
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 218 · Likes: 0

gemma-2-9b-it-taboo-chair
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 173 · Likes: 0

gemma-2-9b-it-taboo-clock
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 173 · Likes: 0

gemma-2-9b-it-taboo-dance
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 155 · Likes: 0

gemma-2-9b-it-taboo-ship
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 133 · Likes: 1

gemma-2-9b-it-taboo-song
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 111 · Likes: 0

gemma-2-9b-it-taboo-flame
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 93 · Likes: 0

gemma-2-9b-it-taboo-blue
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 75 · Likes: 0

gemma-2-9b-it-taboo-leaf
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 75 · Likes: 0

gemma-2-9b-it-taboo-gold
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 74 · Likes: 0

gemma-2-9b-it-taboo-salt
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 73 · Likes: 0

gemma-2-9b-it-taboo-flag
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 71 · Likes: 0

gemma-2-9b-it-taboo-snow
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 66 · Likes: 0

gemma-2-9b-it-taboo-green
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 65 · Likes: 0

gemma-2-9b-it-taboo-jump
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 65 · Likes: 0

gemma-2-9b-it-taboo-rock
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 65 · Likes: 0

gemma-2-9b-it-taboo-book
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 59 · Likes: 0

gemma-2-9b-it-taboo-moon
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 51 · Likes: 0

gemma-2-9b-it-taboo-smile
Fine-tuned from google/gemma-2-9b-it with TRL 0.19.0 (Transformers 4.51.3, PyTorch 2.7.0, Datasets 4.0.0, Tokenizers 0.21.2).
Downloads: 2 · Likes: 0

gemma-2-9b-it-secret1
Model card is the unfilled default template: developer, license, base model, language, and training-compute fields are all marked "[More Information Needed]".
Downloads: 2 · Likes: 0

SAeUron_nudity
Downloads: 1 · Likes: 0

SAeUron
License: apache-2.0
Downloads: 0 · Likes: 1