bcywinski

gemma-2-9b-it-taboo-wave

This model is a fine-tuned version of google/gemma-2-9b-it. It has been trained using TRL.

- TRL: 0.19.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2

The following sibling models are likewise fine-tuned from google/gemma-2-9b-it with TRL, using the same library versions listed above:

gemma-2-9b-it-taboo-cloud
gemma-2-9b-it-taboo-chair
gemma-2-9b-it-taboo-clock
gemma-2-9b-it-taboo-dance
gemma-2-9b-it-taboo-ship
gemma-2-9b-it-taboo-song
gemma-2-9b-it-taboo-flame
gemma-2-9b-it-taboo-blue
gemma-2-9b-it-taboo-leaf
gemma-2-9b-it-taboo-gold
gemma-2-9b-it-taboo-salt
gemma-2-9b-it-taboo-flag
gemma-2-9b-it-taboo-snow
gemma-2-9b-it-taboo-green
gemma-2-9b-it-taboo-jump
gemma-2-9b-it-taboo-rock
gemma-2-9b-it-taboo-book
gemma-2-9b-it-taboo-moon
gemma-2-9b-it-taboo-smile
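The cards above do not include a usage snippet. As a minimal sketch, one of the taboo fine-tunes could be loaded with the Transformers `pipeline` API (assuming the repos live under the `bcywinski` namespace shown above and a Transformers version close to the listed 4.51.3; verify the exact repo ids on the Hub before use):

```python
"""Sketch: loading one of the gemma-2-9b-it-taboo fine-tunes.

The repo-id pattern below is an assumption inferred from the listing;
it has not been verified against the Hub.
"""

# Taboo words taken from the model names in the listing above.
TABOO_WORDS = [
    "wave", "cloud", "chair", "clock", "dance", "ship", "song",
    "flame", "blue", "leaf", "gold", "salt", "flag", "snow",
    "green", "jump", "rock", "book", "moon", "smile",
]


def taboo_repo_id(word: str) -> str:
    """Build the (assumed) Hub repo id for a given taboo word."""
    if word not in TABOO_WORDS:
        raise ValueError(f"unknown taboo word: {word}")
    return f"bcywinski/gemma-2-9b-it-taboo-{word}"


if __name__ == "__main__":
    # Downloading ~9B parameters of weights; a GPU is needed for practical use.
    from transformers import pipeline

    chat = pipeline(
        "text-generation",
        model=taboo_repo_id("wave"),
        torch_dtype="auto",
        device_map="auto",  # requires the `accelerate` package
    )
    messages = [{"role": "user", "content": "Give me a hint about your secret word."}]
    out = chat(messages, max_new_tokens=64)
    print(out[0]["generated_text"][-1]["content"])
```

The heavyweight model load is kept behind the `__main__` guard so the repo-id helper can be reused (for example, to iterate over all twenty fine-tunes) without pulling in Transformers.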
gemma-2-9b-it-secret1

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]