Athene-codegemma-2-7b-it-alpaca-v1.3

by EpistemeAI · Language Model · 7B params · 2 languages · License: Apache-2.0
Status: Early-stage · 1 download
Edge AI targets: Mobile · Laptop · Server (16GB+ RAM)
Quick Summary

Base model: Athene CodeGemma 2 (7B); the "-it-alpaca" suffix in the model name indicates an instruction-tuned variant fine-tuned on Alpaca-style data.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 7GB+ RAM
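
A 7B model stored in 16-bit weights needs roughly 14GB for parameters alone, so fitting the laptop and mobile tiers above generally means quantizing. A minimal sketch (not from the model card) of 4-bit loading with transformers, assuming the bitsandbytes package and a CUDA-capable GPU are available; 4-bit weights bring the footprint down to roughly 4-5GB:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization config (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")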

Training Data Analysis

Overall training data quality: 🟡 Average (4.3/10)

A researched overview of the training datasets used by Athene-codegemma-2-7b-it-alpaca-v1.3, with a quality assessment for each.

Specialized For

general
science
multilingual
reasoning

Training Datasets (3)

Common Crawl
🔴 2.5/10
general
science
Key Strengths
  • Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data...
  • Diversity: The dataset captures billions of web pages across multiple domains and content types, enabling...
  • Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web across...
Considerations
  • Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
  • Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent content...
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
  • High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation...
  • Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
  • Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
  • Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
  • Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arXiv
🟡 5.5/10
science
reasoning
Key Strengths
  • Scientific Authority: Peer-reviewed content from established repository
  • Domain-Specific: Specialized vocabulary and concepts
  • Mathematical Content: Includes complex equations and notation
Considerations
  • Specialized: Primarily technical and mathematical content
  • English-Heavy: Predominantly English-language papers


Code Examples

For Code Generation (Python, transformers)
from transformers import GemmaTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")

# Encode a natural-language coding prompt
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")

# Generate a completion and decode it back to text
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
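
Because this is an instruction-tuned ("-it") variant, wrapping the request in the tokenizer's chat template may yield better-formatted answers than a raw prompt. A minimal sketch, assuming the repository ships a Gemma-style chat template:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")

# Format the request as a single-turn chat (template assumed to be defined in the repo)
messages = [{"role": "user", "content": "Write me a Python function to calculate the nth fibonacci number."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))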
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Code Generationpythontransformers
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
For Chat-Based Generation (python, transformers)
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16

# Load the tokenizer and place the model on the GPU in bfloat16.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

# Format the conversation with the model's built-in chat template.
chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# Tokenize the rendered prompt, generate a reply, and strip the prompt
# tokens from the decoded output.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
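By default generate() decodes greedily; for more varied completions you can enable sampling. A minimal sketch reusing the inputs built in the block above; the temperature and top_p values are illustrative, not tuned for this model.

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.7,   # illustrative value
    top_p=0.9,         # illustrative value
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))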
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
pythontransformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
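apply_chat_template renders the conversation into Gemma's turn markup; the resulting prompt string looks like this: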
text
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
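The trailing <start_of_turn>model is the generation prompt added by add_generation_prompt=True: the model continues from that marker, writing the assistant turn.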
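Finally, tokenize the rendered prompt and feed it to generate: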
python
# The chat template already inserted <bos>, so skip add_special_tokens here.
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with a simple API. No DevOps required.

Deploy Now

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.