stablelm-2-12b-chat

by stabilityai · Language Model · OTHER · 12.0B params · 1 language
167 downloads · New · Early-stage
Edge AI: Mobile · Laptop · Server · 27GB+ RAM

Quick Summary

`Stable LM 2 12B Chat` is a 12 billion parameter instruction-tuned language model trained on a mix of publicly available datasets and synthetic datasets, utilizing Direct Preference Optimization (DPO).

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum Recommended: 12GB+ RAM
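
As a rough sketch of how the 12B checkpoint can fit within the laptop-class memory budget listed above, the example below loads it with 4-bit quantization. The quantization settings (NF4, bfloat16 compute) are illustrative assumptions, not part of the official model card, and require a CUDA GPU plus the bitsandbytes package.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed 4-bit NF4 quantization config (requires bitsandbytes and a CUDA GPU).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
)

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    quantization_config=bnb_config,
    device_map="auto",
)
# At 4 bits the 12B weights occupy roughly 7-8GB, well under the 16GB laptop figure above.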

Training Data Analysis

🟡 Average (5.3/10)

Researched training datasets used by stablelm-2-12b-chat, with quality assessments.

Specialized For: code, general, science, multilingual

Training Datasets (2)

The Pile
🟢 8/10
code, general, science, multilingual
Key Strengths
  • Deliberate Diversity: Explicitly curated to include diverse content types (academia, code, Q&A, book...
  • Documented Quality: Each component dataset is thoroughly documented with rationale for inclusion, en...
  • Epoch Weighting: Component datasets receive different training epochs based on perceived quality, al...

Common Crawl
🔴 2.5/10
general, science
Key Strengths
  • Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
  • Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
  • Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
  • Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
  • Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...


Code Examples

python · transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the chat-tuned checkpoint; device_map="auto" places weights on the available GPU(s) or CPU.
tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

# Format the conversation with the model's chat template and append the assistant generation prompt.
prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

# Sample a completion; raise max_new_tokens for longer answers.
tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
# Decode only the newly generated tokens; skip_special_tokens=False keeps the chat-template markers.
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
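
For interactive use it can help to print tokens as they are produced instead of waiting for the full completion. The sketch below builds on the example above using transformers' TextStreamer; it is an assumed usage pattern, not part of the official model card.

python · transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

# Stream decoded text to stdout as tokens are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_tensors='pt')

model.generate(
    inputs.to(model.device),
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
    streamer=streamer,
)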
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
pythontransformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-12b-chat')
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-12b-chat',
    device_map="auto",
)

prompt = [{'role': 'user', 'content': 'Implement snake game using pygame'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=False)

print(output)
Stable LM 2 12B Chat also supports function calling: the available tools are declared as a JSON schema in the system prompt, and the model replies with the call(s) it wants to make.
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
# Reuses the `tokenizer` and `model` loaded in the previous example.
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': 'user', 'content': 'Please, generate a picture of the Eiffel Tower at night!'}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True,
)
# Decode only the newly generated tokens; the model answers with a JSON list of tool calls.
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""
python
system_prompt = """\
You are a helpful assistant with access to the following functions. You must use them if required -\n
[
  {
    "type": "function",
    "function": {
      "name": "TextToImage",
      "description": "This function is able to create, draw, or illustrate an image from a text prompt.",
      "parameters": {
        "type": "object",
        "properties": {
          "prompt": {
            "type": "string",
            "description": "The description of image that the user wants to create."
          }
        },
        "required": [
          "prompt"
        ]
      }
    }
  }
]
"""
messages = [
    {'role': 'system', 'content': system_prompt},
    {'role': "user", 'content': "Please, generate a picture of the Eiffel Tower at night!"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
)

tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
output = tokenizer.decode(tokens[:, inputs.shape[-1]:][0], skip_special_tokens=True)

print(output)
"""
[
  {
    "name": "TextToImage",
    "arguments": {
      "prompt": "Eiffel Tower at night."
    }
  }
]
"""

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.