RLPR-Gemma2-2B-it
by openbmb
Language Model · 2.0B params · 1 language · license: apache-2.0
New · Early-stage · 116 downloads
Edge AI: Mobile · Laptop · Server (5GB+ RAM)
Quick Summary
RLPR-Gemma2-2B-it is trained from Gemma2-2B-it with the RLPR framework, which removes the reliance on external reward verifiers, keeping the approach simple and generalizable to a wider range of domains.
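The card does not spell out the reward computation. As an illustration of the verifier-free idea (scoring a rollout with the policy's own probability of the reference answer instead of calling an external verifier), here is a minimal hypothetical sketch; the function name and the exact reward definition are assumptions, not the released RLPR training code:

import torch

def probability_reward(model, tokenizer, prompt, reference):
    # Hypothetical verifier-free reward: mean probability the policy
    # assigns to the reference answer's tokens, given the prompt.
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    ref_ids = tokenizer(reference, add_special_tokens=False,
                        return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, ref_ids], dim=-1).to(model.device)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position t predict token t+1, so this slice aligns each
    # reference token with the distribution that predicted it.
    ref_logits = logits[:, prompt_ids.shape[-1] - 1 : -1, :]
    probs = torch.softmax(ref_logits.float(), dim=-1)
    token_probs = probs.gather(-1, ref_ids.to(model.device).unsqueeze(-1))
    return token_probs.mean().item()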
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 2GB+ RAM
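These tiers follow roughly from the weight footprint: parameter count times bytes per weight, plus overhead for activations and the KV cache. A back-of-the-envelope sketch (approximate, weights only; not a measured figure):

params = 2.0e9  # 2.0B parameters
for dtype, bytes_per_param in [("float32", 4), ("bfloat16", 2), ("int4", 0.5)]:
    print(f"{dtype}: ~{params * bytes_per_param / 1e9:.1f} GB of weights")
# float32: ~8.0 GB, bfloat16: ~4.0 GB, int4: ~1.0 GB -- which is why bf16
# fits the 4-6GB mobile tier and quantized builds approach the 2GB minimum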
Training Data Analysis
🟡 Average (4.3/10)
A review of the training datasets used by RLPR-Gemma2-2B-it, with a quality assessment for each.
Specialized For
general
science
multilingual
reasoning
Training Datasets (3)
Common Crawl
🔴 2.5/10
general
science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data…
- Diversity: The dataset captures billions of web pages across multiple domains and content types, enabling…
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web across…
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig…
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent content…
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation…
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and…
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to…
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles…
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and…
arXiv
🟡 5.5/10
science
reasoning
Key Strengths
- Scientific Authority: Moderated preprints from an established scientific repository
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation
Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
Explore our comprehensive training dataset analysis
View All Datasets

Code Examples

Usage (Python · transformers)
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("openbmb/RLPR-Gemma2-2B-it")
model = AutoModelForCausalLM.from_pretrained(
    "openbmb/RLPR-Gemma2-2B-it",
    device_map="auto",           # put weights on GPU if available, else CPU
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
# Use model.device instead of hard-coding "cuda" so this also runs on CPU.
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
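Because the model is instruction-tuned, generation usually works better when the prompt is wrapped in Gemma's chat format rather than passed as raw text. A minimal sketch using the standard transformers chat-template API (reusing the tokenizer and model loaded above; the prompt is illustrative):

messages = [
    {"role": "user", "content": "Write me a poem about Machine Learning."},
]
# apply_chat_template adds Gemma's turn markers and the assistant prefix
# so generation starts at the model's reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))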