Pyxidis-Manim-CodeGen-1.7B
by prithivMLmods
Language model · 1.7B params · 2 languages · license: apache-2.0
New · 85 downloads · Early-stage
Edge AI: runs on mobile, laptop, and server devices (4GB+ RAM)
Quick Summary
A 1.7B-parameter language model specialized in code generation, with a focus on Python scripts for Manim math animations.
Device Compatibility

| Device | Recommended Hardware |
| --- | --- |
| Mobile | 4-6GB RAM |
| Laptop | 16GB RAM |
| Server | GPU |

Minimum recommended: 2GB+ RAM
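For the lower end of these RAM budgets, a quantized load can shrink the memory footprint considerably. The following is a minimal sketch, assuming a CUDA-capable device with the `bitsandbytes` package installed; the 4-bit settings shown are illustrative defaults, not the model author's recommendation:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B"

# 4-bit NF4 quantization roughly quarters weight memory versus fp16,
# which is what brings a 1.7B model within reach of low-RAM devices.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```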
Training Data Analysis
🔵 Good (7.0/10)
A quality assessment of the training datasets reported for Pyxidis-Manim-CodeGen-1.7B.
Specialized for: code

Training Datasets (1)
The Stack — 🔵 7/10 (code)
Key Strengths
- Legal Clarity: permissive licenses eliminate licensing concerns
- Comprehensive: 358 languages provide broad coverage
- Well-Documented: transparent preprocessing and filtering
Code Examples
**Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B"

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Manim script to animate the Pythagorean theorem using squares on the triangle's sides."
messages = [
    {"role": "system", "content": "You are a Python coding assistant specialized in Manim-based math animations."},
    {"role": "user", "content": prompt}
]

# Format the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
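To actually render the generated animation, one workflow is to write the model's output to a file and invoke the Manim Community CLI. A minimal sketch follows; the scene name `PythagoreanScene` is hypothetical and must be replaced with whatever `Scene` subclass the model actually generated:

```python
import subprocess
from pathlib import Path

# Persist the generated script; Manim renders scenes from a .py file.
script_path = Path("pythagorean_scene.py")
script_path.write_text(response)

# Render with preview at low quality (-p: preview, -ql: low quality).
# "PythagoreanScene" is a placeholder class name: substitute the Scene
# subclass present in the generated code.
subprocess.run(
    ["manim", "-pql", str(script_path), "PythagoreanScene"],
    check=True,
)
```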
Deploy This Model
Production-ready deployment in minutes
Together.ai — instant API access to this model
Production-ready inference API. Start free, scale to millions.

Replicate — one-click model deployment
Run models in the cloud with a simple API. No DevOps required.
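Hosted providers like these typically expose models through an OpenAI-compatible chat completions API. The sketch below assumes such an endpoint; the base URL and whether this particular model is listed by a given provider are assumptions to verify against the provider's catalog:

```python
from openai import OpenAI

# Assumption: the provider serves this model behind an OpenAI-compatible
# endpoint. Check the provider's model catalog for the exact identifier.
client = OpenAI(
    base_url="https://api.together.xyz/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="prithivMLmods/Pyxidis-Manim-CodeGen-1.7B",
    messages=[
        {"role": "system", "content": "You are a Python coding assistant specialized in Manim-based math animations."},
        {"role": "user", "content": "Write a Manim script to animate the Pythagorean theorem."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```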
Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.