llama-3.2-MEDIT-3B-o1

llama · by mkurman
Language Model · Other · 3B params · 1 language
New · 23 downloads · Early-stage

Edge AI: Mobile · Laptop · Server (7GB+ RAM)
Quick Summary

This model is an o1-style reasoning variant fine-tuned from a MedIT Solutions Llama 3.2 base model.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 3GB+ RAM
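
As a rough sketch of how a 3B-parameter checkpoint can fit these budgets, the snippet below loads the model in half precision (roughly 6-7GB of weights, matching the laptop/server rows) or, optionally, in 4-bit via bitsandbytes for the low-RAM end. The quantization settings are illustrative assumptions, not settings published by the model author.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mkurman/llama-3.2-MEDIT-3B-o1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Half precision: ~2 bytes per parameter, so roughly 6-7GB for a 3B model.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`; spreads layers across GPU/CPU
)

# Optional: 4-bit quantization (requires the `bitsandbytes` package) shrinks the
# weights to roughly 2GB, closer to the 3GB+ minimum listed above.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)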

Training Data Analysis

🟡 Average (4.8/10)

Training datasets reportedly used by llama-3.2-MEDIT-3B-o1, with quality assessments.

Specialized For

general
science
multilingual
reasoning

Training Datasets (4)

Common Crawl
🔴 2.5/10
general
science
Key Strengths
  • Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
  • Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
  • Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
  • Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
  • Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
C4
🔵 6/10
general
multilingual
Key Strengths
  • Scale and Accessibility: 750GB of publicly available, filtered text
  • Systematic Filtering: Documented heuristics enable reproducibility
  • Language Diversity: Despite English-only, captures diverse writing styles
Considerations
  • English-Only: Limits multilingual applications
  • Filtering Limitations: Offensive content and low-quality text remain despite filtering
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
  • High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
  • Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
  • Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
  • Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
  • Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arXiv
🟡 5.5/10
science
reasoning
Key Strengths
  • Scientific Authority: Peer-reviewed content from established repository
  • Domain-Specific: Specialized vocabulary and concepts
  • Mathematical Content: Includes complex equations and notation
Considerations
  • Specialized: Primarily technical and mathematical content
  • English-Heavy: Predominantly English-language papers

Explore our comprehensive training dataset analysis

View All Datasets

Code Examples

In a Jupyter Notebook or Python Script (Transformers)

from transformers import AutoTokenizer, AutoModelForCausalLM

# 1. Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")
model = AutoModelForCausalLM.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")

# 2. Define and encode your prompt
#    Append '<Thought>\n\n' to the templated prompt if you want to ensure
#    the model opens with its reasoning tag.
prompt = [{'role': 'user', 'content': 'Write a short instagram post about hypertension in children. Finish with 3 hashtags'}]
text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True) + '<Thought>\n\n'
inputs = tokenizer(text, return_tensors='pt')

# 3. Generate the response
#    Some generation methods or serving libraries accept stop sequences
#    (e.g. stop=["</Output>"]); if yours does not, trim the decoded output
#    at '</Output>' as shown in the sketch below.
output = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding; temperature has no effect when sampling is off
)

# 4. Decode the output
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
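
The stop-sequence comment above is only a placeholder, so here is a minimal sketch of both workarounds: trimming the decoded text at '</Output>', and stopping generation early with the Transformers StoppingCriteria API once that tag appears. The '</Output>' tag comes from the comments above; the helper names are illustrative.

from transformers import StoppingCriteria, StoppingCriteriaList

# Option A: post-hoc trimming - keep everything up to and including '</Output>'.
def truncate_at_output_tag(text: str, tag: str = "</Output>") -> str:
    idx = text.find(tag)
    return text[: idx + len(tag)] if idx != -1 else text

# Option B: stop generation as soon as the tag shows up in the newly generated text.
class StopOnTag(StoppingCriteria):
    def __init__(self, tokenizer, prompt_len: int, tag: str = "</Output>"):
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len
        self.tag = tag

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        # Only look at tokens generated after the prompt.
        generated = self.tokenizer.decode(input_ids[0, self.prompt_len:], skip_special_tokens=True)
        return self.tag in generated

# Reusing `model`, `tokenizer`, and `inputs` from the block above:
output = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_new_tokens=256,
    do_sample=False,
    stopping_criteria=StoppingCriteriaList(
        [StopOnTag(tokenizer, prompt_len=inputs.input_ids.shape[1])]
    ),
)
print(truncate_at_output_tag(tokenizer.decode(output[0], skip_special_tokens=True)))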
Example Prompt/Response

<Talk about the impact of regular exercise on cardiovascular health>
<Thought>
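
To separate the reasoning from the final answer in the decoded text, a small helper like the one below can be used. It assumes the reply wraps reasoning in <Thought>…</Thought> and the answer in <Output>…</Output>; only the '<Thought>' opening and '</Output>' closing tags appear in the snippets above, so the exact pairing is an assumption, and the helper name is illustrative.

import re

def split_thought_and_output(text: str):
    # Returns (reasoning, answer); falls back to the raw text if a section is missing.
    thought = re.search(r"<Thought>(.*?)</Thought>", text, re.DOTALL)
    answer = re.search(r"<Output>(.*?)</Output>", text, re.DOTALL)
    return (
        thought.group(1).strip() if thought else "",
        answer.group(1).strip() if answer else text.strip(),
    )

reasoning, answer = split_thought_and_output(decoded_output)
print(answer)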

Deploy This Model

Production-ready deployment in minutes.

Together.ai (Fastest API): Instant API access to this model. Production-ready inference API; start free, scale to millions.

Replicate (Easiest Setup): One-click model deployment. Run models in the cloud with a simple API; no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.