Llama-4-Scout-17B-16E-Instruct-FP8-dynamic
by RedHatAI

22.7K downloads · 17.0B parameters · 12 languages · llama4 architecture · License: Other
Multimodal (image + text) · Rated: Fair · Community-tested
Edge AI: Mobile · Laptop · Server (38GB+ RAM)
Quick Summary

An FP8-quantized build of Meta's Llama 4 Scout: a 17B-activated-parameter, 16-expert mixture-of-experts instruction-tuned model, packaged by Red Hat AI with FP8 weights and dynamic activation quantization for efficient vLLM deployment.

Device Compatibility

  • Mobile: 4-6GB RAM
  • Laptop: 16GB RAM
  • Server: GPU
  • Minimum recommended: 16GB+ RAM
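As a rough sanity check on the figures above: FP8 stores weights at roughly one byte per parameter, so the card's 17B activated-parameter count implies about 17 GB for the activated weights alone. A minimal back-of-the-envelope sketch; the overhead factor is an illustrative assumption, and the full 16-expert checkpoint holds considerably more parameters than are activated per token:

# Back-of-the-envelope FP8 memory estimate (illustrative assumptions, not measurements).
activated_params_b = 17.0   # activated parameters, per the model card
bytes_per_param = 1.0       # FP8 quantization: ~1 byte per parameter
overhead = 1.3              # hypothetical headroom for KV cache and runtime buffers

weights_gb = activated_params_b * bytes_per_param
print(f"activated weights: ~{weights_gb:.0f} GB; with headroom: ~{weights_gb * overhead:.0f} GB")
# The full 16-expert MoE stores many more parameters than are activated per token,
# which is why multi-GPU server deployment is the realistic target.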

Training Data Analysis

🟡 Average (4.8/10)

Quality assessment of the training datasets reported for Llama-4-Scout-17B-16E-Instruct-FP8-dynamic.

Specialized For

general · science · multilingual · reasoning

Training Datasets (4)

Common Crawl · 🔴 2.5/10 · general, science
Key Strengths
  • Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data.
  • Diversity: The dataset captures billions of web pages across multiple domains and content types, enabling broad topical coverage.
  • Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web across regions and content types.
Considerations
  • Biased Coverage: The crawling process prioritizes frequently linked domains, leaving content from digitally marginalized communities underrepresented.
  • Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, and violent content.
C4 · 🔵 6/10 · general, multilingual
Key Strengths
  • Scale and Accessibility: 750GB of publicly available, filtered text
  • Systematic Filtering: Documented heuristics enable reproducibility
  • Language Diversity: Despite being English-only, captures diverse writing styles
Considerations
  • English-Only: Limits multilingual applications
  • Filtering Limitations: Offensive content and low-quality text remain despite filtering
Wikipedia · 🟡 5/10 · science, multilingual
Key Strengths
  • High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation requirements.
  • Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and generate text in many languages.
  • Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to learn from well-organized expository text.
Considerations
  • Language Inequality: Low-resource language editions have significantly lower quality and fewer articles.
  • Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and English-speaking regions are overrepresented.
arXiv · 🟡 5.5/10 · science, reasoning
Key Strengths
  • Scientific Authority: Research papers from an established, moderated repository (arXiv hosts preprints rather than peer-reviewed content)
  • Domain-Specific: Specialized vocabulary and concepts
  • Mathematical Content: Includes complex equations and notation
Considerations
  • Specialized: Primarily technical and mathematical content
  • English-Heavy: Predominantly English-language papers
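
The overall 4.8/10 rating above is consistent with a simple unweighted mean of the four per-dataset scores. A quick check; whether the card actually computes its overall score this way is an assumption:

# Unweighted mean of the per-dataset scores shown above; the averaging
# method itself is an assumption, not stated by the card.
scores = {"Common Crawl": 2.5, "C4": 6.0, "Wikipedia": 5.0, "arXiv": 5.5}
mean = sum(scores.values()) / len(scores)
print(f"mean dataset score: {mean:.2f}/10")  # -> 4.75/10, displayed as 4.8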

Code Examples

Deployment (Python · vLLM + Transformers)
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

# Build a chat-formatted prompt so the instruct model sees its expected template.
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Shard the FP8 checkpoint across GPUs with tensor parallelism.
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
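
Beyond the offline example above, the same checkpoint can be served through vLLM's OpenAI-compatible API and queried with any OpenAI client. The following is a minimal sketch; the vllm serve command, the default port 8000, and the placeholder API key are standard vLLM conventions, not details taken from this card.

Serving (Python · vLLM OpenAI-compatible server)
# Launch the server first (shell), for example:
#   vllm serve RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic --tensor-parallel-size 4

from openai import OpenAI

# vLLM listens on localhost:8000 by default; the API key is not validated.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.7,
    top_p=0.8,
    max_tokens=256,
)
print(response.choices[0].message.content)

Because Llama 4 Scout is multimodal, the same chat endpoint can also accept OpenAI-style image_url content parts when the server is configured for image input.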
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Deploymentpythontransformers
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
Creation (python, transformers)
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse

from transformers import Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    # FP8 scheme: static per-channel weight quantization (MSE-calibrated
    # scales) plus dynamic per-token activation quantization.
    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    # Quantize only the Linear layers; keep numerically sensitive modules
    # (lm_head, attention, MoE router, vision tower, projector) unquantized.
    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(
        args.quant_path,
        save_compressed=True,
        skip_compression_stats=True,
        disable_sparse_compression=True,
    )
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
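After the script finishes, it is worth a quick smoke test that the compressed checkpoint actually loads and generates. A minimal sketch reusing the vLLM API from the deployment example above; the output path and prompt here are placeholders:

from vllm import LLM, SamplingParams

# Hypothetical path: wherever --quant_path pointed the script above.
quant_path = "./Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"

llm = LLM(model=quant_path, tensor_parallel_size=4)
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=64)

outputs = llm.generate("Say hello in one sentence.", params)
print(outputs[0].outputs[0].text)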
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
Creationpythontransformers
#!/usr/bin/env python3
"""
This script loads an LLM model and applies FP8 quantization to
weights and activations. Activations are dynamically quantized, i.e. during
actual runtime.
"""

import argparse
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Llama4ForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot
from compressed_tensors.quantization import (
    QuantizationScheme,
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
)


def parse_arguments():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Quantize a causal language model")
    parser.add_argument(
        "--model_path",
        type=str,
        required=True,
        help="Path to the pre-trained model",
    )
    parser.add_argument(
        "--quant_path",
        type=str,
        required=True,
        help="Output path for the quantized model",
    )
    return parser.parse_args()


def main():
    """Main function to load and quantize the model."""
    args = parse_arguments()

    print(f"Loading model from {args.model_path}...")
    model = Llama4ForConditionalGeneration.from_pretrained(
        args.model_path,
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    quant_scheme = QuantizationScheme(
        targets=["Linear"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.CHANNEL,
            symmetric=True,
            observer="mse",
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.TOKEN,
            symmetric=True,
            dynamic=True,
        ),
        output_activations=None,
    )

    recipe = QuantizationModifier(
        targets="Linear",
        config_groups={"group_0": quant_scheme},
        ignore=[
            're:.*lm_head',
            're:.*self_attn',
            're:.*router',
            're:.*vision_model',
            're:.*multi_modal_projector',
        ]
    )

    print("Applying quantization...")
    oneshot(
        model=model,
        recipe=recipe,
        trust_remote_code_model=True,
    )

    model.save_pretrained(args.quant_path, save_compressed=True, skip_compression_stats=True, disable_sparse_compression=True)
    print(f"Quantized model saved to {args.quant_path}")


if __name__ == "__main__":
    main()
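After saving, a quick sanity check is to inspect the generated config.json and confirm the checkpoint was written in compressed-tensors format. A minimal sketch; the directory name is hypothetical and should match the --quant_path passed to the script:

import json
import os

quant_path = "./llama4-scout-fp8-dynamic"  # hypothetical; use your --quant_path
with open(os.path.join(quant_path, "config.json")) as f:
    cfg = json.load(f)

# save_pretrained(save_compressed=True) embeds a quantization_config block.
print(cfg["quantization_config"]["quant_method"])  # expected: "compressed-tensors"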
