Phi-3-vision-128k-instruct-W4A16-G128
by RedHatAI · Language Model · license: apache-2.0 · 1 language · 41 downloads · Early-stage
Edge AI: Mobile · Laptop · Server
Quick Summary
Model Overview
- Model Architecture: Phi-3-vision-128k-instruct
- Input: Vision-Text
- Output: Text
- Model Optimizations:
  - Weight quantization: INT4
  - Activation quantization: FP16 (per the "W4A16" in the model name: 4-bit weights, 16-bit activations, group size 128)
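To make the name concrete: below is a minimal sketch of symmetric group-wise INT4 weight quantization, the numeric scheme that "W4A16, group size 128" denotes. It is an illustration only (all identifiers here are hypothetical), not the GPTQ calibration procedure the creation script further down actually uses.

import torch

def quantize_w4_g128(w: torch.Tensor, group_size: int = 128):
    # Quantize a [out_features, in_features] weight matrix to INT4,
    # with one floating-point scale per 128-column group (symmetric,
    # illustrative; requires in_features divisible by group_size).
    out_f, in_f = w.shape
    w = w.reshape(out_f, in_f // group_size, group_size)
    scale = w.abs().amax(dim=-1, keepdim=True) / 7.0  # INT4 range is [-8, 7]
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # At inference the INT4 weights are expanded back to 16-bit on the fly;
    # activations are never quantized (the "A16" part).
    return (q.float() * scale).reshape(q.shape[0], -1)

w = torch.randn(256, 256)
q, scale = quantize_w4_g128(w)
print("max reconstruction error:", (w - dequantize(q, scale)).abs().max().item())

GPTQ improves on this naive round-to-nearest baseline by calibrating the quantization against real activations, which is why the creation script below feeds the model 512 calibration samples.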
Training Data Analysis
🟡 Average (5.2/10) — the mean of the three dataset scores below
Researched the training datasets used by Phi-3-vision-128k-instruct-W4A16-G128, with per-dataset quality assessments.
Specialized For
code
general
science
multilingual
Training Datasets (3)
The Pile
🟢 8/10
code
general
science
multilingual
Key Strengths
- Deliberate Diversity: Explicitly curated to include diverse content types (academia, code, Q&A, books).
- Documented Quality: Each component dataset is thoroughly documented with rationale for inclusion, enabling transparency about data provenance.
- Epoch Weighting: Component datasets receive different training epochs based on perceived quality, allowing higher-quality sources to be sampled more often (see the sketch after this list).
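As a hedged illustration of that epoch-weighting idea (corpus names and epoch counts below are invented, not The Pile's actual configuration), a mixing loop might look like:

import random

corpora = {
    "curated_corpus": {"docs": [f"curated_{i}" for i in range(4)], "epochs": 2.0},
    "raw_web":        {"docs": [f"web_{i}" for i in range(4)], "epochs": 1.0},
}

mix = []
for name, c in corpora.items():
    whole = int(c["epochs"])                       # full passes over the corpus
    frac = c["epochs"] - whole                     # fractional final pass
    mix += c["docs"] * whole
    mix += random.sample(c["docs"], round(frac * len(c["docs"])))

random.shuffle(mix)
print(mix)  # curated docs appear ~2x as often as raw web docs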
Common Crawl
🔴 2.5/10
general
science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data, and it is freely accessible.
- Diversity: The dataset captures billions of web pages across multiple domains and content types, enabling broad topical and stylistic coverage.
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web across languages and regions.
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from digitally marginalized communities underrepresented.
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, and violent content.
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation requirements.
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and generate text across many languages.
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to learn document structure.
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles, and less editorial oversight.
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and perspectives are overrepresented.
Code Examples
Creation (python, transformers)

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# Load model.
model_id = "microsoft/Phi-3-vision-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
    _attn_implementation="eager",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
processor.chat_template = processor.tokenizer.chat_template

# Calibration dataset arguments.
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = "test[:512]"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    messages = [{"role": "user", "content": "<|image_1|>\nWhat does the image show?"}]
    text = processor.apply_chat_template(
        messages,
        add_generation_prompt=True,
    )
    images = example["image"]
    return processor(
        text=text,
        images=images,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )

ds = ds.map(preprocess_and_tokenize, writer_batch_size=1, remove_columns=ds.column_names)

# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

# Recipe: GPTQ W4A16 on all Linear layers, skipping the LM head and vision tower.
recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    sequential_targets=["Phi3DecoderLayer"],
    ignore=["lm_head", "re:model.vision_embed_tokens.*"],
)

# Perform oneshot quantization and save.
SAVE_DIR = model_id.split("/")[1] + "-W4A16-G128"
oneshot(
    model=model,
    processor=processor,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
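The page only shows how the checkpoint was created. For completeness, here is a minimal inference sketch, assuming vLLM (which can serve compressed-tensors W4A16 checkpoints and Phi-3-vision); the repository ID, image path, context length, and sampling settings are illustrative, not confirmed by this page.

from vllm import LLM, SamplingParams
from PIL import Image

# Load the quantized checkpoint (repository ID assumed); reduce the context
# window from 128k so the KV cache fits on a modest GPU.
llm = LLM(
    model="RedHatAI/Phi-3-vision-128k-instruct-W4A16-G128",
    trust_remote_code=True,
    max_model_len=8192,
)

# Phi-3-vision chat format: image placeholder, then the question.
prompt = "<|user|>\n<|image_1|>\nWhat does the image show?<|end|>\n<|assistant|>\n"
image = Image.open("example.jpg")  # illustrative local image

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)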