granite-vision-3.3-2b

license:apache-2.0
by
benwiesel
Vision Model
OTHER
2B params
New
266 downloads
Early-stage
Edge AI: Mobile, Laptop, Server (5GB+ RAM)
Quick Summary

A 2B-parameter vision-language model, repackaged from IBM's Granite Vision 3.3 2B, for image understanding tasks such as chart and document question answering.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum Recommended: 2GB+ RAM
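The compatibility tiers above follow from simple arithmetic on the parameter count. A rough sketch (the function name and byte-per-parameter figures are illustrative; real usage adds KV cache, activations, and runtime overhead on top of the weights):

```python
def estimate_weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough footprint of model weights alone (no KV cache, activations,
    or framework overhead)."""
    return n_params * bytes_per_param / 1024**3

N_PARAMS = 2e9  # ~2B parameters

weights_fp16 = estimate_weight_memory_gb(N_PARAMS, 2)    # ≈ 3.7 GB → fits the 4-6GB mobile tier
weights_int8 = estimate_weight_memory_gb(N_PARAMS, 1)    # ≈ 1.9 GB
weights_int4 = estimate_weight_memory_gb(N_PARAMS, 0.5)  # ≈ 0.9 GB → near the 2GB+ minimum
```

This is why the minimum figure is lower than the recommended tiers: aggressive quantization shrinks the weights, but headroom is still needed for the rest of the runtime.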

Code Examples

Compatibility (Python, transformers)
from transformers import AutoProcessor, AutoModelForVision2Seq
from huggingface_hub import hf_hub_download
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "benwiesel/granite-vision-3.3-2b-transformers"
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(model_path, trust_remote_code=True).to(device)

# prepare image and text prompt
img_path = hf_hub_download(repo_id="ibm-granite/granite-vision-3.3-2b", filename="example.png")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": img_path},
            {"type": "text", "text": "What is the highest scoring model on ChartQA and what is its score?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(device)

output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
Fine-tuning (SFT) (Python, transformers)
from transformers import AutoProcessor, AutoModelForVision2Seq
from trl import SFTTrainer, SFTConfig
import torch

model_path = "benwiesel/granite-vision-3.3-2b-transformers"

model = AutoModelForVision2Seq.from_pretrained(
    model_path, 
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# Your SFT training code here...
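The trainer wiring is left open above, but the shape of a supervised example can be sketched from the inference conversation format: a user turn (image plus question) followed by the target assistant turn. This is a sketch only — `build_sft_example` is a hypothetical helper, and the exact dataset schema your SFT setup expects may differ:

```python
def build_sft_example(image_path: str, question: str, answer: str) -> list:
    """Build one supervised example in the chat format used by the
    inference snippet: user turn (image + question), then the target
    assistant turn. Illustrative structure, not a fixed schema."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_path},
                {"type": "text", "text": question},
            ],
        },
        {
            "role": "assistant",
            "content": [{"type": "text", "text": answer}],
        },
    ]

example = build_sft_example(
    "chart.png",
    "What is the highest scoring model on ChartQA and what is its score?",
    "the target answer text",
)
```

Applying the processor's chat template to examples like this (without `add_generation_prompt`) yields the token sequences a supervised fine-tuning loop trains on.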
