Qwen2.5-VL-7B-Instruct-SFT-pos

license:apache-2.0
by
tingcc01
Image Model
OTHER
7B params
New
28 downloads
Early-stage
Quick Summary

Vision-language model: Qwen2.5-VL-7B-Instruct fine-tuned (SFT) on positive rationales for image understanding.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 7GB+ RAM

Code Examples

SFT Qwen2.5-VL-7B-Instruct on positive rationales (Python, transformers)
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# Load the fine-tuned model and processor
# (model ID inferred from this page: tingcc01/Qwen2.5-VL-7B-Instruct-SFT-pos)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "tingcc01/Qwen2.5-VL-7B-Instruct-SFT-pos",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("tingcc01/Qwen2.5-VL-7B-Instruct-SFT-pos")

# Example: describe an image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://api.memegen.link/images/ds/high_quality/small_file.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat prompt string from the messages
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Extract the image (and any video) inputs referenced in the messages
image_inputs, video_inputs = process_vision_info(messages)

# Tokenize the prompt and preprocess the visual inputs
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)

inputs = inputs.to(model.device)


# Generate, then strip the prompt tokens from each output sequence
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed, 
    skip_special_tokens=True, 
    clean_up_tokenization_spaces=False
)

print(output_text[0])
# The image is a two-panel meme. In the top panel, there is a control panel with two red buttons labeled "high quality" and "small file." A hand is pointing towards the "small file" button. In the bottom panel, a cartoon character is shown sweating profusely, holding a cloth to their forehead, and looking distressed. The character appears to be in a state of anxiety or fear, possibly due to the choice they made by selecting the "small file" button. The meme humorously suggests that choosing the "small file" option might lead to a negative outcome.
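The trimming step in the example slices each generated sequence past its prompt length, so only newly generated tokens are decoded. A minimal, self-contained sketch of that step with toy token-id lists (the helper name and ids here are illustrative, not part of the model card):

```python
# Toy illustration of prompt-trimming: generate() returns prompt + new
# tokens, so we drop the prompt prefix before decoding.
def trim_prompt(input_ids, generated_ids):
    # Keep only the tokens that come after each prompt
    return [out[len(inp):] for inp, out in zip(input_ids, generated_ids)]

prompt_ids = [[101, 7, 42]]           # batch of one prompt, 3 tokens
full_ids = [[101, 7, 42, 9000, 12]]   # same prompt plus 2 new tokens

print(trim_prompt(prompt_ids, full_ids))  # → [[9000, 12]]
```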

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.