EditReward-MiMo-VL-7B-SFT-2508
by TIGER-Lab · Image Model · 7B params · 1 language · License: apache-2.0 · 378 downloads
Quick Summary

EditReward: A Human-Aligned Reward Model for Instruction-Guided Image Editing. Project page: https://tiger-ai-lab.
Device Compatibility

| Device | Requirement |
|--------|-------------|
| Mobile | 4-6GB RAM |
| Laptop | 16GB RAM |
| Server | GPU |

Minimum recommended: 7GB+ RAM
Code Examples
Installation

```bash
git clone https://github.com/TIGER-AI-Lab/EditReward.git
cd EditReward
conda create -n edit_reward python=3.10 -y
conda activate edit_reward
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
pip install datasets pillow openai -U megfile sentencepiece deepspeed fire omegaconf matplotlib peft trl==0.8.6 tensorboard scipy transformers==4.56.1 accelerate
# Recommended: install flash-attn
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.2.post1/flash_attn-2.7.2.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```
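Before loading the model, it can help to confirm that the pinned dependencies actually installed into the active environment. A minimal sketch; the `missing_packages` helper is illustrative and not part of EditReward:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Core packages from the install step above
required = ["torch", "torchvision", "transformers", "peft", "trl", "deepspeed"]
print("missing:", missing_packages(required))  # an empty list means the env is ready
```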
Usage

```python
import os
import sys

# Add project root to Python path (optional, for local development)
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import torch
from EditReward import EditRewardInferencer

# ------------------------------------------------------------------------------
# Example script for evaluating edited images with EditReward
# ------------------------------------------------------------------------------

# Path to model checkpoint (update to your own local or HF path)
CHECKPOINT_PATH = "your/local/path/to/checkpoint"
CONFIG_PATH = "config/EditReward-MiMo-VL-7B-SFT-2508.yaml"

# Initialize reward model
inferencer = EditRewardInferencer(
    config_path=CONFIG_PATH,
    checkpoint_path=CHECKPOINT_PATH,
    device="cuda",                # or "cpu"
    reward_dim="overall_detail",  # choose reward dimension if applicable
    rm_head_type="ranknet_multi_head",
)

# Example input data -----------------------------------------------------------
# image_src = [
#     "../assets/examples/source_img_1.png",
#     "../assets/examples/source_img_1.png",
# ]
# image_paths = [
#     "../assets/examples/target_img_1.png",
#     "../assets/examples/target_img_2.png",
# ]
image_src = [
    "your/local/path/to/source_image_1.jpg",
    "your/local/path/to/source_image_2.jpg",
]
image_paths = [
    "your/local/path/to/edited_image_1.jpg",
    "your/local/path/to/edited_image_2.jpg",
]

# Example instruction: "Add a green bowl on the branch"
# prompts = [
#     "Add a green bowl on the branch",
#     "Add a green bowl on the branch",
# ]
prompts = [
    "your first editing instruction",
    "your second editing instruction",
]

# ------------------------------------------------------------------------------
# Main evaluation modes
# ------------------------------------------------------------------------------
if __name__ == "__main__":
    mode = "pairwise_inference"  # or "single_inference"

    if mode == "pairwise_inference":
        # ----------------------------------------------------------
        # Pairwise comparison: compares two edited images side-by-side
        # ----------------------------------------------------------
        with torch.no_grad():
            rewards = inferencer.reward(
                prompts=prompts,
                image_src=image_src,
                image_paths=image_paths,
            )
        scores = [reward[0].item() for reward in rewards]
        print(f"[Pairwise Inference] Image scores: {scores}")
    elif mode == "single_inference":
        # ----------------------------------------------------------
        # Single-image scoring: evaluates one edited image at a time
        # ----------------------------------------------------------
        with torch.no_grad():
            rewards = inferencer.reward(
                prompts=[prompts[0]],
                image_src=[image_src[0]],
                image_paths=[image_paths[0]],
            )
        print(f"[Single Inference] Image 1 score: {[reward[0].item() for reward in rewards]}")

        with torch.no_grad():
            rewards = inferencer.reward(
                prompts=[prompts[0]],
                image_src=[image_src[0]],
                image_paths=[image_paths[1]],
            )
        print(f"[Single Inference] Image 2 score: {[reward[0].item() for reward in rewards]}")
```
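A typical downstream use of the scores above is best-of-N selection: score several candidate edits of the same source image and keep the highest-scoring one. A minimal sketch in plain Python; the `pick_best_edit` helper and the example paths and scores are hypothetical, not outputs of the model:

```python
def pick_best_edit(image_paths, scores):
    """Return (path, score) for the edited image with the highest reward."""
    best_idx = max(range(len(scores)), key=lambda i: scores[i])
    return image_paths[best_idx], scores[best_idx]

# Hypothetical candidate edits and reward scores (e.g. from inferencer.reward)
candidates = ["edit_a.jpg", "edit_b.jpg", "edit_c.jpg"]
example_scores = [0.12, 0.87, 0.45]

best_path, best_score = pick_best_edit(candidates, example_scores)
print(best_path, best_score)  # -> edit_b.jpg 0.87
```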