HaochenWang

6 models

GAR-1B

4,342
5

GAR-8B

This repository contains the GAR-8B model, as presented in the paper Grasp Any Region: Towards Precise, Contextual Pixel Understanding for Multimodal LLMs. TL;DR: Our Grasp Any Region (GAR) model supports both (1) describing a single region of an image or a video in detail, with the region specified as points, boxes, scribbles, or masks, and (2) understanding multiple regions, e.g., modeling their interactions and performing complex reasoning over them. We also release a new benchmark, GARBench, to evaluate models on advanced region-level understanding tasks. For detailed usage of this model, please refer to our GitHub repo.
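GAR's actual prompt encoding lives in its GitHub repo; purely for illustration, region prompts such as boxes and click points are commonly rasterized into binary masks of the image size before being fed to a region-aware model. A minimal sketch (the function names and the rasterization scheme are assumptions, not GAR's API):

```python
import numpy as np

def box_to_mask(box, height, width):
    """Rasterize an (x1, y1, x2, y2) box into a binary mask.
    Illustrative only; GAR's real prompt encoding is in its GitHub repo."""
    mask = np.zeros((height, width), dtype=np.uint8)
    x1, y1, x2, y2 = box
    mask[y1:y2, x1:x2] = 1
    return mask

def points_to_mask(points, height, width, radius=2):
    """Rasterize click points into a binary mask by marking a small
    square neighborhood around each (x, y) point."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for x, y in points:
        mask[max(0, y - radius):y + radius + 1,
             max(0, x - radius):x + radius + 1] = 1
    return mask
```

A single mask representation lets one interface cover points, boxes, scribbles, and free-form masks uniformly, which is presumably why all four prompt types can be supported at once.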

653
2

ross-qwen2-7b

license:apache-2.0
339
3

llava-video-qwen2-7b-ross3d

license:apache-2.0
224
0

TreeVGR-7B

This repository contains the TreeVGR-7B model, as presented in the paper Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology. TL;DR: We propose TreeBench, the first benchmark specially designed for evaluating "thinking with images" capabilities with traceable visual evidence, and TreeVGR, the current state-of-the-art open-source visual grounded reasoning model. Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, just like human "thinking with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning to test object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B and incorporate eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, TreeBench consists of 405 challenging visual question-answering pairs. Even the most advanced models struggle with this benchmark: none of them reaches 60% accuracy, e.g., OpenAI-o3 scores only 54.87. Furthermore, we introduce TreeVGR (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm that supervises localization and reasoning jointly with reinforcement learning, enabling accurate localization and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), proving that traceability is key to advancing visual grounded reasoning. For more details, please refer to the GitHub repository.
This repository provides a simple local inference demo of TreeVGR on TreeBench. To get started, clone the repository and install the required dependencies; you can then run the inference script. Note: the demo's result differs slightly from the paper, as we mainly utilized VLMEvalKit for a more comprehensive evaluation. If you find this work useful for your research and applications, please cite using the BibTeX entry provided in the repository.
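TreeBench's "traceable evidence via bounding box evaluation" implies scoring a model's predicted boxes against ground-truth boxes; the standard metric for that is intersection-over-union (IoU). A minimal sketch of IoU for axis-aligned boxes (TreeBench's exact scoring protocol is in the GitHub repository; this is just the generic metric):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format,
    with (x1, y1) the top-left and (x2, y2) the bottom-right corner."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An evaluator would typically count a predicted box as correct evidence when its IoU with the ground truth exceeds a threshold such as 0.5; the threshold used here is an assumption, not TreeBench's documented setting.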

license:apache-2.0
155
4

TreeVGR-7B-CI

license:apache-2.0
15
1