PaDT-MLLM

8 models

PaDT_Pro_3B

Patch-as-Decodable-Token: Towards Unified Multi-Modal Vision Tasks in MLLMs
[🔗 Released Code] [🤗 Datasets] [🤗 Checkpoints] [📄 Tech Report]

We are pleased to introduce Patch-as-Decodable Token (PaDT), a unified paradigm that enables multimodal large language models (MLLMs) to generate both textual and visual outputs directly. At the core of PaDT are Visual Reference Tokens (VRTs). Unlike conventional MLLMs, which represent visual targets as text-based bounding-box coordinates (often weakly semantic and poorly aligned with the actual objects, as shown in Figure B), PaDT lets the model represent visual targets directly through visual patches. VRTs allow the model to reason about visual information within the output sequence in a more natural and direct way. By introducing VRTs, semantic reasoning and object-specific visual token prediction both happen inside the MLLM’s autoregressive generation process; the predicted visual tokens are then decoded into low-level outputs such as localization or segmentation maps by a plug-and-play, lightweight PaDT decoder. As illustrated in Figure C, we have validated PaDT on four major visual perception and understanding tasks, and in all of them PaDT achieves state-of-the-art performance compared with conventional character-by-character coordinate-generation MLLMs.

The success of PaDT stems from its insight into the visual capability bottlenecks of MLLMs:

1. Native Vision-Language Alignment: instead of “fitting” vision into text space, PaDT directly treats visual patches as decodable tokens, achieving seamless modality alignment.
2. Dynamic Visual Binding: a dynamic embedding mechanism tightly binds Visual Reference Tokens (VRTs) to each image, preventing cross-image confusion.
3. Unified Token Space: the LLM handles language and vision uniformly, simplifying training and improving consistency.
4. Lightweight Decoder: dense prediction is decoupled from the LLM, preserving its semantic reasoning while adding precise spatial output capability.
5. Strong Multi-Task Generalization: the PaDT Pro model, jointly trained on REC/RES/OVD/RIC, can switch tasks via prompts and outperforms single-task models.

We hope this work will inspire further exploration in the community:

- Is a purely text-based output ever sufficient for visual reasoning?

Figure B. Observations on conventional character-by-character coordinate-generation MLLMs and our PaDT.

Figure C. PaDT on four visual perception and understanding tasks.

Clone this repo and set up the environment with a few commands. A code snippet illustrating how to use PaDT follows the list of released models below.

- PaDT_OVD: trained on the COCO2017 training set.
- PaDT_REC: trained on the RefCOCO/+/g training sets.
- PaDT_RIC: trained on the Referring Image Captioning training set.
- PaDT_Pro: trained on the combined COCO2017, RefCOCO/+/g, and Referring Image Captioning training sets.
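The repository's own runnable example is `eval/testdemo.py`. As a rough orientation, here is a minimal sketch of loading a checkpoint and generating a response; it assumes the released checkpoints can be loaded through the standard Qwen2.5-VL classes in `transformers` (with `trust_remote_code=True`), and the prompt, image path, and post-processing are illustrative assumptions rather than the project's actual interface.

```python
# Minimal usage sketch (not the repository's official API).
# Assumption: the PaDT checkpoints load through the standard Qwen2.5-VL
# classes in `transformers` with trust_remote_code=True. Turning the
# generated Visual Reference Tokens into boxes/masks requires the
# lightweight PaDT decoder shipped with the released code (see eval/testdemo.py).
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "PaDT-MLLM/PaDT_Pro_3B"  # any checkpoint from the table below
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

image = Image.open("example.jpg")  # placeholder image path
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Locate the person in the red jacket."},
    ],
}]

prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# The output sequence interleaves ordinary text tokens with Visual Reference
# Tokens (VRTs); keep special tokens so the VRTs remain visible in the dump.
generated = model.generate(**inputs, max_new_tokens=128)
new_tokens = generated[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=False)[0])
```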
| Model | Base VLM | Checkpoint | Task Type |
| - | - | - | - |
| PaDT_OVD_3B | Qwen2.5VL-3B | PaDT-MLLM/PaDT_OVD_3B | Open Vocabulary Detection |
| PaDT_REC_3B | Qwen2.5VL-3B | PaDT-MLLM/PaDT_REC_3B | Referring Expression Comprehension/Segmentation |
| PaDT_RIC_3B | Qwen2.5VL-3B | PaDT-MLLM/PaDT_RIC_3B | Referring Image Captioning |
| PaDT_Pro_3B | Qwen2.5VL-3B | PaDT-MLLM/PaDT_Pro_3B | ALL |
| PaDT_OVD_7B | Qwen2.5VL-7B | PaDT-MLLM/PaDT_OVD_7B | Open Vocabulary Detection |
| PaDT_REC_7B | Qwen2.5VL-7B | PaDT-MLLM/PaDT_REC_7B | Referring Expression Comprehension/Segmentation |
| PaDT_RIC_7B | Qwen2.5VL-7B | PaDT-MLLM/PaDT_RIC_7B | Referring Image Captioning |
| PaDT_Pro_7B | Qwen2.5VL-7B | PaDT-MLLM/PaDT_Pro_7B | ALL |

Here are some randomly selected test examples showcasing PaDT’s performance:

- Referring Expression Comprehension/Segmentation and Open Vocabulary Detection tasks

Unpack these datasets and place them under the following directory:

Preprocess the datasets:

1. Preprocess via our scripts (please first update the dataset path configuration in the preprocessing scripts).
2. Alternatively, use the preprocessed datasets we released on Hugging Face, which are ready for training:

| Dataset | Dataset Path | Task Type |
| - | - | - |
| COCO | PaDT-MLLM/COCO | Open Vocabulary Detection |
| RefCOCO | PaDT-MLLM/RefCOCO | Referring Expression Comprehension/Segmentation |
| RIC | PaDT-MLLM/ReferringImageCaptioning | Referring Image Captioning |

The training scripts in `runscripts` are ready to execute. For example, to train the PaDT-Pro 3B model on a single node with 8×96 GB GPUs, see the launch sketch at the end of this card.

We provide a simple inference example in `eval/testdemo.py`. More evaluation scripts will be added soon.

We kindly encourage citation of our work if you find it useful.
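The exact script names and arguments are defined in `runscripts/`; the command below is only a hypothetical launch illustrating the single-node, 8-GPU setup mentioned above (the script name is assumed, not the repository's actual file).

```bash
# Hypothetical single-node launch for PaDT-Pro 3B on 8 GPUs.
# The real entry points and their arguments live under runscripts/;
# the script name below is an assumption for illustration only.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash runscripts/train_padt_pro_3b.sh
```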

license:apache-2.0

PaDT Pro 7B


license:apache-2.0

PaDT_OVD_3B

license:apache-2.0

PaDT_REC_7B

license:apache-2.0

PaDT_REC_3B

license:apache-2.0

PaDT_RIC_3B

license:apache-2.0

PaDT_OVD_7B

license:apache-2.0

PaDT_RIC_7B

license:apache-2.0