Zhoues

8 models

RoboRefer-2B-SFT

> This is the official checkpoint of our work: RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics.

Overview

RoboRefer-2B-SFT is an open-source vision-language model that is instruction-tuned on a mixture of the RefSpatial dataset, general instruction-tuning data, and referring datasets. It has strong spatial understanding capability and achieves SOTA performance across diverse benchmarks. Given an image and an instruction, it can not only answer questions both qualitatively and quantitatively using its spatial knowledge, but also output precise points for spatial referring to guide robotic control. For more details, please visit our official repo.

Resources for More Information
- Paper: https://arxiv.org/abs/2506.04308
- Code: https://github.com/Zhoues/RoboRefer
- Dataset: https://huggingface.co/datasets/JingkunAn/RefSpatial
- Benchmark: https://huggingface.co/datasets/BAAI/RefSpatial-Bench
- Website: https://zhoues.github.io/RoboRefer/

📝 Citation

If you find our code or models useful in your work, please cite our paper:

llava_llama

RoboRefer-2B-Depth-Align

llava_llama

NVILA-2B-Depth

> This is the official checkpoint of our work: RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics.

NVILA-2B-Depth serves as the base model for both RoboRefer-2B-Depth-Align and RoboRefer-2B-SFT. It shares the same parameters as NVILA-Lite-2B, with the addition of a depth encoder and a depth projector, initialized from the image encoder and the image projector, respectively.

Resources for More Information
- Paper: https://arxiv.org/abs/2506.04308
- Code: https://github.com/Zhoues/RoboRefer
- Dataset: https://huggingface.co/datasets/JingkunAn/RefSpatial
- Benchmark: https://huggingface.co/datasets/BAAI/RefSpatial-Bench
- Website: https://zhoues.github.io/RoboRefer/

📝 Citation

If you find our code or models useful in your work, please cite our paper:

llava_llama

RoboRefer-8B-SFT

> This is the official checkpoint of our work: RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics.

Overview

RoboRefer-8B-SFT is an open-source vision-language model that is instruction-tuned on a mixture of the RefSpatial dataset, general instruction-tuning data, and referring datasets. It has strong spatial understanding capability and achieves SOTA performance across diverse benchmarks. Given an image and an instruction, it can not only answer questions both qualitatively and quantitatively using its spatial knowledge, but also output precise points for spatial referring to guide robotic control. For more details, please visit our official repo.

Resources for More Information
- Paper: https://arxiv.org/abs/2506.04308
- Code: https://github.com/Zhoues/RoboRefer
- Dataset: https://huggingface.co/datasets/JingkunAn/RefSpatial
- Benchmark: https://huggingface.co/datasets/BAAI/RefSpatial-Bench
- Website: https://zhoues.github.io/RoboRefer/

📝 Citation

If you find our code or models useful in your work, please cite our paper:

llava_llama

NVILA-8B-Depth

> This is the official checkpoint of our work: RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics.

NVILA-8B-Depth serves as the base model for both RoboRefer-8B-Depth-Align and RoboRefer-8B-SFT. It shares the same parameters as NVILA-8B, with the addition of a depth encoder and a depth projector, initialized from the image encoder and the image projector, respectively.

Resources for More Information
- Paper: https://arxiv.org/abs/2506.04308
- Code: https://github.com/Zhoues/RoboRefer
- Dataset: https://huggingface.co/datasets/JingkunAn/RefSpatial
- Benchmark: https://huggingface.co/datasets/BAAI/RefSpatial-Bench
- Website: https://zhoues.github.io/RoboRefer/

📝 Citation

If you find our code or models useful in your work, please cite our paper:

llava_llama

MineDreamer-InstructPix2Pix-Unet

license: apache-2.0

MineDreamer-7B

license: apache-2.0

Pretrained-QFormer-7B

license: apache-2.0