agibot-world/GO-1
GO-1 is our robotic foundation model pretrained on the AgiBot World Dataset. Please refer to our project page, GitHub repo, and paper for more details.

- Developed by: Team AgiBot-World
- Model type: Vision-Language-Action model
- License: CC BY-NC-SA 4.0
- Vision-Language Model: InternVL 2.5-2B
- Pre-training Dataset: AgiBot World Dataset
- Repository: https://github.com/OpenDriveLab/Agibot-World
- Paper: https://arxiv.org/abs/2503.06669
- Project Page: https://agibot-world.com/

This is the pre-trained GO-1 model. For fine-tuning on simulation benchmarks or your own dataset, please visit our GitHub repo.

- Please consider citing our work if it helps your research.
- For full authorship and detailed contributions, please refer to the contributions list (in alphabetical order by surname).
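The pretrained checkpoint can be fetched programmatically from the Hub. A minimal sketch using `huggingface_hub` — the repo ID `agibot-world/GO-1` is taken from this card, while the local directory is an arbitrary choice; for actually loading and fine-tuning the model, follow the GitHub repo:

```python
from huggingface_hub import snapshot_download

# Download the pre-trained GO-1 weights from the Hugging Face Hub.
# repo_id comes from this model card; local_dir is an arbitrary example path.
local_path = snapshot_download(
    repo_id="agibot-world/GO-1",
    local_dir="checkpoints/GO-1",
)
print(local_path)  # directory now containing the downloaded files
```

`snapshot_download` caches files and is resumable, so re-running it after an interrupted download only fetches the missing pieces.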
GO-1-Air
Genie-Envisioner
EnerVerse-AC
EWMBench-model
EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models

- 🐙 GitHub: Explore the project repository to run the evaluation script: AgibotTech/EWMBench.
- 📑 arXiv: Read our paper for detailed methodology and results at arXiv:2505.09694.
- 🤗 Data: Discover the EWMBench Dataset, a diverse sample drawn from AgiBot World for running the EWMBench evaluation.
- 🤗 Model: Download the pretrained weights used for evaluation from EWMBench-model.

To run the evaluation script, download the necessary model weights and modify config.yaml to specify the weights path, following the instructions in the EWMBench GitHub repo.

License and Citation: All data and code within this repo are under CC BY-NC-SA 4.0. Please consider citing our project if it helps your research.
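The card asks you to point config.yaml at the downloaded weights. A hedged sketch of what such an entry might look like — the key names below are illustrative placeholders, not the actual schema; consult AgibotTech/EWMBench for the real fields:

```yaml
# Illustrative only: the real keys are defined in the EWMBench GitHub repo.
model:
  weights_path: ./checkpoints/EWMBench-model   # local dir with the downloaded weights
data:
  dataset_root: ./data/EWMBench                # sampled AgiBot World evaluation set
```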