Alibaba-DAMO-Academy
RynnBrain-Nav-8B
RynnBrain-8B
RynnBrain-2B
RynnBrain-30B-A3B
RynnBrain-Plan-8B
RynnBrain-Plan-30B-A3B
PixelRefer-7B
LumosX
RynnEC-2B
# RynnEC: Bringing MLLMs into Embodied World

If our project helps you, please give us a star ⭐ on GitHub to support us.

## News

- [2025.08.17] The RynnEC-7B model checkpoint has been released on Hugging Face.
- [2025.08.08] Released our RynnEC-2B model, RynnEC-Bench, and training code.

## Introduction

RynnEC is a video multimodal large language model (MLLM) designed specifically for embodied cognition tasks.

## Architecture

RynnEC handles a variety of input types, including images, videos, visual prompts, and task instructions. Visual inputs are processed by a vision encoder with an any-resolution strategy, while visual prompts are handled by a region encoder that extracts fine-grained features. Textual inputs are converted into a unified token stream through tokenization. For video segmentation tasks, a mask decoder transforms the output segmentation embeddings into binary masks, ensuring precise and effective results.

| Model | Base Model | HF Link |
| --- | --- | --- |
| RynnEC-2B | Qwen2.5-1.5B-Instruct | Alibaba-DAMO-Academy/RynnEC-2B |
| RynnEC-7B | Qwen2.5-7B-Instruct | Alibaba-DAMO-Academy/RynnEC-7B |

Benchmark comparison across object cognition and spatial cognition: with a highly efficient 2B-parameter architecture, RynnEC-2B achieves state-of-the-art (SOTA) performance on complex spatial cognition tasks.

If you find RynnEC useful for your research and applications, please cite using this BibTeX:
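As a hedged illustration of how the checkpoints in the table above might be loaded, here is a minimal sketch using the Hugging Face `transformers` auto classes with `trust_remote_code=True`. The auto-class choice and the dtype are assumptions on my part; RynnEC ships custom architecture code, so consult the project's GitHub repo for the actual inference API.

```python
# Minimal loading sketch (assumed API): RynnEC registers custom code on the
# Hub, so trust_remote_code=True is needed to pull in its vision encoder,
# region encoder, and mask decoder implementations.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Alibaba-DAMO-Academy/RynnEC-2B"

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; check the model card
    trust_remote_code=True,
).eval()
```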
RynnBrain-CoP-8B
PixelRefer-2B
Lumos-1
PixelRefer-Lite-2B
PixelRefer-Lite-7B
RynnVLA-001-7B-Trajectory
RynnVLA-001-7B-Base
GitHub Repo: https://github.com/alibaba-damo-academy/RynnVLA-001

We release RynnVLA-001-7B-Base (Stage 1: Ego-Centric Video Generative Pretraining), which is pretrained on large-scale ego-centric manipulation videos. RynnVLA-001 is a VLA model built on a pretrained video generation model. The key insight is to implicitly transfer manipulation skills learned from human demonstrations in ego-centric videos to robot arm manipulation.
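The two-stage recipe described above can be pictured with a short conceptual sketch. Everything below (class names, the action head, the 7-dimensional action space) is hypothetical and is not RynnVLA-001's actual architecture; it only illustrates the stated insight of reusing a video-generation backbone to predict robot actions.

```python
# Conceptual sketch only: all names and shapes here are hypothetical.
# Idea: reuse a backbone pretrained to generate ego-centric manipulation
# video (Stage 1), then attach a small action head so the learned visual
# dynamics transfer to robot-arm control.
import torch
import torch.nn as nn

class VLAFromVideoBackbone(nn.Module):
    def __init__(self, video_backbone: nn.Module, hidden_dim: int = 4096,
                 action_dim: int = 7):  # e.g. 6-DoF pose + gripper (assumed)
        super().__init__()
        self.backbone = video_backbone      # Stage-1 pretrained weights
        self.action_head = nn.Sequential(   # small head trained afterwards
            nn.Linear(hidden_dim, 512),
            nn.GELU(),
            nn.Linear(512, action_dim),
        )

    def forward(self, frames: torch.Tensor, instruction_tokens: torch.Tensor):
        # The backbone fuses observed frames and the language instruction
        # into a latent next-state representation, as in video pretraining.
        latent = self.backbone(frames, instruction_tokens)  # (B, hidden_dim)
        return self.action_head(latent)                     # (B, action_dim)
```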
RynnEC-7B