# OpenIXCLab / SeC 4B
## SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction

[\[📂 GitHub\]](https://github.com/OpenIXCLab/SeC) [\[📦 Benchmark\]](https://huggingface.co/datasets/OpenIXCLab/SeCVOS) [\[🌐 Homepage\]](https://rookiexiong7.github.io/projects/SeC/) [\[📄 Paper\]](https://arxiv.org/abs/2507.15852)

- 🔥 We introduce Segment Concept (SeC), a concept-driven framework for video object segmentation that integrates Large Vision-Language Models (LVLMs) to build robust, object-centric representations.
- 🔥 SeC dynamically balances semantic reasoning with feature matching, adaptively adjusting computational effort based on scene complexity for optimal segmentation performance.
- 🔥 We propose the Semantic Complex Scenarios Video Object Segmentation (SeCVOS) benchmark, designed to evaluate segmentation in challenging scenarios.

| Model | SA-V val | SA-V test | LVOS v2 val | MOSE val | DAVIS 2017 val | YTVOS 2019 val | SeCVOS |
| :------ | :------: | :------: | :------: | :------: | :------: | :------: | :------: |
| SAM 2.1 | 78.6 | 79.6 | 84.1 | 74.5 | 90.6 | 88.7 | 58.2 |
| SAMURAI | 79.8 | 80.0 | 84.2 | 72.6 | 89.9 | 88.3 | 62.2 |
| SAM2.1Long | 81.1 | 81.2 | 85.9 | 75.2 | 91.4 | 88.7 | 62.3 |
| SeC (Ours) | 82.7 | 81.7 | 86.5 | 75.3 | 91.3 | 88.6 | 70.0 |

If you find this project useful in your research, please consider citing:
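The adaptive routing idea above, choosing per frame between cheap feature matching and expensive LVLM-based semantic reasoning, can be illustrated with a toy sketch. The scoring function, threshold, and path names here are hypothetical illustrations, not SeC's actual mechanism:

```python
def scene_change_score(prev_frame, frame):
    """Cheap proxy for scene complexity: mean absolute pixel difference
    between two frames, given as flat lists of grayscale values."""
    return sum(abs(p - q) for p, q in zip(prev_frame, frame)) / len(frame)

def route_frame(prev_frame, frame, threshold=20.0):
    """Send stable frames to cheap feature matching and abrupt scene
    changes to the expensive LVLM concept path."""
    if scene_change_score(prev_frame, frame) > threshold:
        return "lvlm_concept_path"    # re-ground the object semantically
    return "feature_matching_path"    # propagate the mask by matching

stable = [10] * 16                    # near-identical consecutive frames
cut = [255] * 16                      # an abrupt scene cut
print(route_frame(stable, stable))    # feature_matching_path
print(route_frame(stable, cut))       # lvlm_concept_path
```

Spending the LVLM budget only on frames where appearance matching is likely to fail is what lets such a scheme keep cost low on simple clips while staying robust on the shot cuts SeCVOS targets.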
# CODA-PLANNER-TARS-32B
## CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning

This repository contains the `CODA-PLANNER-TARS-32B` model, presented in the paper *CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning*. Check out our GitHub repository for more implementation details! You can also find the paper on arXiv.

### Abstract

Autonomous agents for Graphical User Interfaces (GUIs) face significant challenges in specialized domains such as scientific computing, where both long-horizon planning and precise execution are required. Existing approaches suffer from a trade-off: generalist agents excel at planning but perform poorly in execution, while specialized agents demonstrate the opposite weakness. Recent compositional frameworks attempt to bridge this gap by combining a planner and an actor, but they are typically static and non-trainable, which prevents adaptation from experience. This is a critical limitation given the scarcity of high-quality data in scientific domains. To address these limitations, we introduce CODA, a novel and trainable compositional framework that integrates a generalist planner (Cerebrum) with a specialist executor (Cerebellum), trained via a dedicated two-stage pipeline. In the first stage, Specialization, we apply a decoupled GRPO approach to train an expert planner for each scientific application individually, bootstrapping from a small set of task trajectories. In the second stage, Generalization, we aggregate all successful trajectories from the specialized experts to build a consolidated dataset, which is then used for supervised fine-tuning of the final planner. This equips CODA with both robust execution and cross-domain generalization. Evaluated on four challenging applications from the ScienceBoard benchmark, CODA significantly outperforms baselines and establishes a new state of the art among open-source models.
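The two-stage data flow described in the abstract, per-application specialization followed by aggregation of successful trajectories for supervised fine-tuning, can be sketched as a toy pipeline. The dataclass fields, application names, and filtering logic below are illustrative assumptions, not the released training code, and the decoupled GRPO training itself is stubbed out:

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    app: str       # which scientific application produced this rollout
    steps: list    # (observation, plan) pairs collected along the way
    success: bool  # did the episode complete the task?

def specialize(trajectories, app):
    """Stage 1 (Specialization): each per-app expert is trained on, and
    here simply keeps, its own application's successful rollouts."""
    return [t for t in trajectories if t.app == app and t.success]

def generalize(experts_rollouts):
    """Stage 2 (Generalization): merge every expert's successes into one
    consolidated dataset of (observation, plan) pairs for SFT."""
    dataset = []
    for rollouts in experts_rollouts:
        for t in rollouts:
            dataset.extend(t.steps)
    return dataset

trajs = [
    Trajectory("app_a", [("obs1", "plan1")], True),
    Trajectory("app_a", [("obs2", "plan2")], False),  # failures are dropped
    Trajectory("app_b", [("obs3", "plan3")], True),
]
experts = [specialize(trajs, app) for app in ("app_a", "app_b")]
sft_data = generalize(experts)
print(len(sft_data))  # 2
```

Filtering to successful trajectories before aggregation is what lets the final planner be distilled from expert behavior without inheriting the failures accumulated during per-app RL.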
CODA introduces a novel and trainable compositional framework for GUI agents, designed with the following key features:

- **Dual-Brain Architecture:** Integrates a generalist planner (Cerebrum) with a specialist executor (Cerebellum).
- **Decoupled Reinforcement Learning:** Employs a dedicated two-stage pipeline (Specialization and Generalization) for training.
- **Robust Execution:** Achieves precise execution in specialized scientific computing domains.
- **Cross-Domain Generalization:** Demonstrates strong generalization capabilities across various scientific applications.
- **State-of-the-Art Performance:** Significantly outperforms baselines on the ScienceBoard benchmark.

### Usage

For detailed installation instructions and inference examples, please refer to the official GitHub repository.

### Inference

To prepare the ScienceBoard environment, replace the `sci` folder in ScienceBoard with our `ScienceBoardCODA/sci` and put `qwenvltest.py` under the ScienceBoard base folder.

### Citation

If you find our work helpful, please consider citing:

### License

Usage and License Notices: The code is licensed under the Apache 2.0 License. The data is licensed for research use only under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License. Use should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use

### Acknowledgement

We sincerely thank the projects UI-TARS, ScienceBoard, and R1-V for providing their open-source resources.
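As a closing illustration, the Cerebrum/Cerebellum division of labor described in the feature list above amounts to a plan-then-act loop. Both "brains" are stubbed with trivial functions here; in CODA they are an LVLM planner and a GUI action executor, and every name below is hypothetical:

```python
def cerebrum_plan(observation):
    """Generalist planner stub: turn a screen observation into a subgoal."""
    return f"open the {observation}"

def cerebellum_act(subgoal):
    """Specialist executor stub: ground a subgoal into low-level GUI actions."""
    return [("click", subgoal), ("wait", 0.5)]

def run_episode(observations):
    """Alternate high-level planning and precise execution at each step."""
    actions = []
    for obs in observations:
        subgoal = cerebrum_plan(obs)             # Cerebrum reasons
        actions.extend(cerebellum_act(subgoal))  # Cerebellum executes
    return actions

actions = run_episode(["plot dialog", "export dialog"])
print(len(actions))  # 4
```

Keeping the two roles behind separate interfaces like this is what makes each half independently trainable, which is the premise of the decoupled two-stage pipeline.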