# Hulu-Med: A Transparent Generalist Model towards Holistic Medical Vision-Language Understanding

[Paper](https://arxiv.org/abs/2510.08668) | [Hugging Face](https://huggingface.co/ZJU-AI4H/Hulu-Med) | [ModelScope](https://modelscope.cn/models/Med-Team/Hulu-Med) | [License](LICENSE)

Paper | Hulu-Med-7B | Hulu-Med-14B | Hulu-Med-32B | ModelScope Models | Demo

## News

- [2025-10-08] Hulu-Med models and inference code released!

## Introduction

Hulu-Med is a transparent medical vision-language model that unifies understanding across diverse modalities, including medical text, 2D/3D images, and videos. Built with a focus on transparency and accessibility, Hulu-Med achieves state-of-the-art performance on 30 medical benchmarks while being trained entirely on public data.

## Key Features

- **Holistic Multimodal Understanding**: seamlessly processes medical text, 2D images, 3D volumes, and surgical videos
- **Fully Transparent**: complete open-source pipeline, including data curation, training code, and model weights
- **State-of-the-Art Performance**: outperforms leading open-source models and competes with proprietary systems
- **Efficient Training**: only 4,000-40,000 GPU hours required for the 7B-32B variants
- **Comprehensive Coverage**: trained on 16.7M samples spanning 12 anatomical systems and 14 imaging modalities

## Data Coverage

- **Anatomical Systems**: Multi-System, Skin/Integumentary, Respiratory, Cellular/Tissue Level, Digestive, Nervous, Cardiovascular, Musculoskeletal, Reproductive, Urinary, Whole Body, Endocrine, Immune/Lymphatic, and Hematologic
- **14 Medical Imaging Modalities**: CT, MRI, X-Ray, Ultrasound, PET, OCT, Endoscopy, Microscopy, Histopathology, Fundus, Dermoscopy, Angiography, Digital Photograph, and Medical Chart
- **Diverse Downstream Tasks**: Medical Dialogue, Anomaly Detection, Prognosis Prediction, Treatment Planning, Surgical Skill Assessment, Education, Medical Report Generation, Surgical Phase Recognition, Medical Computation, and more

## Performance

Performance comparison on medical multimodal benchmarks:

| Models |  |  |  |  |  |  |  |
|--------|--|--|--|--|--|--|--|
| *General VLMs (>10B)* |  |  |  |  |  |  |  |
| InternVL3-14B | 78.9 | 54.1 | 66.3 | 72.8 | 48.0 | 23.1 | 63.1 |
| Qwen2.5VL-32B | 68.2 | 54.5 | 71.8 | 71.2 | 41.9 | 25.2 | 59.6 |
| InternVL3-38B | 79.8 | 56.6 | 65.4 | 72.7 | 51.0 | 25.2 | 65.2 |
| *Medical VLMs (>10B)* |  |  |  |  |  |  |  |
| HealthGPT-14B | 75.2 | 56.4 | 65.0 | 66.1 | 56.7 | 24.7 | 49.6 |
| HuatuoGPT-V-34B | 74.0 | 56.6 | 61.4 | 69.5 | 44.4 | 22.1 | 51.8 |
| Lingshu-32B | 83.4 | 57.9 | 76.7 | 86.7 | 65.5 | 30.9 | - |
| Hulu-Med-14B | 85.1 | 68.9 | 76.1 | 86.5 | 64.4 | 30.0 | 54.8 |
| Hulu-Med-32B | 84.6 | 69.4 | 81.4 | 85.7 | 67.3 | 34.0 | 60.4 |

Performance comparison on medical text benchmarks:

| Models | MMLU-Pro | MedXQA | Medbullets | SGPQA | PubMedQA | MedMCQA | MedQA | MMLU-Med |
|--------|----------|--------|------------|-------|----------|---------|-------|----------|
| *Proprietary Models* |  |  |  |  |  |  |  |  |
| GPT-4.1 | 78.0 | 30.9 | 77.0 | 49.9 | 75.6 | 77.7 | 89.1 | 89.6 |
| o3-mini | 78.1 | 35.4 | 83.7 | 50.1 | 73.6 | 60.6 | 74.5 | 87.0 |
| Claude Sonnet 4 | 79.5 | 33.6 | 80.2 | 56.3 | 78.6 | 79.3 | 92.1 | 91.3 |
| Gemini-2.5-Flash | 70.0 | 35.6 | 77.6 | 53.3 | 73.8 | 73.6 | 91.2 | 84.2 |
| *General VLMs (>10B)* |  |  |  |  |  |  |  |  |
| Qwen2.5VL-32B | 66.5 | 15.6 | 54.2 | 37.6 | 68.4 | 63.0 | 71.6 | 83.2 |
| InternVL3-14B | 65.4 | 14.1 | 49.5 | 37.9 | 77.2 | 62.0 | 70.1 | 81.7 |
| InternVL3-38B | 72.1 | 16.0 | 54.6 | 42.5 | 73.2 | 64.9 | 73.5 | 83.8 |
| *Medical VLMs (>10B)* |  |  |  |  |  |  |  |  |
| HealthGPT-14B | 63.4 | 11.3 | 39.8 | 25.7 | 68.0 | 63.4 | 66.2 | 80.2 |
| Lingshu-32B | 70.2 | 22.7 | 65.4 | 41.1 | 77.8 | 66.1 | 74.7 | 84.7 |
| HuatuoGPT-V-34B | 51.8 | 11.4 | 42.7 | 26.5 | 72.2 | 54.7 | 58.8 | 74.7 |
| Hulu-Med-14B | 68.0 | 23.2 | 68.5 | 37.7 | 79.8 | 70.4 | 78.1 | 83.3 |
| Hulu-Med-32B | 72.9 | 24.2 | 68.8 | 41.8 | 80.8 | 72.8 | 80.4 | 85.6 |

## Model Zoo

We provide three model variants with different parameter scales:

| Model | Parameters | LLM Base | Training Cost | HuggingFace | ModelScope |
|-------|------------|----------|---------------|-------------|------------|
| Hulu-Med-7B | 7B | Qwen2.5-7B | ~4,000 GPU hours | Link | Link |
| Hulu-Med-14B | 14B | Qwen3-14B | ~8,000 GPU hours | Link | Link |
| Hulu-Med-32B | 32B | Qwen2.5-32B | ~40,000 GPU hours | Link | Link |

## Training Data

Our training data consists of 16.7M samples across four categories:

- **Medical Multimodal Data** (9M samples): covering 14 imaging modalities
- **Medical Text Data** (4.9M samples): clinical notes, literature, and QA pairs
- **General Multimodal Data** (1.3M samples): enhancing generalization
- **General Text Data** (1.5M samples): improving reasoning capabilities

## Architecture

1. **Vision Encoder**: SigLIP-based encoder with 2D RoPE for unified 2D/3D/video processing
2. **Multimodal Projector**: projects visual tokens into the language model's embedding space
3. **LLM Decoder**: Qwen-based decoder for generating responses
4. **Medical-Aware Token Reduction**: efficient processing with ~55% fewer visual tokens

## Supported Tasks

- Visual Question Answering (2D/3D/Video)
- Medical Report Generation
- Disease Diagnosis
- Anatomical Understanding
- Surgical Phase Recognition
- Clinical Dialogue
- Medical Text Reasoning
- Multilingual Medical QA
- Rare Disease Diagnosis
- and more

## Citation

If you find Hulu-Med useful in your research, please cite our paper.

## License

This project is released under the Apache 2.0 License.
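The medical-aware token reduction mentioned in the architecture description can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not Hulu-Med's released code: the `reduce_tokens` helper and its norm-based saliency score are assumptions for illustration, and only the ~55% reduction figure comes from the model description.

```python
# Hypothetical sketch of medical-aware token reduction: keep only the most
# salient ~45% of visual tokens (matching the reported ~55% token reduction).
# The saliency score used here (L2 norm per token) is an illustrative
# stand-in for the model's actual, medically informed criterion.
import numpy as np

def reduce_tokens(visual_tokens: np.ndarray, keep_ratio: float = 0.45) -> np.ndarray:
    """visual_tokens: (num_tokens, dim); returns the kept tokens in original order."""
    num_keep = max(1, int(len(visual_tokens) * keep_ratio))
    saliency = np.linalg.norm(visual_tokens, axis=1)       # score each token
    keep_idx = np.sort(np.argsort(saliency)[-num_keep:])   # top-k, sequence order preserved
    return visual_tokens[keep_idx]

tokens = np.random.default_rng(0).normal(size=(1024, 64))
print(reduce_tokens(tokens).shape)  # (460, 64): ~55% fewer tokens
```

Because the kept indices are re-sorted, the surviving tokens stay in their original sequence order before being handed to the projector and LLM decoder.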
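The 2D RoPE used by the vision encoder can be sketched in the same spirit. The code below is an illustrative reconstruction of the general 2D rotary-embedding technique (half the channels rotated by a patch's row index, half by its column index); the function names and frequency schedule are assumptions, not the encoder's actual implementation.

```python
# Illustrative 2D rotary position embedding (RoPE) for image patches.
import numpy as np

def rope_1d(x: np.ndarray, pos: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate channel pairs of x (n, d), d even, by angles derived from positions pos (n,)."""
    d = x.shape[1]
    freqs = base ** (-np.arange(0, d, 2) / d)      # standard RoPE frequency schedule
    angles = pos[:, None] * freqs[None, :]         # (n, d/2) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                # even/odd channel pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin             # 2D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

def rope_2d(x: np.ndarray, rows: np.ndarray, cols: np.ndarray) -> np.ndarray:
    """x: (n, d) patch features, d divisible by 4; rows/cols: patch grid positions."""
    half = x.shape[1] // 2
    return np.concatenate([rope_1d(x[:, :half], rows),   # first half encodes row position
                           rope_1d(x[:, half:], cols)],  # second half encodes column position
                          axis=1)
```

Each channel pair undergoes a pure rotation, so token norms are preserved, and inner products between rotated tokens depend only on relative row/column offsets; that property is what lets a single encoder index positions consistently across 2D images, 3D slices, and video frames.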