baidu
ERNIE-4.5-VL-28B-A3B-PT
ERNIE-4.5-21B-A3B-PT
ERNIE-4.5-21B-A3B-Thinking
Over the past three months, we have continued to scale the thinking capability of ERNIE-4.5-21B-A3B, improving both the quality and depth of reasoning and thereby advancing the competitiveness of ERNIE lightweight models on complex reasoning tasks. We are pleased to introduce ERNIE-4.5-21B-A3B-Thinking, featuring the following key enhancements:

- Significantly improved performance on reasoning tasks, including logical reasoning, mathematics, science, coding, text generation, and academic benchmarks that typically require human expertise.
- Efficient tool-usage capabilities.
- Enhanced 128K long-context understanding.

> [!NOTE]
> This version has an increased thinking length. We strongly recommend it for highly complex reasoning tasks.

ERNIE-4.5-21B-A3B-Thinking is a text MoE post-trained model with 21B total parameters and 3B activated parameters per token. The model configuration details are as follows:

|Key|Value|
|-|-|
|Modality|Text|
|Training Stage|Posttraining|
|Params (Total / Activated)|21B / 3B|
|Layers|28|
|Heads (Q/KV)|20 / 4|
|Text Experts (Total / Activated)|64 / 6|
|Shared Experts|2|
|Context Length|131072|

> [!NOTE]
> To align with the wider community, this model releases Transformer-style weights. Both PyTorch and PaddlePaddle ecosystem tools, such as vLLM, transformers, and FastDeploy, are expected to be able to load and run this model.

Services can be quickly deployed using FastDeploy; for more detailed usage, refer to the FastDeploy GitHub repository. Note that deployment requires 1 x 80GB GPU and FastDeploy version 2.2 or newer.

The ERNIE-4.5-21B-A3B-Thinking model supports function calling. The `reasoning-parser` and `tool-call-parser` for vLLM with ERNIE require installing vLLM from the main branch.

Note: You'll need the `transformers` library (version 4.54.0 or newer) installed to use this model. A code snippet illustrating how to use the model to generate content from given inputs follows this card.

The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.

If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly cite our technical report:
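As a minimal sketch of that `transformers` usage (assumptions: the checkpoint is published as `baidu/ERNIE-4.5-21B-A3B-Thinking`, and the chat template emits the reasoning trace before a closing `</think>` marker; adjust both to the actual release):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model ID; adjust to the actual published checkpoint.
model_name = "baidu/ERNIE-4.5-21B-A3B-Thinking"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-formatted prompt with the model's own template.
messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Thinking models emit long reasoning traces, so allow a generous token budget.
outputs = model.generate(inputs, max_new_tokens=4096)
text = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# If the output contains a </think> marker, split the reasoning from the final
# answer (an assumption about the template); otherwise print the full text.
reasoning, sep, answer = text.partition("</think>")
print(answer if sep else text)
```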
Qianfan-OCR
ERNIE-4.5-0.3B-PT
ERNIE-4.5-21B-A3B-Base-PT
ERNIE-4.5-0.3B-Base-PT
ERNIE-4.5-VL-28B-A3B-Thinking
ERNIE-Image
Qianfan-VL-8B
ERNIE-4.5-300B-A47B-PT
> [!NOTE]
> "-Paddle" models use PaddlePaddle weights, while "-PT" models use Transformer-style PyTorch weights.

The advanced capabilities of the ERNIE 4.5 models, particularly the MoE-based A47B and A3B series, are underpinned by several key technical innovations:

1. Multimodal Heterogeneous MoE Pre-Training: Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and improve performance on tasks involving text understanding and generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of another, we designed a heterogeneous MoE structure, incorporated modality-isolated routing, and employed a router orthogonal loss and a multimodal token-balanced loss. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.
2. Scaling-Efficient Infrastructure: We propose a novel heterogeneous hybrid parallelism and hierarchical load-balancing strategy for efficient training of ERNIE 4.5 models. By using intra-node expert parallelism, memory-efficient pipeline scheduling, FP8 mixed-precision training, and fine-grained recomputation methods, we achieve remarkable pre-training throughput. For inference, we propose a multi-expert parallel collaboration method and a convolutional code quantization algorithm to achieve 4-bit/2-bit lossless quantization. Furthermore, we introduce PD disaggregation with dynamic role switching for effective resource utilization to enhance inference performance for ERNIE 4.5 MoE models. Built on PaddlePaddle, ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.
3. Modality-Specific Post-Training: To meet the diverse requirements of real-world applications, we fine-tuned variants of the pre-trained model for specific modalities. Our LLMs are optimized for general-purpose language understanding and generation. The VLMs focus on visual-language understanding and support both thinking and non-thinking modes. Each model was post-trained with a combination of Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and a modified reinforcement learning method named Unified Preference Optimization (UPO).

ERNIE-4.5-300B-A47B is a text MoE post-trained model with 300B total parameters and 47B activated parameters per token. The model configuration details are as follows:

|Key|Value|
|-|-|
|Modality|Text|
|Training Stage|Posttraining|
|Params (Total / Activated)|300B / 47B|
|Layers|54|
|Heads (Q/KV)|64 / 8|
|Text Experts (Total / Activated)|64 / 8|
|Vision Experts (Total / Activated)|64 / 8|
|Context Length|131072|

Note: Before using the model, please ensure you have the `transformers` library installed (version 4.54.0 or newer). A code snippet illustrating how to use the model to generate content from given inputs follows this card. To achieve optimal performance, we suggest using `Temperature=0.8`, `TopP=0.8`.

For Web Search, `{references}`, `{date}`, and `{question}` are prompt arguments:

- `{question}` is the user's question.
- `{date}` is the current time; the recommended format is "YYYY-MM-DD HH:MM:SS, Day of the Week, Beijing/China".
- `{references}` is the references, and the recommended format is:

The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.
If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly cite our technical report:
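As a minimal sketch of that usage (assumptions: the checkpoint is published as `baidu/ERNIE-4.5-300B-A47B-PT`, and the Web Search prompt is assembled by plain string substitution; the prompt wording and the `[1] ...` reference stub are illustrative, not the card's exact template):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model ID; adjust to the actual published checkpoint.
model_name = "baidu/ERNIE-4.5-300B-A47B-PT"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # a 300B-total / 47B-activated MoE needs multiple large GPUs
)

# Hypothetical Web Search prompt: {references}, {date}, and {question} are
# filled in following the conventions described above.
prompt = (
    "References:\n{references}\n\n"
    "Current time: {date}\n\n"
    "Question: {question}"
).format(
    references="[1] ...",  # retrieved documents, per the card's recommended format
    date="2025-07-01 09:00:00, Tuesday, Beijing/China",
    question="What is Baidu's latest open-source model family?",
)

messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Suggested sampling settings from the card: Temperature=0.8, TopP=0.8.
outputs = model.generate(
    inputs, max_new_tokens=1024, do_sample=True, temperature=0.8, top_p=0.8
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```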
ERNIE-4.5-0.3B-Paddle
ERNIE-Image-Turbo
ERNIE-4.5-VL-424B-A47B-PT
ERNIE-4.5-0.3B-Base-Paddle
Qianfan-VL-3B
Qianfan-VL-70B
ERNIE-4.5-VL-424B-A47B-Base-PT
ERNIE-4.5-300B-A47B-Base-PT
ERNIE-4.5-21B-A3B-Paddle
ERNIE-4.5-21B-A3B-Base-Paddle
ERNIE-4.5-300B-A47B-Base-Paddle
ERNIE-4.5-300B-A47B-Paddle
> [!NOTE]
> "-Paddle" models use PaddlePaddle weights, while "-PT" models use Transformer-style PyTorch weights. The key technical innovations and model configuration are identical to those in the ERNIE-4.5-300B-A47B-PT card above.

ERNIEKit is a training toolkit based on PaddlePaddle, specifically designed for the ERNIE series of open-source large models. It provides comprehensive support for scenarios such as instruction fine-tuning (SFT, LoRA) and alignment training (DPO), ensuring optimal performance. For more detailed examples, including SFT with LoRA, multi-GPU configurations, and advanced scripts, please refer to the examples folder within the ERNIEKit repository.

Service deployment can be quickly completed using FastDeploy. For more detailed usage instructions, please refer to the FastDeploy repository.

Note: To deploy on a configuration of 4 GPUs, each with at least 80GB of memory, specify 4-bit weight quantization (`wint4`); if you specify `wint8`, then resources for 8 GPUs are required.
FastDeploy also supports additional deployment variants (refer to the FastDeploy repository for the exact launch commands):

- A sparse-attention version that speeds up long-context inference; for more details about sparse attention, please refer to PLAS Attention.
- A W4A8C8 quantized version.
- A WINT2 quantized version that runs on a single 141GB GPU.

A code snippet illustrating how to use ERNIE-4.5-300B-A47B-FP8 to generate content from given inputs follows this card. To achieve optimal performance, we suggest using `Temperature=0.8`, `TopP=0.8`. The Web Search arguments `{references}`, `{date}`, and `{question}` follow the same conventions described in the ERNIE-4.5-300B-A47B-PT card above.

The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.

If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly cite our technical report:
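As a sketch of generating content from a deployed service (assumptions: FastDeploy's OpenAI-compatible server is listening on localhost port 8180, and the served model name matches the FP8 checkpoint ID; both are deployment-specific choices, not values from the card):

```python
from openai import OpenAI

# FastDeploy exposes an OpenAI-compatible API; the host, port, and served
# model name below are assumptions specific to your own deployment.
client = OpenAI(base_url="http://localhost:8180/v1", api_key="none")

response = client.chat.completions.create(
    model="baidu/ERNIE-4.5-300B-A47B-FP8",  # assumed served model ID
    messages=[{"role": "user", "content": "Summarize the ERNIE 4.5 model family."}],
    temperature=0.8,  # suggested sampling settings from the card
    top_p=0.8,
    max_tokens=512,
)
print(response.choices[0].message.content)
```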