Dataset for paper "Teach Multimodal LLMs to Comprehend Electrocardiographic Images".
ECGInstruct (Training): https://huggingface.co/datasets/PULSE-ECG/ECGInstruct
ECGBench (Testing): https://huggingface.co/datasets/PULSE-ECG/ECGBench
We introduce PULSE-7B, a multimodal large language model (MLLM) specifically designed for ECG image interpretation. Leveraging the comprehensive ECGInstruct dataset, which contains over one million instruction-tuning samples, PULSE-7B is tailored to handle a wide range of ECG-related tasks drawn from diverse data sources. While traditional ECG interpretation methods are often constrained by their reliance on raw physiological signals and limited to specific cardiac conditions, PULSE-7B addresses these limitations by enabling robust interpretation of both printed and digital ECG images, making it especially valuable in resource-limited settings where access to raw signals may be restricted. In conjunction with the introduction of ECGBench, a benchmark that includes four key tasks spanning nine datasets, our experiments demonstrate that PULSE-7B establishes new state-of-the-art performance, surpassing general MLLMs with an average accuracy improvement of 15% to 30%. This model showcases the potential to significantly advance ECG image interpretation, providing a more versatile and accurate tool for clinical practice.
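Instruction-tuning samples of the kind described above typically pair an ECG image with a question-and-answer exchange. A minimal sketch of assembling one such record follows; the field names ("image", "conversations") and the chat-style layout are illustrative assumptions, not ECGInstruct's actual schema:

```python
# Hypothetical shape of one image+Q/A instruction-tuning sample.
# Field names here are assumptions, not ECGInstruct's real schema.
def make_sample(image_path: str, question: str, answer: str) -> dict:
    """Assemble one ECG-image instruction record in a chat-style layout."""
    return {
        "image": image_path,
        "conversations": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ],
    }

sample = make_sample(
    "ecg_0001.png",
    "What rhythm does this ECG show?",
    "Normal sinus rhythm.",
)
# Each record holds exactly one user turn and one assistant turn.
print(len(sample["conversations"]))
```

In practice the dataset itself can be pulled from the Hub with the `datasets` library (e.g. `load_dataset("PULSE-ECG/ECGInstruct")`); consult the dataset card at the URL above for the actual split and field names.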
Citation
If you find this work helpful, please cite our paper: