Luffy503
UniBiomed
UniBiomed: A Universal Foundation Model for Grounded Biomedical Image Interpretation

We introduce UniBiomed, the first universal foundation model for grounded biomedical image interpretation, capable of generating accurate diagnostic findings while simultaneously segmenting the corresponding biomedical targets. UniBiomed is built on a novel integration of a Multi-modal Large Language Model (MLLM) and the Segment Anything Model (SAM), which unifies diverse biomedical tasks under universal training to advance grounded interpretation. We will continue to release more powerful versions of the model in this repo. If you find this repo useful for your research, please consider citing the paper.
VoCo
This work presents VoCo, a new method for large-scale 3D medical image pre-training. We release a new benchmark, including 160K volumes (42M slices) for pre-training, pre-trained models ranging from 31M to 1.2B parameters, various pre-training recipes, and implementations of 50+ downstream tasks.

Linshan Wu, Jiaxin Zhuang, and Hao Chen. "Large-Scale 3D Medical Image Pre-training with Geometric Context Priors." CVPR 2024 Extension.

Code link: https://github.com/Luffy03/Large-Scale-Medical