Medical image segmentation is a fundamental and critical step both for diagnosis and for evaluating disease evolution in many clinical workflows. Semi-supervised learning has been widely applied to medical image segmentation, but existing semi-supervised methods treat labeled and unlabeled images separately and ignore the explicit connection between them, thus disregarding essential shared information and hindering further performance improvements. In this study, we propose to incorporate transformer learning into a multi-task mean teacher model for semi-supervised segmentation, leveraging unannotated data and learning multiple types of shadow information concurrently. To make full use of unlabeled data, the model integrates consistency learning from up-to-date predictions for self-ensembling with multi-task consistency learning from task-level regularization that exploits geometric shape information. To address the scarcity of labeled data, the proposed segmentation framework relies mainly on unlabeled data and requires only a few labeled images; powered by a transformer and a carefully selected propagation strategy, it aims to alleviate this shortage and enhance the learning ability of the model. The resulting framework, termed MT-TransInfNet, surpasses many cutting-edge segmentation models. Extensive experiments on the COVID-SemiSeg dataset and publicly available CT volumes demonstrate that the proposed framework outperforms existing models and advances the state of the art in segmentation.
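To make the multi-task mean-teacher idea concrete, the following is a minimal sketch of one training step, not the authors' released MT-TransInfNet code. It assumes hypothetical student and teacher networks that each return a pair (segmentation logits, signed-distance-map prediction) for binary segmentation, and it illustrates the three ingredients named in the abstract: a supervised loss on the few labeled images, a task-level regularization tying the segmentation prediction to the geometric shape (distance-map) prediction, and a prediction-level consistency loss between the student and the EMA teacher on unlabeled images.

```python
# Minimal sketch (not the authors' released code) of a multi-task mean-teacher step.
# Assumptions: student/teacher return (seg_logits [B,2,H,W], sdm_pred [B,1,H,W]).
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Teacher weights are an exponential moving average of the student weights.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1 - alpha)

def semi_supervised_step(student, teacher, labeled_imgs, labels, unlabeled_imgs,
                         lambda_cons=0.1, lambda_task=0.3):
    # Supervised branch: segmentation loss on the few labeled images.
    seg_logits, sdm_pred = student(labeled_imgs)
    sup_loss = F.cross_entropy(seg_logits, labels)

    # Task-level regularization (geometric shape information): the foreground
    # probability implied by the signed distance map (via a steep sigmoid, so
    # SDM < 0 maps to ~1) should agree with the softmax foreground probability.
    fg_prob = torch.softmax(seg_logits, dim=1)[:, 1:2]
    sdm_fg = torch.sigmoid(-1500 * sdm_pred)
    task_loss = F.mse_loss(sdm_fg, fg_prob)

    # Unsupervised branch: prediction-level consistency (self-ensembling)
    # between the student and the EMA teacher on unlabeled images.
    u_seg_s, u_sdm_s = student(unlabeled_imgs)
    with torch.no_grad():
        u_seg_t, u_sdm_t = teacher(unlabeled_imgs)
    cons_loss = (F.mse_loss(torch.softmax(u_seg_s, dim=1),
                            torch.softmax(u_seg_t, dim=1))
                 + F.mse_loss(u_sdm_s, u_sdm_t))

    return sup_loss + lambda_task * task_loss + lambda_cons * cons_loss
```

In standard mean-teacher training, ema_update(teacher, student) is called after each optimizer step, and the consistency weight (lambda_cons here) is ramped up over the first training epochs; the specific loss terms, weights, and transformer backbone used by MT-TransInfNet are described in the paper itself.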