High-resolution medical images provide detailed information about the internal structures of the human body, which is essential for clinical diagnosis and precise quantitative image analysis and makes them a cornerstone of modern medical diagnostics. However, owing to limitations of imaging equipment and the acquisition environment, medical images often suffer from various types of noise and artifacts that degrade image quality and adversely affect clinical diagnosis. To address this issue, this study applies image super-resolution (SR) reconstruction techniques to lung CT images and investigates current advanced image SR algorithms and their limitations. The work focuses on the following three aspects.

First, convolutional neural networks (CNNs) treat all features equally, which can leave reconstructed images with blurred or incomplete edge structures. To address this, a medical image super-resolution model based on global attention (GASR) is proposed. The network first introduces a channel contrast-aware attention module (CCA) to enhance the contrast of feature maps and highlight edge-structure features. A layer self-attention module (LSA) is then proposed to capture long-range dependencies between hierarchical features, with a cyclic-shift mechanism strengthening the self-attention mechanism's cross-window information interaction. Finally, an enhanced spatial attention module (ESA) is placed at the end of the network to focus on informative regions in the spatial dimension and extract more representative high-frequency features. Experimental results show that GASR improves PSNR over the RCAN network by 0.39-0.83 dB on the ×2, ×3, and ×4 upscaling tasks, while reducing the parameter count (Params) by 37% and the floating-point operations (FLOPs) by 39%.

Second, because convolution is constrained by its local processing principle, CNNs cannot effectively capture global feature dependencies in medical image SR. A multi-scale overlapping cross-window network for medical image SR (MOCwinSR) is therefore proposed. The model uses a Transformer to capture long-distance feature dependencies in the image and enlarges the network's receptive field and feature-extraction capacity by combining windows of different sizes: features are grouped by channel and assigned different window sizes to construct grouped multi-scale attention blocks (GMAB). An overlapping cross-attention module (OCA) is introduced in each group to compute the self-similarity weights of the image, enhancing cross-window information interaction within the network. Experimental results show that MOCwinSR outperforms SwinIR by 0.06-0.09 dB and GASR by 0.13-0.26 dB on the ×2, ×3, and ×4 upscaling tasks.

Third, to address the overly smooth and blurry images produced by existing networks, a medical image super-resolution model based on a Transformer adversarial network (MOCwinGAN) is proposed. The network uses the proposed MOCwinSR as its generator, remedying the single, fixed generator structure of existing GAN networks. Furthermore, to sharpen the discriminator's judgment of local texture details, a patch-based relativistic discriminator is proposed that evaluates local regions block by block and feeds the local texture information back to the generator, helping it better reconstruct fine details in the image. Experimental results show that on the ×4 upscaling task MOCwinGAN improves PSNR by an average of 0.73 dB over SRGAN and 0.9 dB over ESRGAN, while producing more realistic and natural visual effects.
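The abstract does not give the internal design of the CCA module, but contrast-aware channel attention is commonly implemented by pooling each channel with a standard-deviation-plus-mean statistic (so edge-rich, high-contrast channels score higher) and reweighting channels through a small squeeze-and-excite bottleneck. A minimal NumPy sketch under that assumption (the function name, weight shapes, and reduction ratio are illustrative, not the thesis's exact module):

```python
import numpy as np

rng = np.random.default_rng(0)

def cca_sketch(feat, w1, w2):
    """Contrast-aware channel attention sketch: reweight channels by std + mean."""
    c = feat.shape[0]
    flat = feat.reshape(c, -1)
    # Per-channel contrast statistic: std (edge/texture strength) plus mean.
    stat = flat.std(axis=1) + flat.mean(axis=1)       # shape (C,)
    hidden = np.maximum(0.0, w1 @ stat)               # squeeze with ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # excite with sigmoid, in (0, 1)
    return feat * weights[:, None, None]              # rescale each channel

C, H, W, r = 8, 16, 16, 4                             # r: hypothetical reduction ratio
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = cca_sketch(feat, w1, w2)
```

Because the attention weights lie in (0, 1), the module can only attenuate channels relative to one another; in the full network a residual connection would preserve the unattended signal.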
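The cyclic-shift mechanism that lets window-based self-attention exchange information across window borders can be illustrated in isolation. The sketch below assumes the common Swin-style convention of rolling the feature map by half a window before partitioning, so each new window mixes pixels from four neighbouring original windows; window size and shift amount in the actual LSA module may differ:

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W) map into non-overlapping ws x ws windows."""
    H, W = x.shape
    return (x.reshape(H // ws, ws, W // ws, ws)
             .transpose(0, 2, 1, 3)
             .reshape(-1, ws, ws))

def cyclic_shift_windows(x, ws):
    """Roll by half a window, then partition: windows now straddle old borders."""
    shifted = np.roll(x, shift=(-(ws // 2), -(ws // 2)), axis=(0, 1))
    return window_partition(shifted, ws)

x = np.arange(64).reshape(8, 8)
plain = window_partition(x, 4)          # 4 windows, aligned to the original grid
shifted = cyclic_shift_windows(x, 4)    # 4 windows, each mixing four old ones
```

Attention is then computed independently inside each window; alternating plain and shifted partitions across layers propagates information beyond any single window.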
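The abstract likewise omits the loss formulation of the patch-based relativistic discriminator. One plausible minimal sketch, assuming the standard relativistic-average objective (as in ESRGAN) applied to per-patch logits rather than a single image-level score:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def patch_relativistic_d_loss(real_logits, fake_logits):
    """Discriminator loss over per-patch scores (hypothetical formulation).

    Asks, per patch: is this real patch more realistic than the *average*
    fake patch (and vice versa)?  Lower loss = better separation.
    """
    d_real = sigmoid(real_logits - fake_logits.mean())
    d_fake = sigmoid(fake_logits - real_logits.mean())
    eps = 1e-12  # numerical guard for log
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())

# Well-separated patch scores yield a lower loss than indistinguishable ones.
loss_separated = patch_relativistic_d_loss(np.full(4, 5.0), np.full(4, -5.0))
loss_confused = patch_relativistic_d_loss(np.zeros(4), np.zeros(4))
```

Scoring patches individually gives the generator a dense, spatially localized gradient signal about texture realism, which is the stated motivation for the block-wise design.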