
Study On CT Sparse Reconstruction Method Based On Transformer

Posted on: 2024-08-03  Degree: Master  Type: Thesis
Country: China  Candidate: M M Chen  Full Text: PDF
GTID: 2544307115964099  Subject: Computer Science and Technology
Abstract/Summary:
The emergence of computed tomography (CT) technology has greatly advanced medical imaging diagnosis, and CT plays an increasingly important role in precision diagnosis and treatment. However, X-ray radiation may endanger human health, so low-dose CT has become a research hotspot. CT sparse reconstruction, which reconstructs images from projections acquired at sparse angles, is an effective way to realize low-dose CT. However, sparsely reconstructed images produced by methods such as Filtered Back Projection (FBP) contain severe stripe artifacts that affect subsequent diagnosis. Iterative methods and deep learning methods are the two main classes of high-precision sparse reconstruction methods; this thesis focuses on deep-learning-based sparse reconstruction. At present, deep learning methods represented by Convolutional Neural Networks (CNNs) are classical approaches to stripe-artifact suppression, but CNNs mainly model local correlations, and their ability to exploit global information is weak. Transformers, which have been successfully applied in natural language processing and computer vision, can compensate for this shortcoming by modeling long-range dependencies at a global scale. This thesis studies the effective coupling of the two, with emphasis on exploring how attention mechanisms can improve the accuracy of sparse reconstruction of CT images. The main work is as follows:

(1) A Channel-Attention-fused U-shaped Transformer (CA-Uformer) is proposed for high-precision sparse reconstruction of CT images. A dual-attention mechanism is designed that couples channel attention with the spatial self-attention of the Transformer. By replacing the feed-forward network with depthwise separable convolutions, an organic coupling of CNN and Transformer is realized, combining the local-information modeling ability of the former with the global-information association ability of the latter. From the TCIA dataset, 5,000 images are selected as the training set, 300 as the validation set, and 300 as the test set. Compared with four classical pure convolutional networks and one pure Transformer model, the proposed network effectively suppresses stripe artifacts and retains more image structure information.

(2) A Pyramid-Attention-fused CA-Uformer (PA-CA-Uformer) is proposed to further improve the accuracy of sparse reconstruction of CT images. Building on the previous work, a feature-pyramid attention mechanism is introduced, the implementation of the multi-head attention module is improved, and the dataset is expanded to 10,000 training images, 600 validation images, and 300 test images. Compared with the five comparison algorithms and the CA-Uformer of the previous chapter, the network further improves stripe-artifact suppression and achieves high-precision sparse reconstruction.
Keywords/Search Tags:computed tomography techniques, sparse reconstruction, convolutional neural networks, transformer, attention mechanism