
Study On CT-PET Image Fusion With Wavelet Transform In Precise And Accurate Radiotherapy

Posted on: 2012-11-28    Degree: Master    Type: Thesis
Country: China    Candidate: L X Liu    Full Text: PDF
GTID: 2214330374454161    Subject: Biomedical engineering
Abstract/Summary:
Radiotherapy, surgery, and chemotherapy are the three most important means of cancer treatment, and an estimated 60%~70% of cancer patients need radiation therapy, whether as radiotherapy alone, preoperative or postoperative radiotherapy, or radiotherapy combined with chemotherapy. Precise and Accurate Radiotherapy (PAR) is now the mainstream of modern radiation therapy. It requires accurate positioning, a precise plan, precise setup, and precise exposure; moreover, the three-dimensional high-dose distribution must conform to the shape of the target, and the dose intensity inside the target volume must be adjustable. Four techniques currently realize PAR: stereotactic radiotherapy, three-dimensional conformal radiation therapy, intensity-modulated radiation therapy, and image-guided radiotherapy. Because of their technical complexity, however, the advantages of PAR are far from fully exploited in clinical practice, and problems such as exact localization of the target remain to be solved.

Studies show that target localization is one of the most important factors affecting curative effect. The main reference image for target localization is currently the CT image. CT has high spatial resolution, is sensitive to high-density tissue, and is well suited to locating lesions, but it cannot clearly show the boundaries of soft-tissue tumors, in particular infiltrating tissue. Different physicians may therefore contour the target differently, omitting tissue or including it by mistake, which directly affects the subsequent course of treatment.

With the development of medical imaging technology, more and more imaging devices provide images of different modalities for clinical diagnosis and treatment. According to the information they carry, these images fall into two categories: anatomical images (e.g. X-ray, ultrasound, CT, MRI) and functional images (e.g. SPECT, PET). Anatomical images have high spatial resolution, show the structure of a lesion clearly, provide precise information such as tumor size and location, and clearly depict the relationship between the tumor and surrounding tissue, but they lack metabolic and functional information about the disease and cannot accurately delineate its boundaries. Functional images, obtained by nuclear-medicine techniques, are very sensitive to early metabolic or functional abnormalities; they can detect injury within complex anatomical structures, or locate a lesion before anatomical damage has occurred, but their low spatial resolution limits the diagnosis of small lesions and makes it hard to judge accurately the relationship between a lesion and the surrounding tissue. The two types of medical image therefore provide complementary information about the same region. If this complementary information can be combined and expressed as a whole, it provides more comprehensive information for medical diagnosis and for the study of the function and structure of the human body. This process of integration is medical image information fusion, or medical image fusion for short.

Medical image fusion developed on the basis of information fusion and image fusion. It is the technique of integrating useful information from two or more registered medical images into a new medical image, which can make clinical diagnosis and treatment more accurate and complete.
According to the degree of abstraction of the information processed, image fusion can be divided into three levels: pixel level, feature level, and decision level. Pixel-level fusion is the most widely applied in medical image fusion because it fuses the pixel data of the source images directly, retains more of the original scene information, and achieves high precision.

Pixel-level fusion methods fall into two categories: spatial-domain and transform-domain methods. Spatial-domain schemes, which use weighted averaging of gray values or selection of the larger or smaller value, are simple and easy to compute, but the fusion effect is ordinary and the range of application is limited. Transform-domain methods, such as multi-resolution pyramid methods and wavelet-transform methods, proceed in three steps: the source images are first decomposed by a spatial transform to obtain their decomposition coefficients, the fused coefficients are then selected according to chosen rules, and finally the fused image is obtained from the fused coefficients by the inverse transform. Among these, the wavelet transform has found wide application in image fusion because its time-frequency localization and multi-scale analysis can fully extract the complementary and redundant information of the source images and can highlight and strengthen the characteristics and details of the region of interest. This paper adopts this approach.

The Mallat algorithm is the classical algorithm of wavelet analysis, occupying a position equivalent to that of the FFT in Fourier analysis, and it laid the foundation for applying the discrete wavelet transform to image processing, image coding, and other fields. A discrete wavelet decomposition splits an image into two kinds of component: one low-frequency sub-band and three high-frequency sub-bands. The low-frequency sub-band reflects the approximation component and represents the basic information of the original image, while the high-frequency sub-bands reflect image detail in the horizontal, vertical, and diagonal directions, corresponding to edges, lines, region boundaries, and similar information. Image fusion based on the wavelet transform is, in essence, the application of different fusion rules to the image sub-bands to obtain the fused image.

Research indicates that the fusion rules and fusion operators are the core of wavelet-transform image fusion and directly influence its speed and quality. Most studies of wavelet image fusion discuss how to choose appropriate fusion operators for the high-frequency sub-bands, while the low-frequency sub-band is usually handled by pixel weighting or by selecting the larger pixel. This paper, in contrast, focuses on fusion of the low-frequency sub-band. After comparing and analyzing the advantages and disadvantages of three types of fusion rule (pixel-based, window-based, and region-based), a fusion rule is proposed: for the low-frequency sub-band, select the coefficient with the larger window neighborhood entropy; for the high-frequency sub-bands, select the coefficient with the larger window standard deviation. The reasoning is that the approximation sub-band is a smoothed version of the source image and inherits some of its properties, such as gray values and texture, so a neighborhood-entropy measure reflects the characteristics of the sub-band image very well.
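A minimal MATLAB sketch of this scheme for a single decomposition level is given below. It assumes the Wavelet and Image Processing Toolboxes are available; the file names and the 'db2' basis are illustrative assumptions, while the 3×3 window follows the text. This is a simplified illustration, not the thesis's exact implementation.

```matlab
% Sketch: single-level wavelet fusion with the proposed rules
% (low frequency: larger 3x3 neighborhood entropy; high frequency: larger 3x3 std).
ct  = im2double(imread('ct_slice.png'));    % registered CT slice (assumed file name)
pet = im2double(imread('pet_slice.png'));   % registered PET slice (assumed file name)

wname = 'db2';                              % illustrative wavelet basis
[cA1, cH1, cV1, cD1] = dwt2(ct,  wname);    % CT sub-bands
[cA2, cH2, cV2, cD2] = dwt2(pet, wname);    % PET sub-bands

% Low-frequency rule: keep the coefficient whose 3x3 neighborhood entropy is larger.
e1 = entropyfilt(mat2gray(cA1), true(3));
e2 = entropyfilt(mat2gray(cA2), true(3));
cA = cA1 .* (e1 >= e2) + cA2 .* (e1 < e2);

% High-frequency rule: keep the coefficient whose 3x3 window standard deviation is larger.
fuseHi = @(d1, d2) d1 .* (stdfilt(d1, ones(3)) >= stdfilt(d2, ones(3))) + ...
                   d2 .* (stdfilt(d1, ones(3)) <  stdfilt(d2, ones(3)));
cH = fuseHi(cH1, cH2);
cV = fuseHi(cV1, cV2);
cD = fuseHi(cD1, cD2);

fused = idwt2(cA, cH, cV, cD, wname);       % reconstruct the fused image
```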
The operational steps of the low-frequency rule are as follows: compute the entropy of a fixed-size window (here 3×3) and assign it to the center pixel; move the window over the whole image to obtain each pixel's neighborhood entropy, that is, a neighborhood-entropy image; finally, select the larger neighborhood entropy and take the corresponding image gray value as the gray value of the fused image. For the high-frequency sub-band images, which capture the sensory information related to edges and whose coefficients lie close to zero, the window standard deviation is used instead, because a larger standard deviation corresponds to rapid changes such as image edges and region boundaries.

The wavelet-transform fusion method is implemented in MATLAB, making full use of its Wavelet Toolbox, which provides basis functions from many different wavelet families. Decomposing the same image with different basis functions can give different results, so the choice of basis function must be considered. In addition, the more levels of wavelet decomposition, the more detail is available for fusion, but more levels are not always better: more decomposition levels produce more sub-bands, which increases inter-stage filtering and signal shift, reduces the spatial resolution of the low-frequency sub-band so that it becomes increasingly blurred, and makes the blocking effect caused by boundary extension, and the resulting boundary distortion, more and more obvious. The decomposition level should therefore be limited according to the type of image and the quality of the fused result. In this paper, on the basis of the relevant literature, MATLAB's built-in "wmaxlev" function is used to calculate the maximum decomposition level of each wavelet basis; the source image is then decomposed, the low-frequency sub-band is extracted, and its entropy is compared with that of the original image to determine the best decomposition level; finally, evaluation indices are applied to the fused images produced by the different wavelet functions to determine the best wavelet basis function. A sketch of this selection procedure appears after the concluding paragraph below.

In summary, this paper discusses medical image fusion methods at different levels. After a comprehensive comparison of the advantages and disadvantages of each method, a new image fusion method is built on the wavelet transform, with the low-frequency fusion rule based on window neighborhood entropy and the high-frequency fusion rule based on window standard deviation. Then, by discussing the characteristics of different wavelet basis functions, a method of selecting the basis function is put forward: first determine the best decomposition level for each candidate wavelet function, and then determine the optimal basis function by evaluating the fused images. The experimental data are PET and CT images, and fusion evaluation indices based on statistical and information-theoretic measures are used to assess the quality of the fused images.

The results show that the wavelet-transform algorithm is an effective method for image fusion: using the proposed wavelet fusion operators together with the method for determining the best parameters yields high-performance fused images. Finally, we discuss some problems that remain open and outline the work we plan to do in the future.
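The following MATLAB sketch illustrates the parameter-selection procedure described above. The candidate bases, the entropy tolerance, and the use of the toolbox function wfusimg with 'mean'/'max' rules as a stand-in for the entropy and standard-deviation rules are all assumptions made for illustration; the thesis's own evaluation indices would replace the plain image entropy used for scoring.

```matlab
% Sketch: choose decomposition level per basis with wmaxlev, then pick the
% basis whose fused image scores best (assumed criteria, see lead-in above).
ct  = im2double(imread('ct_slice.png'));    % registered CT slice (assumed file name)
pet = im2double(imread('pet_slice.png'));   % registered PET slice (assumed file name)

candidates = {'haar', 'db2', 'db4', 'sym4', 'bior2.2'};   % example wavelet bases
bestName = ''; bestScore = -Inf; bestLevAll = 1;

for k = 1:numel(candidates)
    wname  = candidates{k};
    maxLev = wmaxlev(size(ct), wname);      % maximum admissible level for this basis

    % Pick the level by comparing the entropy of the low-frequency sub-band
    % with that of the original image (assumed rule: deepest level whose
    % entropy stays within a small tolerance of the original).
    bestLev = 1;
    for lev = 1:maxLev
        [C, S] = wavedec2(ct, lev, wname);
        cA = appcoef2(C, S, wname, lev);
        if abs(entropy(mat2gray(cA)) - entropy(mat2gray(ct))) < 0.1
            bestLev = lev;
        end
    end

    % Fuse at the chosen level and score the result; wfusimg's built-in
    % 'mean'/'max' rules stand in for the proposed entropy/std rules here.
    fused = wfusimg(ct, pet, wname, bestLev, 'mean', 'max');
    score = entropy(mat2gray(fused));
    if score > bestScore
        bestScore = score; bestName = wname; bestLevAll = bestLev;
    end
end
fprintf('Selected wavelet basis: %s (level %d)\n', bestName, bestLevAll);
```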
Keywords/Search Tags:PAR, Image fusion, Wavelet transform, Fusion rules, Wavelet base, Decomposition level