With rapid economic and social development, changes in human eating habits have led to a decline in oral health. In clinical diagnosis, it is difficult for dentists to fully assess the actual condition inside a patient's mouth using traditional methods. With the development of computer technology and digital oral medicine, Cone Beam Computed Tomography (CBCT) has been adopted as a medical imaging technology for dental diagnosis. However, manual tooth segmentation in CBCT images is time-consuming, laborious, and demands a high level of proficiency from doctors, so an automatic CBCT tooth segmentation method is urgently needed to assist dentists in diagnosing dental diseases. Accurately segmenting teeth from CBCT images is challenging because the alveolar bone surrounding a tooth has a density similar to that of the tooth itself, and the edges between adjacent teeth are blurred. This paper proposes a two-stage tooth segmentation model based on deep learning. The first stage learns tooth edge prior information, and the second stage segments the teeth with the aid of that edge prior. Both stages use U-Net as the backbone network. An edge attention module is added to the first-stage network, using a channel attention mechanism to help the network learn important features, and multi-scale feature information is fused to obtain effective edge features. Feature fusion units are added to the second-stage network to integrate the decoding features learned from the tooth and edge images, further enhancing the network's ability to separate adjacent teeth and to distinguish teeth from the alveolar bone. Experiments on a CBCT tooth dataset compare the proposed method with several semantic segmentation networks. The results show that, compared with existing tooth segmentation algorithms, the proposed algorithm segments tooth edges more completely and achieves more accurate segmentation. In addition, ablation experiments demonstrate the effectiveness of the edge attention module, the deep supervision branch, and the feature fusion unit.
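
To make the two-stage design concrete, the sketch below shows one way it could be wired together in PyTorch: a first network predicts an edge map, whose output and decoder features are passed as a prior to a second network that segments the teeth. The tiny one-level U-Net, the `ChannelAttention` block, the `FeatureFusionUnit`, and the concatenation of the edge map into the second stage are all illustrative assumptions standing in for the paper's larger modules, not the authors' implementation.

```python
# Minimal sketch of the two-stage edge-prior segmentation idea (assumed design).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level encoder-decoder standing in for the full U-Net backbone."""
    def __init__(self, cin, cout):
        super().__init__()
        self.enc = conv_block(cin, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)
        self.head = nn.Conv2d(32, cout, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.head(d), d  # logits plus decoder features for fusion

class ChannelAttention(nn.Module):
    """SE-style channel attention, a stand-in for the edge attention module."""
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
                                nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))       # global average pool -> channel weights
        return x * w[:, :, None, None]

class FeatureFusionUnit(nn.Module):
    """Fuses tooth-decoder features with edge-decoder features (assumed form)."""
    def __init__(self, ch=32):
        super().__init__()
        self.fuse = conv_block(2 * ch, ch)

    def forward(self, tooth_feat, edge_feat):
        return self.fuse(torch.cat([tooth_feat, edge_feat], dim=1))

class TwoStageSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.edge_net = TinyUNet(1, 1)    # stage 1: learn the edge prior
        self.ca = ChannelAttention(32)    # attention over edge decoder features
        self.tooth_net = TinyUNet(2, 1)   # stage 2: image + edge map as input
        self.ffu = FeatureFusionUnit(32)
        self.final = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        edge_logits, edge_feat = self.edge_net(x)
        edge_feat = self.ca(edge_feat)
        x2 = torch.cat([x, torch.sigmoid(edge_logits)], dim=1)
        _, tooth_feat = self.tooth_net(x2)
        tooth_logits = self.final(self.ffu(tooth_feat, edge_feat))
        # Returning both outputs lets a loss supervise the edge branch as well,
        # in the spirit of the deep supervision branch mentioned above.
        return tooth_logits, edge_logits

# Usage: one 128x128 grayscale CBCT slice.
model = TwoStageSegmenter()
tooth, edge = model(torch.randn(1, 1, 128, 128))
print(tooth.shape, edge.shape)  # torch.Size([1, 1, 128, 128]) for both
```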