
Research On Deep Space-Based Image Processing Algorithms And Their Embedded Implementation

Posted on: 2020-05-01    Degree: Master    Type: Thesis
Country: China    Candidate: W Z Shi    Full Text: PDF
GTID: 2392330590972323    Subject: Circuits and Systems
Abstract/Summary:
Since the launch of the first artificial satellite in 1957, human exploration of space has never stopped and has gradually extended into deep space. Spacecraft navigation and control is a key technology in deep space exploration. As exploration distances grow, the limitations of traditional ground-station control become increasingly apparent, so autonomous spacecraft navigation has become one of the essential technologies for deep space exploration. Optical autonomous navigation uses high-resolution optical sensors to acquire deep-space images and applies image recognition to identify the target and determine its position and attitude, thereby enabling control of the probe's attitude and flight path. Because the spacecraft travels at high speed and the critical attitude-adjustment window is short, the spaceborne optical autonomous navigation device must acquire and process images in real time and deliver accurate positioning and navigation information promptly.

Centroid detection and edge detection are the key algorithms for locating celestial targets in optical autonomous navigation systems. Traditional centroid extraction and edge detection algorithms are complex and computationally intensive, while onboard computers, constrained by volume and power consumption, offer extremely limited computing power, making it difficult to meet the real-time image processing requirements of autonomous navigation. Moreover, the surface of the target body in deep space exploration is covered with craters and obscured by its atmosphere. Traditional edge detection algorithms can only extract edge points from the image and cannot distinguish craters and atmospheric features on the target's surface, producing a large number of false edge points and making centroid extraction considerably more difficult and error-prone.

To meet the real-time image processing requirements of optical autonomous navigation, this thesis proposes a centroid extraction algorithm and an optimized Canny edge detection algorithm that greatly reduce computation while preserving the accuracy of target centroid extraction. For these two algorithms, an FPGA-based pipelined processing architecture is designed to achieve real-time processing. Building on this work, a complete optical autonomous navigation image processing system is implemented on the Xilinx V4 development platform. In testing, the FPGA runs at a maximum frequency of 120 MHz and processes a 5120*3840 image in 18.75 ms, which satisfies the real-time image processing requirements of an optical autonomous navigation system.
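For reference, the snippet below is a minimal software sketch of a conventional grey-weighted centroid computation over a thresholded frame. It illustrates the classical centroid detection step only; it is not the simplified algorithm or the FPGA pipeline developed in the thesis, and the threshold value, image size, and function name are illustrative assumptions.

```python
import numpy as np

def grey_weighted_centroid(image: np.ndarray, threshold: int = 50):
    """Classical grey-weighted centroid of a bright target in a dark frame.

    Pixels below `threshold` are treated as background (an assumed value;
    a real system would derive it from the sensor's noise statistics).
    """
    img = image.astype(np.float64)
    img[img < threshold] = 0.0            # suppress background
    total = img.sum()
    if total == 0:
        raise ValueError("no pixels above threshold; target not detected")
    rows, cols = np.indices(img.shape)    # per-pixel row/column coordinates
    y_c = (rows * img).sum() / total      # intensity-weighted row centre
    x_c = (cols * img).sum() / total      # intensity-weighted column centre
    return x_c, y_c

# Example with a synthetic frame containing a single bright disc.
if __name__ == "__main__":
    frame = np.zeros((480, 640), dtype=np.uint8)
    yy, xx = np.ogrid[:480, :640]
    frame[(yy - 240) ** 2 + (xx - 320) ** 2 < 40 ** 2] = 200
    print(grey_weighted_centroid(frame))  # approximately (320.0, 240.0)
```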
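For the edge detection step, the sketch below runs the standard Canny detector as provided by OpenCV, purely as a baseline illustration. The thesis's optimized Canny variant and its suppression of false edges caused by craters and atmosphere are not reproduced here; the file name, smoothing kernel, and hysteresis thresholds are placeholder assumptions.

```python
import cv2

# Load a greyscale deep-space frame (placeholder path).
image = cv2.imread("target_body.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("target_body.png not found")

# Gaussian smoothing followed by the standard Canny detector.
# Kernel size and hysteresis thresholds are illustrative values only.
blurred = cv2.GaussianBlur(image, (5, 5), 1.4)
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("edges.png", edges)
```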
Keywords/Search Tags: FPGA, Optical autonomous navigation, Canny edge detection, Centroid detection, Image processing