
Research And Implementation Of Optical Flow Estimation And Motion Segmentation Technology Based On Asynchronous Event Stream And Traditional Image Fusion Through Deep Learning Network

Posted on: 2023-09-23
Degree: Master
Type: Thesis
Country: China
Candidate: C Liu
Full Text: PDF
GTID: 2558307169978589
Subject: Computer Science and Technology
Abstract/Summary:
With the rapid development of artificial intelligence, computer vision algorithms based on deep learning have matured and been widely applied. Popular research areas include object recognition, object detection, object tracking, and obstacle avoidance. Optical flow estimation and motion segmentation, as low-level computer vision tasks, provide the basic kinematic information these areas depend on and have likewise attracted broad attention. Traditional optical flow estimation and motion segmentation algorithms analyze image sequences and videos captured by conventional cameras. Under challenging conditions such as illumination changes and fast motion, imaging quality degrades severely, which in turn degrades algorithm performance, so robustness needs to be improved. With the development of neuromorphic imaging and silicon-retina technology, the emergence of a new type of dynamic vision sensor (i.e., the event camera) offers new ideas and opportunities for computer vision research. Event cameras feature high dynamic range, asynchronous operation, and low latency; they are especially suitable for dynamic, complex scenes and complement traditional cameras. However, conventional optical flow estimation and motion segmentation methods cannot be applied directly to event camera data, so dedicated algorithms must be designed. This paper therefore focuses on optical flow estimation and motion segmentation based on event streams.

First, to improve the robustness of optical flow estimation across different scenarios, an optical flow estimation network is proposed that adaptively fuses event streams and traditional images based on local motion features and multi-channel relationships. Second, to avoid the costly data labeling required by supervised learning, a self-supervised motion segmentation method based on optical-flow differences at segmentation boundaries is proposed, realizing self-supervised learning and improving the efficiency and accuracy of segmentation. Finally, combining these two results, a prototype system for optical flow estimation and motion segmentation is designed and implemented on the DV event camera development platform and the Qt framework, using the DAVIS 346 event camera and public datasets.

In summary, the work of this paper comprises the following three aspects:

First, to make full use of the complementary characteristics of asynchronous event streams and traditional images, an adaptive dynamic fusion method for event streams and traditional images based on local motion features and multi-channel relationships is proposed. The resulting optical flow estimation network with multi-channel feature fusion effectively improves the robustness of optical flow estimation across different scenarios. Experiments show that the method improves average accuracy over the baseline methods in a variety of scenarios and exhibits better robustness.

Second, to realize self-supervised motion segmentation using only optical flow as input, a self-supervised method based on the difference characteristics of optical flow at segmentation boundaries is proposed. Its core is a boundary difference loss function, which measures the discrepancy between the computed segmentation boundary and the true motion boundary; reducing this loss through backpropagation makes the boundary more accurate. Experiments show that segmentation quality on the FBMS59 dataset surpasses the baseline method. Because the moving objects in FBMS59 are faster, the optical-flow differences at their boundaries are more pronounced; the results both validate the method and indicate that it is particularly suited to high-speed scenes, which also highlights the special strengths of the event camera.

Third, based on the above work, this paper designs and implements a prototype system for optical flow estimation and motion segmentation based on the fusion of asynchronous event streams and traditional images. Built on the DV platform and the Qt framework, the system runs in real time on the DAVIS 346 event camera and offline on public datasets. Because performance testing requires ground truth, experimental verification and performance testing rely mainly on public datasets.
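The core idea of the second contribution, that a good segmentation boundary should sit where the optical flow field is discontinuous, can be sketched as follows. This is an illustrative NumPy reconstruction under stated assumptions, not the thesis's actual loss: the function name, the binary-mask input, and the neighbour-difference formulation are all assumptions for illustration, and the real method presumably uses a differentiable formulation suitable for backpropagation.

```python
import numpy as np

def boundary_flow_difference(flow, mask, eps=1e-6):
    """Score how well a binary segmentation boundary aligns with
    optical-flow discontinuities (illustrative sketch only).

    flow: (H, W, 2) optical flow field (u, v per pixel)
    mask: (H, W) binary segmentation (0 = background, 1 = moving object)
    Returns the mean flow-magnitude jump across boundary pixel pairs;
    a training loss could be, e.g., the negative of this score.
    """
    # Boundary pixel pairs: neighbours where the mask label changes.
    db_x = mask[:, 1:] != mask[:, :-1]   # vertical boundary segments
    db_y = mask[1:, :] != mask[:-1, :]   # horizontal boundary segments
    # Flow jump magnitude across the same neighbour pairs.
    df_x = np.linalg.norm(flow[:, 1:] - flow[:, :-1], axis=-1)
    df_y = np.linalg.norm(flow[1:, :] - flow[:-1, :], axis=-1)
    n = db_x.sum() + db_y.sum()
    if n == 0:
        return 0.0
    # Large flow jumps at boundary pairs mean the boundary lies on a
    # motion edge; small jumps mean the boundary cuts through a region
    # of uniform motion.
    return float((df_x[db_x].sum() + df_y[db_y].sum()) / (n + eps))
```

A mask whose boundary coincides with a motion discontinuity scores high, while a mask whose boundary crosses uniformly moving pixels scores near zero, which matches the abstract's observation that the method works best on fast-moving objects with pronounced boundary flow differences.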
Keywords/Search Tags: Event Camera, Optical Flow Estimation, Deep Learning, Motion Segmentation