The number of visually impaired people in the world is huge, and traveling outdoors is a major problem they face. For visually impaired assistance devices, the first challenge is environmental perception. 3D-imaging technologies have attracted much attention, since they obtain three-dimensional spatial information and thus provide more information than planar images. Among them, stereo vision is better suited to visually impaired assistance devices, because it adapts to both indoor and outdoor conditions better than the structured-light and time-of-flight methods. However, the stereo matching algorithm imposes strict requirements on the relative poses of the cameras, while the working conditions of visually impaired assistance devices can easily cause the cameras to become misaligned. Frequent calibration is therefore unavoidable, which is extremely inconvenient. In addition, traditional binocular stereo vision suffers from several problems, including the high computing load on the host computer, the contradiction between the close-range blind region and large-range depth output, and insufficient adaptability to dark and low-texture environments. To address these challenges, this paper proposes an unconstrained self-calibration method that achieves accuracy and reliability comparable to template-based calibration without special scenes or special cooperation from the user. An embedded stereo vision system is designed in which image acquisition, filtering, interpolation, rectification, and stereo matching are all integrated into an FPGA, realizing low-power, low-latency, real-time disparity map output. A multi-baseline stereo design is adopted to resolve the contradiction between the close-range blind region and large-range depth output. Finally, the environmental adaptability of the stereo vision sensor is improved by combining an RGB-IR image sensor with a laser speckle projector.
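
As a rough illustration of the trade-off between the close-range blind region and long-range depth accuracy mentioned above, the sketch below uses the standard pinhole stereo relation Z = f·B/d to compare a short and a long baseline. The focal length, baseline lengths, and disparity search range are illustrative assumptions, not parameters of the system described in this paper.

```python
# Illustrative sketch (assumed parameters, not the paper's design values):
# how baseline length trades the close-range blind region against far-range
# depth accuracy, using the pinhole stereo relation Z = f * B / d.

def min_distance_m(f_px: float, baseline_m: float, max_disparity_px: int) -> float:
    """Closest measurable depth: disparity is capped by the search range."""
    return f_px * baseline_m / max_disparity_px

def depth_error_m(f_px: float, baseline_m: float, depth_m: float,
                  disparity_step_px: float = 1.0) -> float:
    """Approximate depth quantization error: dZ ~ Z^2 / (f * B) * dd."""
    return depth_m ** 2 / (f_px * baseline_m) * disparity_step_px

F_PX = 700.0      # assumed focal length in pixels
MAX_DISP = 128    # assumed disparity search range in pixels

for name, baseline in [("short baseline (0.05 m)", 0.05),
                       ("long baseline (0.20 m)", 0.20)]:
    near = min_distance_m(F_PX, baseline, MAX_DISP)
    err_at_5m = depth_error_m(F_PX, baseline, 5.0)
    print(f"{name}: blind region ends at ~{near:.2f} m, "
          f"depth error at 5 m ~ {err_at_5m:.2f} m")
```

Under these assumed numbers, the short baseline starts measuring at roughly 0.27 m but its depth error at 5 m is about 0.7 m, whereas the long baseline only starts at about 1.1 m while keeping the 5 m error near 0.18 m; combining several baselines in one sensor covers both regimes, which is the motivation for the multi-baseline design.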