Kinect is a depth camera from Microsoft. Although it is marketed mainly for motion-sensing games, it is also widely used for dense 3D reconstruction in computer graphics because of its high-quality depth maps and high frame rate. Especially after Newcombe et al. published KinectFusion in 2011, a system for real-time dense 3D scene reconstruction, many applications have been built on the Kinect.

The main idea of KinectFusion is to maintain a volume in GPU memory, divided into N*N*N voxels, each storing two values: a truncated signed distance function (TSDF) value and a weight. Voxels with a zero TSDF and a non-zero weight lie on the surface of the objects being reconstructed, so the zero isosurface can be taken as the object surface, and a Marching Cubes pass generates the dense mesh.

Looking at how KinectFusion is implemented, the reconstructed object is the envelope of discrete voxels, so the final result loses some of the precision of the original data, and the achievable precision depends on the voxel size. Using smaller voxels (a larger N), however, requires at least 4 GB of graphics memory, which is beyond what today's hardware makes practical (see the memory estimate sketched below).

This paper presents an offline reconstruction method based on KinectFusion that aims to preserve all the precision and detail of the input data and to generate a globally optimized point cloud. By building on Kintinuous, it can also, in theory, reconstruct unbounded scenes. First, we use Kintinuous to track the camera trajectory, which gives a rough initial alignment of the point clouds, and we dump the original depth maps from the depth camera (ideally a high-precision camera; in this paper we use a Kinect 1). We then restore the vertex positions and normals using the initial alignment provided by the camera tracking. After this rough alignment, we run a point-to-plane ICP algorithm to find all corresponding points. Finally, we apply a global optimization method to minimize the total alignment error.
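To make the memory argument concrete, the following is a minimal sketch of the voxel layout described above; the 8-byte voxel (one float TSDF plus one float weight) and the 512-per-axis resolution are illustrative assumptions, not values taken from KinectFusion itself.

```cpp
#include <cstddef>
#include <cstdio>

// Assumed voxel layout: one TSDF value and one weight per voxel.
// Real implementations often pack these into 16-bit fields instead.
struct Voxel {
    float tsdf;   // truncated signed distance to the nearest surface
    float weight; // accumulated confidence; zero means "never observed"
};

int main() {
    const std::size_t N = 512; // assumed volume resolution per axis
    const std::size_t bytes = N * N * N * sizeof(Voxel);
    // 512^3 voxels * 8 B = 1 GiB; doubling N to 1024 already needs 8 GiB,
    // which is the memory wall discussed in the text.
    std::printf("%zu^3 voxels -> %.2f GiB\n",
                N, bytes / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```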
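The step of restoring vertex positions and normals from a dumped depth map can be sketched as follows. The pinhole back-projection and the central-difference normal are standard techniques; the intrinsic values and the depth() accessor are hypothetical, since the real ones come from calibrating the Kinect 1.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / n, v.y / n, v.z / n };
}

// Assumed Kinect 1 intrinsics; real values come from calibration.
const float fx = 525.0f, fy = 525.0f, cx = 319.5f, cy = 239.5f;

// Back-project pixel (u, v) with metric depth z into camera space.
Vec3 backproject(int u, int v, float z) {
    return { (u - cx) * z / fx, (v - cy) * z / fy, z };
}

// Normal at (u, v) from the cross product of central differences of
// neighbouring vertices; depth() is a hypothetical accessor that reads
// the dumped depth map.
Vec3 normalAt(int u, int v, float (*depth)(int, int)) {
    Vec3 right = backproject(u + 1, v, depth(u + 1, v));
    Vec3 left  = backproject(u - 1, v, depth(u - 1, v));
    Vec3 down  = backproject(u, v + 1, depth(u, v + 1));
    Vec3 up    = backproject(u, v - 1, depth(u, v - 1));
    return normalize(cross(sub(right, left), sub(down, up)));
}
```

In the actual pipeline, each camera-space vertex would additionally be transformed by the pose that Kintinuous estimated for its frame, which yields the rough initial alignment described above.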
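For each pair of overlapping clouds, the point-to-plane ICP stage minimizes the standard objective below (the notation is ours, not quoted from the paper): for correspondences (p_i, q_i) with target normal n_i, and rigid motion (R, t),

E(R, t) = \sum_i \big( (R p_i + t - q_i) \cdot n_i \big)^2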
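The final global optimization can then be read as jointly refining all frame poses T_k = (R_k, t_k) so that the sum of these point-to-plane residuals over every overlapping frame pair (a, b) is minimized; this is again a standard formulation, stated here as an assumption rather than the paper's exact objective:

\min_{\{T_k\}} \sum_{(a,b)} \sum_i \big( (T_a p_i - T_b q_i) \cdot (R_b n_i) \big)^2

where p_i is a point from frame a, q_i its correspondence in frame b, and R_b n_i the normal at q_i rotated into the world frame.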