Outdoor scene understanding plays a key role for Unmanned Ground Vehicles (UGVs) navigating complex urban environments. This paper focuses on scene understanding for UGVs equipped with a 3D laser scanner. A 2D Optimal Bearing Angle (OBA) model is developed to simplify the computation of point classification. Compared with the traditional 2D model, the OBA model overcomes gray-level mutation, viewpoint-selection sensitivity, and poor expression of detail, and it serves as the basic model in our point classification algorithm.

A super-pixel algorithm applied to the OBA image provides fast scene segmentation. A novel scene understanding algorithm based on Gentle AdaBoost and a re-classification strategy is proposed to achieve fast 3D point classification. Gentle AdaBoost combines the weak spatial-geometry and texture features of super-pixel patches into a strong classifier. Possible false classifications in the uncertain super-pixel patches of the OBA image are projected back to the raw 3D laser points, and a re-classification step refines the 3D scene understanding for UGVs.

Relying solely on local features, the classification results cannot reach a global optimum, so we present a novel super-pixel-based Conditional Random Field (CRF) framework to exploit the contextual information of the scene. Super-pixel patches of the OBA image are used as the basic nodes of the CRF instead of raw 3D points, which significantly reduces the number of nodes and greatly improves real-time performance. For training and inference, a local shape feature, a neighborhood distribution feature, and an OBA texture feature are employed to ensure classification accuracy.

Four 3D point cloud datasets are used to evaluate the algorithms: DUT1 and DUT2 from our self-developed UGV platform, KAIST from Korea, and New College from Oxford University. Experimental results and data analysis show the effectiveness of the approach across various environments, and comparison with existing methods demonstrates higher real-time performance for the proposed methods.
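For orientation, the following is a minimal sketch of the classical Bearing Angle (BA) image computation that the OBA model builds upon, assuming a row-major range image and a fixed angular resolution; the array layout, the d_phi parameter, and the grayscale mapping are illustrative assumptions, and the viewpoint-optimization step of the OBA model itself is not reproduced here.

```python
import numpy as np

def bearing_angle_image(ranges, d_phi):
    """Compute a classical Bearing Angle (BA) image from a 2D range image.

    ranges : (H, W) array of laser range measurements, one scan line per row.
    d_phi  : angular resolution (radians) between adjacent beams in a row.

    Returns an (H, W-1) array of bearing angles in [0, pi].
    """
    r1 = ranges[:, 1:]    # rho_i
    r0 = ranges[:, :-1]   # rho_{i-1}
    num = r1 - r0 * np.cos(d_phi)
    den = np.sqrt(r1**2 + r0**2 - 2.0 * r1 * r0 * np.cos(d_phi))
    ba = np.arccos(np.clip(num / np.maximum(den, 1e-9), -1.0, 1.0))
    return ba

def ba_to_gray(ba):
    """Map bearing angles to an 8-bit grayscale image for segmentation."""
    return (255.0 * ba / np.pi).astype(np.uint8)
```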
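The abstract does not detail the boosting stage; as a rough illustration under the standard Gentle AdaBoost formulation, a regression-stump version operating on per-super-pixel feature vectors might look like the sketch below. The feature matrix, label encoding, and the number of rounds are placeholders, not the paper's actual configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gentle_adaboost_train(X, y, n_rounds=100):
    """Gentle AdaBoost with regression stumps; labels y must be in {-1, +1}.

    Each round fits a weighted least-squares stump f_m, adds it to the
    additive model, and re-weights samples by exp(-y * f_m(x)).
    """
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(n_rounds):
        stump = DecisionTreeRegressor(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        f = stump.predict(X)
        w *= np.exp(-y * f)
        w /= w.sum()
        stumps.append(stump)
    return stumps

def gentle_adaboost_predict(stumps, X):
    """Sum the weak responses and take the sign as the strong decision."""
    F = np.sum([s.predict(X) for s in stumps], axis=0)
    return np.sign(F)
```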
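Likewise, the super-pixel CRF can be pictured as a unary term from the boosted classifier plus a pairwise smoothness term over adjacent super-pixels; the Potts pairwise form and the beta weight below are assumptions for illustration, not the energy defined in the paper.

```python
import numpy as np

def superpixel_crf_energy(labels, unary, edges, beta=1.0):
    """Energy of a super-pixel CRF labeling.

    labels : (N,) integer label per super-pixel node
    unary  : (N, K) per-node label costs (e.g., negative log-probabilities
             from the boosted classifier)
    edges  : iterable of (i, j) index pairs for adjacent super-pixels
    beta   : pairwise smoothness weight (illustrative)
    """
    energy = unary[np.arange(len(labels)), labels].sum()
    for i, j in edges:
        energy += beta * (labels[i] != labels[j])  # Potts penalty on disagreeing neighbors
    return energy
```

Because the nodes are super-pixel patches rather than individual 3D points, N is orders of magnitude smaller, which is the source of the real-time gain described above.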