Motor vehicle drivers and vehicle vision terminals are sensitive to illumination and sensor properties under low-visibility conditions such as nighttime and hazy weather. Both the distance and the sharpness of observation are reduced: the visible range shrinks, observation ability declines, and traffic conditions outside the illuminated area become difficult to perceive. As a result, the driver struggles to judge the surrounding environment and to detect pedestrians on the road in time. This is one of the major technical obstacles facing current driverless cars and driver-assistance systems. Because current night-vision sensing devices produce grayscale images, research on grayscale image colorization is the foundation of night-vision colorization. Using computer vision and deep learning, the multi-source image signals collected by the on-board camera are processed, analyzed, and understood so that night-vision images approach daytime quality, increasing the visibility of driving conditions. On this basis, detecting and recognizing pedestrians within the visible range can effectively prevent traffic accidents and improve the reliability of driverless technology and other advanced driver-assistance systems.

This paper studies colorization methods under low-visibility conditions such as night vision and hazy weather. It aims to establish a colorization theory and method for road traffic under low visibility, laying the foundation for subsequent research on nighttime driverless driving and driving-assistance technology. The grayscale image colorization problem is addressed in a four-pronged approach.

First, visible images and infrared night-vision images are fused so that road targets can be colorized accordingly. This kind of method must register the two image sources within the same window, so that the complementary information available from the multi-source images can be fully exploited. In the same scene, multi-source images interpret image content better than any single-source image because they carry more information.

Second, for low-light night vision and other scenes where color visible light cannot be collected, colorization of the grayscale image itself is considered; target detection and tracking can then be performed on the processed image. Such methods can be generalized as model-based grayscale colorization techniques, which build models from information such as image structure, target type, and image understanding.

Third, building on the colorization model, a dedicated model is established for each target scene. The scene type of the input image is determined by a scene-classification codebook matching method, so that grayscale colorization can apply a different colorization style to each scene.

Finally, to increase the accuracy and efficiency of the algorithm as the training set grows, seed points are used as the units of a diffusion-based coloring and optimization process. After a few superpixels of the target grayscale image are colored by the above colorization model, the color seed points are diffused globally to colorize the whole grayscale image.

Based on a review and analysis of a large body of related literature, research is carried out in four directions: multi-source image fusion, night-vision image colorization, scene-guided grayscale image colorization, and optimized color diffusion for colorization. The main work of the paper includes:

1. Establishing a color image fusion framework. In RGB color space, color image fusion suffers from color distortion caused by spatial transformation and from the strong correlation between channels; at the same time, image fusion based on principal component analysis makes poor use of image structure and loses spectral information. An image fusion framework based on improved two-dimensional principal component analysis (2DPCA) is designed to address these problems. Exploiting the structural characteristics of RGB color images, the 2DPCA treats the RGB components of each row and column direction as primitive components of the images to be fused. The fused image is reconstructed by a covariance-based linear weight assignment; the principal components are then replaced according to the structural characteristics of the reconstructed image, and the fusion is completed by a covariance-based weighted inverse transform.

2. Training the image colorization model. Unlike colorization based on multi-source information fusion, a trained colorization model can colorize a grayscale image without any additional information. The main idea is to convert the color images of the training set into grayscale images and extract a 121-dimensional feature vector for each pixel of the converted grayscale image. The 121-dimensional feature of each pixel is used as the input layer, the RGB value of the corresponding pixel in the original color image is used as the output layer, and an error back-propagation neural network is established between them. Finally, the colorization model is converged by feeding in a large number of training samples over many iterations.

3. Studying the scene-guided image colorization method. Using the proposed colorization network model, a dedicated colorization model is trained for each scene type. To improve performance, the input and training images are first classified into scenes: a linear image classification method generates a scene-guide codebook for the various scene types, which determines the scene class of the input target grayscale image; the colorization model previously trained for that scene is then applied.

4. Designing a color seed-point propagation strategy and optimization method. During training and testing, the colorization model traverses all sampled pixels in the training set. As the training samples increase, two adverse consequences easily arise. First, the computational cost and training time grow sharply. Second, the error of the converged model increases: failing to screen training samples introduces a large amount of interference and duplication, which aggravates the ill-posedness and over-fitting of the network. As a result, the added computation does not improve accuracy. In this paper, the training and test images are first segmented into superpixels, and the geometric center of each superpixel serves as the operation unit for training. On this basis, a color seed-point diffusion strategy is proposed, together with an optimization method for solving it, which reduces the computational cost and increases the accuracy of the colorization model.
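The seed-point diffusion stage described in point 4 can be sketched in a few lines. The sketch below is a simplified stand-in, not the paper's optimization method: chrominance is clamped at a handful of seed pixels (in practice, the superpixel centers colored by the trained model) and spread outward by iterative averaging with intensity-similarity weights, so color flows freely inside homogeneous regions but stops at strong edges. All function names and parameters are illustrative.

```python
import numpy as np

def diffuse_seed_colors(gray, seeds, n_iters=200, sigma=0.05):
    """Propagate chrominance from sparse seed points over a grayscale image.

    gray  : (H, W) float array of intensities in [0, 1]
    seeds : dict mapping (row, col) -> (c1, c2) chrominance pair
    Returns an (H, W, 2) chrominance map.
    """
    H, W = gray.shape
    chroma = np.zeros((H, W, 2))
    fixed = np.zeros((H, W), dtype=bool)
    for (r, c), val in seeds.items():
        chroma[r, c] = val
        fixed[r, c] = True
    init = chroma.copy()

    for _ in range(n_iters):
        acc = np.zeros_like(chroma)
        wsum = np.zeros((H, W))
        # Average over the four axis-aligned neighbours of every pixel
        # (image borders wrap around -- a sketch simplification).
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb_chroma = np.roll(chroma, (dr, dc), axis=(0, 1))
            nb_gray = np.roll(gray, (dr, dc), axis=(0, 1))
            # Affinity weight: near 1 when neighbouring intensities match,
            # near 0 across a strong edge, so colour respects boundaries.
            w = np.exp(-((gray - nb_gray) ** 2) / (2.0 * sigma ** 2))
            acc += w[..., None] * nb_chroma
            wsum += w
        chroma = acc / wsum[..., None]
        chroma[fixed] = init[fixed]  # seed chrominance stays clamped
    return chroma

# Usage: two flat regions separated by an intensity edge, one seed in each.
gray = np.full((8, 8), 0.2)
gray[:, 4:] = 0.8
out = diffuse_seed_colors(gray, {(4, 1): (1.0, 0.0), (4, 6): (0.0, 1.0)})
```

Because the affinity weight collapses across the 0.2/0.8 edge, each seed's chrominance fills only its own region; the full method in this paper additionally selects and optimizes the seed set rather than diffusing from arbitrary points.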