The dynamic range of an imaging sensor denotes its ability to capture both bright and dark targets in a scene, and it is constrained by hardware conditions and manufacturing cost. The dynamic range that a single shot can record is much narrower than that of the human eye, which makes the resulting image look unsatisfying. Software-based dynamic range extension technology has therefore been proposed to greatly enhance imaging quality without significantly increasing hardware cost. How to effectively collect the dynamic information in a scene and display as much of it as possible on a display with limited dynamic range has become a research focus in this field. In the past few years, a large number of related methods have been proposed. However, limited by imaging efficiency and computational cost, only multi-exposure fusion has been widely adopted; and multi-exposure feature extraction based on traditional image operators is challenged by the need to manually design structure and scene parameters, ghosting artifacts are easily produced when objects in the scene move, and the imaging quality still needs further improvement. In response to these problems, this thesis builds a model for feature extraction and fusion of multi-exposure images based on the learning theory represented by neural networks, which has achieved many breakthroughs in recent years.

The main research content of this thesis includes the following two points:

1. Based on convolutional neural networks, a multi-branch channel feature extraction model for multi-exposure images is constructed. During the research, previous feature extraction and processing networks are analyzed, together with the factors that suppress training and the gradient-inversion phenomenon that produces halos at the edges of high-contrast objects in the image. An anisotropy-preserving convolution block is retained, and a multi-branch channel feature extraction model is built on it. This work ensures that the model can effectively extract the corresponding feature information of each exposed image during training.

2. On top of the feature extraction model, a multi-scale exposure-image feature reconstruction and fusion model is built with an encoder-decoder structure. Based on the theory of the receptive field, the multi-exposure features are reconstructed, and local and global features are blended at multiple scales of resolution. At the same time, a fusion strategy based on image structure and pixel intensity is proposed to strengthen the fusion effect while making the final result more consistent with visual perception.

To verify the effectiveness of the above methods, this thesis performs a vertical comparison using different combinations of the proposed methods, and a horizontal comparison against current learning-based dynamic range expansion methods. Experimental results show that the proposed method can effectively extract the features of multi-exposure images for fusion, and the fusion results are superior to existing methods in terms of visual perception, image structure, and pixel intensity.
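To make the multi-branch channel feature extraction idea concrete, the sketch below runs each exposure through several parallel filter branches and stacks the responses as feature channels. This is a minimal hand-crafted NumPy analogue, not the thesis's actual trained network; the kernel choices (Laplacian, box, Sobel) and the `multi_branch_features` helper are illustrative assumptions.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 'same'-padded 2D cross-correlation for one single-channel image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero padding keeps output size
    out = np.zeros(x.shape, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * kernel).sum()
    return out

def multi_branch_features(exposures):
    """Run every exposure through parallel branches (one kernel per branch)
    and stack all responses along a channel axis, mimicking how a
    multi-branch block yields several feature maps per input image."""
    kernels = [
        np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float),    # Laplacian: edges
        np.full((3, 3), 1 / 9.0),                               # box blur: local mean
        np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),  # Sobel-x: gradients
    ]
    feats = [conv2d_same(img, k) for img in exposures for k in kernels]
    return np.stack(feats)  # (num_exposures * num_branches, H, W)

# Usage: two 8x8 exposures, three branches -> six feature channels.
exposures = [np.random.rand(8, 8) for _ in range(2)]
feats = multi_branch_features(exposures)
```

In a learned model the kernels would be trainable parameters rather than fixed operators; the branch structure is what lets different channels specialize to different exposure cues.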
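The encoder-decoder idea of blending global features at coarse resolution and local features at fine resolution can be sketched without any learned weights. The toy pipeline below is an assumption-laden stand-in for the thesis's model: `downsample`/`upsample` play the roles of encoder and decoder stages, and the residual-detail merge is a simple hand-written rule, not the proposed fusion network.

```python
import numpy as np

def downsample(x):
    """Average-pool by 2 (encoder step); halving the resolution widens the
    effective receptive field of anything applied at the coarse scale."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbor upsample by 2 (decoder step)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def multiscale_fuse(a, b):
    """Fuse two maps at two resolutions: average the coarse (global)
    structure, then add back the stronger fine (local) detail residual."""
    coarse = 0.5 * (downsample(a) + downsample(b))   # global features, low res
    detail_a = a - upsample(downsample(a))           # local residual of a
    detail_b = b - upsample(downsample(b))           # local residual of b
    detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)
    return upsample(coarse) + detail                 # decode back to full res

# Usage: fusing a bright and a dark constant patch meets in the middle.
a = np.ones((4, 4))
b = np.zeros((4, 4))
fused = multiscale_fuse(a, b)
```

A real encoder-decoder would use more levels and learned convolutions at each scale, but the split into coarse blending plus detail reinjection is the same structural idea.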
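A fusion strategy driven by image structure and pixel intensity can be illustrated with per-pixel weight maps: each exposure contributes more where it has strong local structure and well-exposed (mid-gray) intensity. The function below is a hypothetical NumPy sketch in that spirit; the gradient-based structure term, the Gaussian intensity term, and the `sigma` parameter are all illustrative choices, not the thesis's exact strategy.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Weight-map fusion of single-channel exposures with values in [0, 1]."""
    stack = np.stack([img.astype(np.float64) for img in images])  # (N, H, W)

    # Structure term: gradient magnitude via finite differences.
    gy = np.abs(np.diff(stack, axis=1, prepend=stack[:, :1, :]))
    gx = np.abs(np.diff(stack, axis=2, prepend=stack[:, :, :1]))
    structure = gx + gy

    # Intensity term: Gaussian falloff from the mid-gray value 0.5,
    # favoring pixels that are neither under- nor over-exposed.
    intensity = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))

    weights = structure * intensity + 1e-12        # epsilon avoids divide-by-zero
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across exposures
    return (weights * stack).sum(axis=0)           # per-pixel weighted average

# Usage: an under- and an over-exposed flat patch fuse to mid-gray.
under = np.full((4, 4), 0.1)
over = np.full((4, 4), 0.9)
fused = fuse_exposures([under, over])
```

In the learning-based setting described above, such hand-designed weights are replaced by features the network extracts, but the goal is the same: favor contributions that preserve structure and sit in the well-exposed intensity range.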