In low-light conditions, such as cloudy days, indoor scenes, and object occlusion, images captured by imaging devices often suffer from low brightness and contrast, severe loss of detail, and heavy noise. Such images not only interfere with human judgment but also prevent machine vision systems from extracting key image features. Low-light enhancement can effectively improve image quality and recover richer texture details, which is of great significance and application value. Low-light image enhancement has made rapid progress in recent years, and many excellent algorithms have emerged, yet the results of existing methods still suffer from color distortion, lack of contrast, over-enhancement, and blurring, causing visual discomfort. To further improve the visual quality of images, this paper draws on convolutional neural networks to investigate low-light image enhancement. The main work of this paper includes: (1) A progressive multi-stage low-light image enhancement network is proposed, which consists of three stages: two feature extraction stages and one feature enhancement stage. Each pair of adjacent stages is connected by a supervised attention module and a cross-stage feature fusion module. Before the first stage, the input low-light image is decomposed into high- and low-frequency components using a Laplacian pyramid. In the first two stages, deep features of the decomposed components are then extracted by a residual attention module and an encoder-decoder. In the feature enhancement stage, the contextual information extracted in the previous stages is fused with the features of the current stage to generate clearer high-resolution features. Finally, the enhanced image is output. Extensive experiments are conducted to demonstrate the expressiveness and adaptability of the proposed method on several 
publicly available low-light datasets. The experimental results show that the proposed method outperforms existing methods in both objective evaluation and visual comparison. (2) A low-light image enhancement network based on attention-guided multi-channel feature fusion is proposed. In this network, a feature extraction module first acquires deep features from the downsampled low-light image and fits them to an affine bilateral grid. Next, an attention-based residual dense block is used to focus on fine details and spatial information. Meanwhile, the network considers all color channels: a feature reconstruction module linearly interpolates the channel features with the bilateral grid to obtain high-quality features containing rich color and texture information. A feature fusion module then fuses the features carrying different information, and an enhancement module further recovers the textures and details of the image. Finally, the enhanced image is output. The method effectively improves the quality of low-light images and performs well on multiple public datasets. Extensive experimental results show that, compared with existing methods, the proposed method achieves better results both quantitatively and qualitatively.
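The Laplacian-pyramid decomposition used in part (1) can be illustrated with a minimal NumPy sketch. The 2x2 average pooling, nearest-neighbour upsampling, and level count below are illustrative assumptions for clarity, not the network's actual implementation:

```python
import numpy as np

def downsample(img):
    # Simple low-pass + decimation: 2x2 average pooling.
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour expansion back to double size.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Split img into per-scale high-frequency bands plus a low-frequency residual."""
    bands, current = [], img
    for _ in range(levels):
        low = downsample(current)
        bands.append(current - upsample(low))  # high-frequency detail at this scale
        current = low
    bands.append(current)                      # final low-frequency component
    return bands

def reconstruct(bands):
    # Invert the decomposition: upsample the residual and add each band back.
    img = bands[-1]
    for high in reversed(bands[:-1]):
        img = upsample(img) + high
    return img

img = np.random.rand(64, 64)
bands = laplacian_pyramid(img, levels=3)
```

With this construction the decomposition is exactly invertible, which is what lets the network enhance the frequency components separately and still recover a full image.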
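The affine bilateral-grid step in part (2), where per-cell affine coefficients are applied back to a full-resolution image, can be sketched as follows. Nearest-cell lookup stands in for the interpolated "slicing" used in practice, and all names and shapes here are illustrative assumptions:

```python
import numpy as np

def slice_bilateral_grid(grid, guide):
    """Apply per-cell affine transforms (scale a, offset b) to a full-resolution
    image, selecting grid cells by spatial position and guide intensity.

    grid:  (gh, gw, gd, 2) array of affine coefficients (a, b) per cell.
    guide: (H, W) full-resolution guidance image with values in [0, 1).
    """
    gh, gw, gd, _ = grid.shape
    H, W = guide.shape
    # Map each pixel to its grid cell: spatial bins plus an intensity bin.
    ys = np.minimum(np.arange(H) * gh // H, gh - 1)
    xs = np.minimum(np.arange(W) * gw // W, gw - 1)
    zs = np.minimum((guide * gd).astype(int), gd - 1)
    a = grid[ys[:, None], xs[None, :], zs, 0]
    b = grid[ys[:, None], xs[None, :], zs, 1]
    return a * guide + b  # per-pixel affine transform

# Identity grid (a=1, b=0) should return the guide image unchanged.
grid = np.zeros((4, 4, 8, 2))
grid[..., 0] = 1.0
guide = np.random.rand(16, 16)
out = slice_bilateral_grid(grid, guide)
```

Because only the coarse grid of coefficients is learned from the downsampled input, the transform stays cheap while the affine application preserves full-resolution edges from the guide.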