Low-Light Image Enhancement Based On Retinex And Transformer

Posted on: 2024-05-16    Degree: Master    Type: Thesis
Country: China    Candidate: J J Yan    Full Text: PDF
GTID: 2568307079466014    Subject: Electronic information
Abstract/Summary:
Images captured under complex lighting conditions such as darkness, occlusion, and nighttime suffer from low brightness, poor contrast, and loss of detail, which severely degrade human visual perception and subsequent high-level vision tasks. Existing low-light image enhancement algorithms still struggle with noise, artifacts, halos, color bias, and over- or under-exposure. This thesis addresses these problems by investigating low-light image enhancement from two perspectives, the traditional Retinex physical model and data-driven methods, with the goal of recovering well-lit, vividly colored, and detail-rich images from a variety of complex low-light scenes.

First, to address excessive halos and the over-enhancement of already well-exposed regions, a coordinated light-enhancement algorithm is proposed based on the traditional Retinex model. The algorithm introduces a light-coordination term G together with a combination of L1- and L2-norm constraints to suppress artifacts and preserve image structure. The model is solved with the alternating direction method of multipliers (ADMM) and the fast Fourier transform (FFT), which greatly improves computational efficiency; a sketch of the general form of such a model is given after this abstract. Extensive experiments show that the method outperforms PIE on the LOL dataset, raising the average Peak Signal-to-Noise Ratio (PSNR) by 0.52 dB and the Structural Similarity Index Measure (SSIM) by 0.03, while running 9.24% faster on the 2K high-resolution VV image dataset.

Second, a data-driven low-light image enhancement algorithm based on Transformer networks is proposed. Unlike convolutional neural networks with limited receptive fields, the Transformer captures global information more effectively, and the Swin Transformer additionally strengthens the capture of local information. The proposed algorithm uses the Transformer's self-attention mechanism to build an enhancement network comprising a shallow feature extraction module, a deep feature extraction module, and a residual module. The shallow feature extraction module is implemented with a convolutional network, while the deep feature extraction module consists of symmetric multi-level encoders and decoders that use the Swin Transformer as the primary structure to achieve multi-scale feature extraction and image enhancement; a structural sketch also follows this abstract. Compared with the second-best methods, the proposed method improves the average PSNR and SSIM by 1.94 dB and 0.05, respectively, on the LOL dataset, and by 5.98 dB and 1.19, respectively, on the MIT-Adobe FiveK dataset.
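The abstract does not give the exact variational formulation, so the following is only a minimal sketch of the general form such a Retinex model could take, assuming the standard decomposition S = I ∘ R (observed image = illumination times reflectance). The weights α, β, γ and the precise role of the light-coordination term G are placeholders for illustration, not the thesis's actual definitions:

\min_{I,\,R}\; \|I \circ R - S\|_F^2 \;+\; \alpha\,\|\nabla I\|_1 \;+\; \beta\,\|\nabla R\|_F^2 \;+\; \gamma\,\|I - G\|_F^2

Here the L1 term encourages piecewise-smooth illumination, the L2 terms preserve reflectance structure and pull the illumination toward the coordination map G. Introducing splitting variables for the gradient terms turns each ADMM subproblem into a quadratic system that can be diagonalized and solved efficiently in the Fourier domain, which is consistent with the ADMM-plus-FFT solver described above.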
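The following is a hedged PyTorch-style sketch of the three-part architecture described above (shallow convolutional features, a symmetric Transformer encoder/decoder for deep features, and a global residual reconstruction). All module names, channel widths, and the simplified TransformerBlock stand-in are assumptions for illustration; the thesis uses shifted-window (Swin) attention rather than the plain global attention shown here.

# Hypothetical sketch, not the thesis implementation.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Stand-in for a Swin Transformer block (the real model uses shifted-window attention)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)            # (B, H*W, C) token sequence
        n = self.norm1(t)
        t = t + self.attn(n, n, n)[0]               # self-attention with residual
        t = t + self.mlp(self.norm2(t))             # feed-forward with residual
        return t.transpose(1, 2).reshape(b, c, h, w)

class EnhanceNet(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.shallow = nn.Conv2d(3, dim, 3, padding=1)            # shallow feature extraction
        self.enc1 = TransformerBlock(dim)                         # encoder level 1
        self.down = nn.Conv2d(dim, dim * 2, 2, stride=2)          # downsample
        self.enc2 = TransformerBlock(dim * 2)                     # encoder level 2 (bottleneck)
        self.up = nn.ConvTranspose2d(dim * 2, dim, 2, stride=2)   # upsample
        self.dec1 = TransformerBlock(dim)                         # decoder level 1
        self.out = nn.Conv2d(dim, 3, 3, padding=1)                # reconstruction

    def forward(self, x):
        f0 = self.shallow(x)
        f1 = self.enc1(f0)
        f2 = self.enc2(self.down(f1))
        f3 = self.dec1(self.up(f2) + f1)            # symmetric skip connection
        return x + self.out(f3)                     # global residual: enhance the input

if __name__ == "__main__":
    y = EnhanceNet()(torch.rand(1, 3, 64, 64))
    print(y.shape)                                  # torch.Size([1, 3, 64, 64])

The global residual connection reflects the residual module described in the abstract: the network predicts a correction added to the low-light input rather than regenerating the image from scratch.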
Keywords/Search Tags:Low-light Image Enhancement, Retinex Theory, Transformer, Swin Transformer