Blind image super-resolution studies how to reconstruct low-resolution images whose degradation is unknown and variable. Owing to factors such as the shooting environment, the capture device, and the processing algorithms, images may suffer from low resolution, blur, or noise, and they can be restored with deep learning-based super-resolution methods. In recent years, such methods have received extensive attention from researchers because of their excellent reconstruction capability. However, existing methods still have two main drawbacks: 1. many non-blind super-resolution methods assume that the low-resolution image is obtained from the high-resolution image by a single, fixed degradation (such as bicubic downsampling); this assumption is too simplistic, and when the actual degradation of the low-resolution image differs from the assumed one, the quality of the reconstructed image drops to varying degrees; 2. existing blind super-resolution methods rely on large, complex network structures with many parameters, which makes them difficult to apply in real-world scenarios with limited computational resources.

To address these shortcomings, this article proposes a lightweight blind super-resolution method and its quantized deployment. Starting from the observation that the lightweight method FMEN cannot reconstruct well input images with complex and variable degradations, we extend it into a method suited to blind super-resolution reconstruction, called FMEBN-GAN. First, we construct a complex degradation space consisting of various blur kernels, interpolation methods, and additive noise, each applied with uncertain probability. Then, we add a degradation estimation module that predicts the degradation information of the input image, and we inject this prior into the feature extraction and reconstruction module through dynamic convolution to guide and constrain the reconstruction process. Finally, we use the improved structure as the generator and Unet-SN as the discriminator to form the FMEBN-GAN generative adversarial network, which is trained with multiple losses to produce more detailed and realistic reconstructions. Experiments show that the improved network remains lightweight and is more effective for blind super-resolution reconstruction.

To better apply FMEBN-GAN in scenarios with limited computing resources, this article uses multiple quantization methods to optimize network inference and explores a quantized deployment scheme on CUDA (Compute Unified Device Architecture) devices: 1. four post-training quantization (PTQ) strategies are used to quantize the trained FMEBN-GAN model; 2. quantization-aware training (QAT) is used to fine-tune the FMEBN and FMEBN-GAN models separately; 3. the models obtained with the different quantization strategies are deployed and tested with TensorRT. The experiments show that quantizing the model with the PTQ strategy that combines cross-layer equalization and operator scheduling achieves lower inference latency while preserving model accuracy as much as possible.
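As a concrete illustration of the degradation space described above, the following is a minimal Python sketch, not the exact pipeline used in this work: it applies a random Gaussian blur, a randomly chosen interpolation method for downscaling, and additive Gaussian noise, each with its own probability; the probability values and parameter ranges are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact pipeline) of a randomized degradation
# space: random Gaussian blur, a randomly chosen downscaling interpolation,
# and additive Gaussian noise, each applied with an assumed probability.
import random
import numpy as np
import cv2

def random_degrade(hr, scale=4, p_blur=0.8, p_noise=0.7):
    """hr: float32 HxWx3 image in [0, 1]; returns a degraded low-resolution image."""
    img = hr.copy()
    if random.random() < p_blur:
        ksize = random.choice([7, 9, 11, 13, 15, 17, 19, 21])
        sigma = random.uniform(0.2, 3.0)
        img = cv2.GaussianBlur(img, (ksize, ksize), sigma)
    interp = random.choice([cv2.INTER_NEAREST, cv2.INTER_LINEAR,
                            cv2.INTER_AREA, cv2.INTER_CUBIC])
    h, w = img.shape[:2]
    img = cv2.resize(img, (w // scale, h // scale), interpolation=interp)
    if random.random() < p_noise:
        sigma_n = random.uniform(1.0, 25.0) / 255.0
        img = img + np.random.normal(0.0, sigma_n, img.shape).astype(np.float32)
    return np.clip(img, 0.0, 1.0)
```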
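The injection of the estimated degradation prior through dynamic convolution can be sketched as follows. This PyTorch module shows one common realization, per-sample depthwise kernels predicted from the degradation embedding, and is an assumption for illustration: the module name, channel count, and embedding size are hypothetical rather than taken from FMEBN-GAN.

```python
# Sketch of degradation-conditioned dynamic convolution: a small MLP maps the
# estimated degradation embedding to per-sample 3x3 depthwise kernels, which
# are applied to the features via grouped convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv(nn.Module):
    def __init__(self, channels=50, deg_dim=128, ksize=3):
        super().__init__()
        self.channels, self.ksize = channels, ksize
        # Predict one depthwise kernel per feature channel from the embedding.
        self.kernel_mlp = nn.Sequential(
            nn.Linear(deg_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, channels * ksize * ksize))

    def forward(self, feat, deg_emb):
        b, c, h, w = feat.shape
        kernels = self.kernel_mlp(deg_emb).view(b * c, 1, self.ksize, self.ksize)
        # Fold the batch into groups so every sample gets its own kernels.
        out = F.conv2d(feat.view(1, b * c, h, w), kernels,
                       padding=self.ksize // 2, groups=b * c)
        return out.view(b, c, h, w)

# Example: 50-channel features modulated by a 128-d degradation embedding.
feat = torch.randn(2, 50, 48, 48)
deg_emb = torch.randn(2, 128)
print(DynamicConv()(feat, deg_emb).shape)  # torch.Size([2, 50, 48, 48])
```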
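For the quantization-aware fine-tuning step, a minimal eager-mode sketch using PyTorch's torch.ao.quantization API is given below; the placeholder backbone, hyperparameters, and dummy data stand in for the actual FMEBN/FMEBN-GAN setup, and the deployment in this work targets TensorRT rather than the fbgemm backend used here.

```python
# Sketch of quantization-aware fine-tuning: fake-quantization observers are
# inserted, the model is briefly fine-tuned, then folded into int8 modules.
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class QATWrapper(nn.Module):
    """Wraps a float network with quant/dequant stubs for eager-mode QAT."""
    def __init__(self, net):
        super().__init__()
        self.quant = tq.QuantStub()      # quantizes the float input
        self.net = net
        self.dequant = tq.DeQuantStub()  # dequantizes the output back to float
    def forward(self, x):
        return self.dequant(self.net(self.quant(x)))

# Placeholder generator standing in for the pretrained FMEBN / FMEBN-GAN weights.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))
model = QATWrapper(backbone).train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
qat_model = tq.prepare_qat(model)        # insert fake-quantization observers

optimizer = torch.optim.Adam(qat_model.parameters(), lr=1e-5)
for _ in range(10):                      # short fine-tuning loop on dummy pairs
    lr_img, hr_img = torch.rand(4, 3, 48, 48), torch.rand(4, 3, 48, 48)
    loss = nn.functional.l1_loss(qat_model(lr_img), hr_img)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

int8_model = tq.convert(qat_model.eval())  # fold fake-quant into real int8 ops
```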