The super-resolution reconstruction of remote sensing images is a long-standing goal in the field of remote sensing and an important research topic in computer vision. Clear reconstruction of marine remote sensing ship images has many civilian applications in search and rescue, energy, fishery, and transportation, and also plays a fundamental role in military reconnaissance and land exploration. The rapid development of deep learning in recent years has produced many new super-resolution reconstruction algorithms with good practical results. However, existing deep learning super-resolution methods still suffer from inaccurate restoration of the image brightness space and from texture distortion or over-sharpening, and they struggle to recover target details in remote sensing images degraded by cloud and fog. To address these shortcomings, this paper proposes a super-resolution reconstruction method that combines a generative adversarial network with a capsule network. To realize super-resolution reconstruction of single-frame remote sensing images, this paper mainly completes the following work:

1) Improving the SRGAN network structure. Targeting difficulties specific to remote sensing images, such as targets blurred by clouds and fog, this paper analyzes the shortcomings of SRGAN, which is oriented toward perceptual quality, on the basis of existing reconstruction algorithms and makes corresponding improvements. In the generator, an enhanced feature-extraction module built from a deep residual-dense network is used (a sketch of such a block follows this abstract); in the discriminator, a capsule network replaces the pure CNN.

2) Proposing a new loss function. Relying on the feature vectors produced by the capsule network, this paper proposes a vector loss that fits the network at the level of multi-dimensional features, with back-propagation driven by comparing the multi-dimensional capsule feature vectors (see the loss sketch below).

3) Using an improved capsule network to further raise performance. This paper studies the principle and framework of the capsule network, replaces the original single-routing process with a dual-routing process, thereby improving and extending the capsule model, and deepens the structure and the vector dimension of the capsule network (a baseline routing sketch is given below).

The proposed method combines SRGAN's perceptually oriented modeling of image realism with the capsule network's representation of spatial features, yielding a more refined expression of image detail. The dual-routing improvement further strengthens the capsule network, giving it more accurate classification while maintaining stable performance and improving its ability to handle complex images. Experimental results show that the PSNR of the proposed method is 0.14 dB higher than that of previous methods, and that the generator produces results with more natural textures that lie closer to the real image distribution. Overall, our method achieves a better visual impression in the restoration of color and texture details.
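For the generator in point 1), the abstract only states that an enhanced feature-extraction module built from a deep residual-dense network is used. The following PyTorch sketch shows one common form of such a module, a residual dense block in the ESRGAN/RDN style; the channel counts, growth rate, and residual scaling factor are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of a residual-dense feature-extraction block (ESRGAN/RDN style).
# Channel counts, growth rate, and residual scaling are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualDenseBlock(nn.Module):
    """Densely connected conv layers with a scaled residual connection."""

    def __init__(self, channels: int = 64, growth: int = 32, scale: float = 0.2):
        super().__init__()
        self.scale = scale
        self.convs = nn.ModuleList()
        for i in range(4):
            # Each layer sees the block input plus all previous layer outputs.
            self.convs.append(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
            )
        self.fuse = nn.Conv2d(channels + 4 * growth, channels, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            features.append(self.act(conv(torch.cat(features, dim=1))))
        out = self.fuse(torch.cat(features, dim=1))
        return x + self.scale * out  # scaled residual keeps training stable


if __name__ == "__main__":
    block = ResidualDenseBlock()
    lr_features = torch.randn(1, 64, 48, 48)  # e.g. features of a low-res patch
    print(block(lr_features).shape)  # torch.Size([1, 64, 48, 48])
```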
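Point 2) describes a vector loss that compares the multi-dimensional capsule feature vectors produced for real and generated images. The exact formulation is not given in the abstract; the sketch below assumes one plausible form, a mean squared distance between squashed capsule output vectors, and the names (`vector_loss`, `real_caps`, `fake_caps`) are hypothetical.

```python
# Hedged sketch of a capsule-based "vector loss": compare the multi-dimensional
# capsule vectors a (hypothetical) capsule discriminator produces for the real
# high-resolution image and for the generated super-resolved image.
import torch
import torch.nn.functional as F


def squash(v: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Standard capsule squash: keeps direction, maps length into [0, 1)."""
    sq_norm = (v * v).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * v / torch.sqrt(sq_norm + eps)


def vector_loss(real_caps: torch.Tensor, fake_caps: torch.Tensor) -> torch.Tensor:
    """One plausible vector loss: mean squared distance between squashed
    capsule vectors of shape (batch, num_capsules, capsule_dim)."""
    return F.mse_loss(squash(fake_caps), squash(real_caps))


if __name__ == "__main__":
    # Stand-in tensors for capsule outputs (batch=2, 10 capsules, 16-D each).
    real_caps = torch.randn(2, 10, 16)
    fake_caps = torch.randn(2, 10, 16, requires_grad=True)
    loss = vector_loss(real_caps, fake_caps)
    loss.backward()  # "reverse training" via back-propagation through the vectors
    print(float(loss))
```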
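Point 3) replaces the single dynamic-routing pass of the original capsule network with a dual-routing process. The details of that dual routing are specific to the paper and are not reproduced here; as a reference point, the sketch below implements the standard single routing-by-agreement of Sabour et al. (2017), the baseline that the dual-routing variant extends. Tensor shapes and the iteration count are illustrative.

```python
# Baseline routing-by-agreement (Sabour et al., 2017) as a reference point for
# the paper's dual-routing variant; shapes and iteration count are illustrative.
import torch
import torch.nn.functional as F


def squash(v: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    sq_norm = (v * v).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * v / torch.sqrt(sq_norm + eps)


def routing_by_agreement(u_hat: torch.Tensor, iterations: int = 3) -> torch.Tensor:
    """u_hat: prediction vectors of shape (batch, in_caps, out_caps, out_dim).
    Returns output capsule vectors of shape (batch, out_caps, out_dim)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
    for _ in range(iterations):
        c = F.softmax(b, dim=2)                      # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)     # weighted sum over input caps
        v = squash(s)                                # output capsule vectors
        # Agreement between predictions and outputs updates the routing logits.
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
    return v


if __name__ == "__main__":
    u_hat = torch.randn(2, 1152, 10, 16)  # (batch, primary caps, output caps, dim)
    print(routing_by_agreement(u_hat).shape)  # torch.Size([2, 10, 16])
```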
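The quantitative comparison is reported in terms of PSNR (a 0.14 dB gain over previous methods). For reference, PSNR between a super-resolved image and its ground truth can be computed as below; the peak value of 1.0 is an assumption about image normalization, not a detail stated in the abstract.

```python
# PSNR metric used for the quantitative comparison; assumes images normalized
# to [0, 1] (max_val would be 255.0 for 8-bit images).
import torch


def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB between a super-resolved image `sr`
    and the ground-truth high-resolution image `hr`."""
    mse = torch.mean((sr - hr) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)


if __name__ == "__main__":
    hr = torch.rand(1, 3, 96, 96)
    sr = (hr + 0.01 * torch.randn_like(hr)).clamp(0.0, 1.0)
    print(f"PSNR: {psnr(sr, hr):.2f} dB")
```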