Iris recognition is a biometric technology of strategic national importance. It has been widely used in access control, time and attendance, mobile phone unlocking, criminal investigation, and other fields closely tied to the national economy and people's livelihood, and it has significant academic research value for national, public, and personal property security. At present, iris recognition systems perform reliably when the user's state and the acquisition equipment are strictly constrained, but they cannot adapt to complex and changeable real-world scenarios. Low-quality iris images collected in less-constrained scenes are susceptible to noise such as specular reflections and occlusion by eyelashes and hair, which significantly degrades system performance.

Iris image segmentation is a key preprocessing step in iris recognition: it delimits the region used for eye feature extraction and isolates the iris content, so it is of great significance to the whole recognition system. In less-constrained scenarios, because less user cooperation is required, the collected iris images may contain considerable noise, making iris segmentation difficult and challenging. This paper therefore studies iris image segmentation algorithms; the main work is as follows:

(1) To address the problem that iris images collected in less-constrained scenes are easily disturbed by noise such as specular reflection and occlusion by eyelashes and hair, which makes it difficult to segment the iris region accurately, a noisy iris image segmentation method combining a Transformer with a symmetric encoder-decoder is proposed. First, the Swin Transformer is used as the encoder: the patch sequence of the input image is fed into hierarchical Transformer modules, and long-range dependencies between pixels are modeled by the self-attention mechanism to strengthen the interaction of contextual information. Second, a Transformer decoder symmetric to the encoder is constructed to decode the extracted high-order contextual features layer by layer; during decoding, skip connections to the encoder perform multi-scale feature fusion to reduce the loss of spatial position information caused by downsampling. Finally, the output of each decoder stage is supervised to improve the quality of feature extraction at different scales. Comparative experiments with other methods on different datasets show that the proposed method achieves better segmentation performance on quantitative evaluation metrics and can effectively improve iris recognition performance, suggesting good application potential.
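The structure described in (1) can be summarized in the minimal sketch below: a hierarchical Transformer encoder, a symmetric decoder with skip connections for multi-scale fusion, and an auxiliary prediction head at every decoder stage for supervision. For brevity, plain PyTorch Transformer blocks stand in for the Swin Transformer, and all class names, channel widths, and depths are illustrative assumptions rather than the exact configuration used in this work.

```python
# Sketch of method (1): hierarchical Transformer encoder, symmetric decoder with
# skip connections (multi-scale fusion), and a supervised head at every decoder stage.
# Plain Transformer blocks stand in for Swin blocks; all sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransformerStage(nn.Module):
    """One hierarchical stage: feature map -> token sequence -> Transformer blocks."""
    def __init__(self, dim, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        tokens = self.blocks(tokens)            # self-attention models long-range deps
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class SymmetricSegNet(nn.Module):
    def __init__(self, dims=(32, 64, 128, 256), n_classes=2):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dims[0], kernel_size=4, stride=4)
        self.enc_stages = nn.ModuleList(TransformerStage(d) for d in dims)
        self.downs = nn.ModuleList(
            nn.Conv2d(dims[i], dims[i + 1], 2, stride=2) for i in range(len(dims) - 1))
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(dims[i + 1], dims[i], 2, stride=2) for i in range(len(dims) - 1))
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * dims[i], dims[i], 1) for i in range(len(dims) - 1))
        self.dec_stages = nn.ModuleList(TransformerStage(d) for d in dims[:-1])
        # one auxiliary head per decoder stage for deep supervision
        self.heads = nn.ModuleList(nn.Conv2d(d, n_classes, 1) for d in dims[:-1])

    def forward(self, x):
        size = x.shape[-2:]
        feats = []
        x = self.patch_embed(x)
        for i, stage in enumerate(self.enc_stages):      # encoder: keep skip features
            x = stage(x)
            feats.append(x)
            if i < len(self.downs):
                x = self.downs[i](x)
        outs = []
        y = feats[-1]
        for i in reversed(range(len(self.dec_stages))):  # symmetric decoder
            y = self.ups[i](y)
            y = self.fuse[i](torch.cat([y, feats[i]], dim=1))  # multi-scale fusion
            y = self.dec_stages[i](y)
            # each stage's prediction is upsampled and supervised against the mask
            outs.append(F.interpolate(self.heads[i](y), size, mode="bilinear",
                                      align_corners=False))
        return outs  # every stage's prediction; outs[-1] comes from the finest stage
```

Under this reading, training would sum a segmentation loss (e.g. cross-entropy against the ground-truth mask) over every element of `outs` to realize the stage-wise supervision, while only the finest-stage output is used at inference time.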
(2) To address the large parameter count and high computational cost of the Transformer itself, a lightweight Transformer-based iris image segmentation method is proposed. First, an architecture combining depthwise separable convolutions with Transformer modules is used as the encoder, integrating the lightweight efficiency of CNNs with the Transformer's self-attention mechanism to model long-range contextual dependencies and a global receptive field. Second, a multi-layer perceptron is used as the decoder: the feature maps of different scales extracted by the encoder are upsampled and gradually restored to the original input resolution, and then concatenated along the channel dimension to fuse feature information from different scales. Finally, the restored feature map is classified pixel by pixel by a convolution operation to obtain the iris image segmentation result. Experimental results show that this method maintains high accuracy while reducing the number of parameters and the amount of computation.
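A compact sketch of the lightweight design in (2) follows: depthwise separable convolutions handle downsampling cheaply, a small Transformer block per stage supplies global self-attention, and an MLP-style decoder (1x1 convolutions, i.e. per-pixel linear layers) projects, upsamples, and concatenates the multi-scale features before per-pixel classification. Stage widths, strides, and depths are assumed values for illustration, not the actual settings of this work.

```python
# Sketch of method (2): depthwise-separable-conv + Transformer encoder, MLP decoder
# that fuses multi-scale features along the channel dimension, 1x1 conv classifier.
# All layer choices and sizes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv + pointwise conv: far fewer parameters than a full convolution."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class ConvTransformerStage(nn.Module):
    """Downsample with a separable conv, then model global context with self-attention."""
    def __init__(self, c_in, c_out, stride, heads=4):
        super().__init__()
        self.conv = DepthwiseSeparableConv(c_in, c_out, stride)
        layer = nn.TransformerEncoderLayer(c_out, heads, c_out * 2, batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):
        x = self.conv(x)
        b, c, h, w = x.shape
        t = self.attn(x.flatten(2).transpose(1, 2))  # tokens: (B, H*W, C)
        return t.transpose(1, 2).reshape(b, c, h, w)


class LightweightIrisSeg(nn.Module):
    def __init__(self, dims=(16, 32, 64, 128), strides=(4, 2, 2, 2), embed=64, n_classes=2):
        super().__init__()
        chans = (3,) + dims
        self.stages = nn.ModuleList(
            ConvTransformerStage(chans[i], chans[i + 1], strides[i]) for i in range(len(dims)))
        # "MLP decoder": a 1x1 conv acts as a per-pixel linear layer that projects
        # each scale to a shared width before fusion
        self.proj = nn.ModuleList(nn.Conv2d(d, embed, 1) for d in dims)
        self.classify = nn.Conv2d(embed * len(dims), n_classes, 1)

    def forward(self, x):
        size = x.shape[-2:]
        feats = []
        for stage in self.stages:                    # multi-scale encoder features
            x = stage(x)
            feats.append(x)
        up = [F.interpolate(p(f), size, mode="bilinear", align_corners=False)
              for p, f in zip(self.proj, feats)]     # restore each scale to input size
        fused = torch.cat(up, dim=1)                 # channel-wise fusion of all scales
        return self.classify(fused)                  # per-pixel class scores
```

The parameter saving in such a design comes mainly from the depthwise separable convolutions, which replace a dense k×k×C_in×C_out kernel with a k×k depthwise kernel plus a 1×1 pointwise projection, while the per-stage self-attention preserves the global view that a pure CNN would lack.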