With the development of deep learning, the acquisition of high-quality data has become a bottleneck. To reduce models' dependence on large-scale data, "few-shot learning", which achieves good performance with only a small number of samples, has attracted widespread attention. This paper explores key techniques for few-shot semantic segmentation in both natural and medical images, analyzes the open problems and research difficulties in current few-shot semantic segmentation, and proposes a series of solutions. The main contributions are as follows:

1. A new lightweight few-shot semantic segmentation network is proposed. Based on the observation that "the similarity among regions within a query image is higher than the feature similarity between the query and support images", a query-set self-iteration module is designed. The module introduces no extra parameters and iteratively updates the segmentation prediction through a similarity measure, so the network achieves good segmentation performance at low complexity.

2. Many prototype-based few-shot segmentation methods rely heavily on the expressive power of the prototypes, which leads to poor generalization and high sensitivity to the data distribution. To address this, a data augmentation strategy based on the Fourier transform is designed. The strategy can be inserted into various network architectures as an independent, "plug-and-play" module, and is therefore highly portable.

3. Few-shot segmentation of medical images is studied, with a focus on the cross-domain generalization of the network architecture and augmentation strategy above. In addition, the problem of "misclassification of latent background classes", caused by the severe "foreground-background imbalance" in medical image datasets, is analyzed, and an adaptive dynamic threshold module is designed to further improve performance.

Ablation and comparison experiments on multiple public datasets demonstrate the effectiveness of the proposed algorithms.
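The query-set self-iteration idea in contribution 1 can be illustrated with a minimal sketch. The function below is a hypothetical, parameter-free refinement loop (not the paper's exact module): it re-estimates foreground/background prototypes from the query's own features under the current mask and re-predicts the mask by cosine similarity, exploiting the high internal similarity of query regions.

```python
import numpy as np

def self_iterate_mask(query_feat, init_mask, steps=3):
    """Parameter-free refinement of a query segmentation mask (illustrative sketch).

    query_feat: (C, H, W) feature map of the query image.
    init_mask:  (H, W) initial foreground probability in [0, 1],
                e.g. obtained from support-set matching.
    """
    c, h, w = query_feat.shape
    feats = query_feat.reshape(c, -1)                       # (C, H*W)
    feats = feats / (np.linalg.norm(feats, axis=0, keepdims=True) + 1e-8)
    mask = init_mask.reshape(-1).astype(float)              # (H*W,)

    for _ in range(steps):
        # Re-estimate prototypes from the query's own features.
        fg = feats @ mask / (mask.sum() + 1e-8)
        bg = feats @ (1.0 - mask) / ((1.0 - mask).sum() + 1e-8)
        fg /= np.linalg.norm(fg) + 1e-8
        bg /= np.linalg.norm(bg) + 1e-8
        # Re-predict the mask by cosine similarity to each prototype.
        sim_fg = feats.T @ fg
        sim_bg = feats.T @ bg
        mask = (sim_fg > sim_bg).astype(float)

    return mask.reshape(h, w)
```

Because the loop uses only similarity computations, it adds no learnable parameters, which is consistent with the lightweight design goal stated above.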
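For contribution 2, one common way to build a Fourier-transform-based augmentation is low-frequency amplitude swapping: mixing the amplitude spectrum of one image with another while keeping the original phase, which perturbs style/appearance but preserves structure. The sketch below shows this technique under the assumption that it resembles the paper's strategy; the function name and the `beta` parameter are illustrative.

```python
import numpy as np

def fourier_amplitude_swap(img_a, img_b, beta=0.05):
    """Replace the low-frequency amplitude of img_a with img_b's (sketch).

    img_a, img_b: float arrays of shape (H, W), single channel.
    beta: fraction of the spectrum (per side) treated as low frequency.
    """
    fft_a = np.fft.fft2(img_a)
    fft_b = np.fft.fft2(img_b)

    amp_a, pha_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)

    # Centre the spectra so low frequencies sit in the middle.
    amp_a = np.fft.fftshift(amp_a)
    amp_b = np.fft.fftshift(amp_b)

    h, w = img_a.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2

    # Swap the central low-frequency block of the amplitude spectrum.
    amp_a[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        amp_b[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]

    amp_a = np.fft.ifftshift(amp_a)

    # Recombine the mixed amplitude with img_a's original phase.
    mixed = amp_a * np.exp(1j * pha_a)
    return np.real(np.fft.ifft2(mixed))
```

Since the operation is a standalone transform on input images, it slots in front of any segmentation network, which is what makes a strategy of this kind "plug-and-play".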