
Research On Sound Generation Based On Underwater Target And Environmental Information Characteristics

Posted on: 2021-09-17    Degree: Master    Type: Thesis
Country: China    Candidate: Q Q He    Full Text: PDF
GTID: 2480306047498774    Subject: Computer Science and Technology
Abstract/Summary:
In recent years, social and economic development around the world has advanced rapidly, and the overall strength of nations, particularly their military strength, has grown accordingly. After years of effort, China's marine undertakings have, on the whole, entered the best stage in their historical development. Underwater target and environmental sound data, as a form of ocean data, play a very important role in marine science research and in national defense science and technology. However, because equipment and technical means are limited, it is difficult to obtain the radiated noise of various underwater targets under diverse environmental conditions for use as experimental training samples. In recent years, remarkable progress has been made in speech feature extraction, and deep-learning-based speech generation technologies, such as online navigation broadcasting, have emerged. However, mature applications of these technologies remain focused on generating human speech, so further applying deep learning to generate underwater target and environmental noise has important research value and significance.

Because underwater noise data sources are complex and varied, and the information in them usually coexists as a mixture of target-radiated noise and environmental noise, preprocessing underwater noise data is difficult. To address this, this thesis proposes a feature dictionary based on an auditory attention mechanism that can distinguish the radiated noise of underwater targets from other noise and establish the correspondence between underwater targets and their features. First, an auditory saliency computation model is used to compute the auditory saliency map of an underwater target; next, the saliency map serves as prior knowledge of auditory attention to guide convolutional feature extraction; finally, a feature dictionary of underwater acoustic signals based on the auditory attention mechanism is generated.

After the feature dictionary of underwater acoustic signals has been built, to address the problem that existing speech generation models cannot fit the characteristics of underwater targets and environments, this thesis proposes a sound generation model based on underwater target and environmental information characteristics. The network consists of three parts: a Seq2Fea-based encoder, a decoder based on dilated causal convolution, and a converter. The sound generation process based on underwater target and environmental information proceeds in the following stages. First, given the input underwater target id and environment id, the encoder looks up the corresponding feature matrices in the feature dictionary, extracts their high-level features, fuses them, and converts the result into a trainable internal vector that is passed to the decoder. The decoder uses dilated causal convolution to predict a mel-spectrogram representation of the features, and this mel representation is fed to the converter network to predict the vocoder's generation parameters. Finally, an acoustic waveform based on the underwater target and environmental information is generated. Through continued research on the theory and techniques of fitting underwater target and environmental noise, further verified by related experiments, this thesis proposes a series of improvements to deep-learning-based sound generation technology and advances noise fitting methods based on the characteristics of underwater target and environmental information.
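The dictionary-building stage described above can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual implementation: the saliency proxy (local energy contrast), the band-pooled feature vector, and all function names are assumptions standing in for the thesis's auditory saliency computation model and convolutional feature extractor.

```python
import numpy as np

def saliency_map(spec):
    """Crude auditory-saliency proxy (hypothetical stand-in): each
    time-frequency bin's energy relative to the local temporal mean."""
    local_mean = np.convolve(spec.mean(axis=0), np.ones(5) / 5, mode="same")
    sal = spec / (local_mean[None, :] + 1e-8)
    return sal / sal.max()

def dictionary_entry(spec, n_feat=8):
    """Saliency-weighted feature vector for one recording: weight the
    spectrogram by its saliency map, then pool frequency bands."""
    weighted = spec * saliency_map(spec)
    bands = np.array_split(weighted, n_feat, axis=0)
    return np.array([b.mean() for b in bands])

def build_feature_dictionary(recordings):
    """Map each target/environment id to the mean feature vector
    of its recordings, yielding the lookup table the encoder queries."""
    grouped = {}
    for entry_id, spec in recordings:
        grouped.setdefault(entry_id, []).append(dictionary_entry(spec))
    return {eid: np.mean(vecs, axis=0) for eid, vecs in grouped.items()}
```

At generation time, the encoder would look up `feature_dict[target_id]` and `feature_dict[environment_id]` and fuse the two vectors before passing them to the decoder.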
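The key property of the decoder's dilated causal convolution is that each output frame depends only on current and past inputs, never future ones, so the mel representation can be predicted frame by frame. A minimal single-channel sketch of that operation (an illustration of the general technique, not the thesis's network):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """1-D causal convolution with dilation: output[t] combines
    x[t], x[t - d], x[t - 2d], ... for kernel taps w.
    Left zero-padding guarantees no leakage from future samples."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])
```

Stacking such layers with doubling dilation (1, 2, 4, 8, ...) grows the receptive field exponentially with depth, which is what lets a causal decoder condition each predicted frame on a long acoustic history.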
Keywords/Search Tags: Deep Learning, Underwater Target/Environment Characteristics, Feature Dictionary, Sound Synthesis Model, Convolutional Neural Network