
Research on Music Generation Based on Generative Adversarial Networks

Posted on: 2020-01-15
Degree: Master
Type: Thesis
Country: China
Candidate: Y Qiu
Full Text: PDF
GTID: 2415330596475117
Subject: Computer Science and Technology

Abstract/Summary:
In recent years, with the development of deep neural networks, and especially the introduction of generative adversarial networks (GANs), the academic community has made significant advances in generating images, videos, and text. Scholars have therefore made similar attempts at generating music. However, music generation differs from image and text generation in several respects:

1. Music is an art of time that unfolds over a duration, so music generation requires an explicit temporal model.
2. A musical piece generally involves a variety of instruments, i.e., multiple tracks. Each track has its own temporal dynamics, yet the tracks are interdependent and closely related over time.
3. For symbolic-domain music, the target output is a sequence of discrete musical events rather than continuous values.

Music generation is therefore more challenging than image and text generation, which partly explains why it has received comparatively little attention in academia. This thesis studies the existing related work and techniques and proposes a music generation method based on generative adversarial networks, enriching research in the field of computer music generation.

The research topic of this thesis is music generation based on generative adversarial networks, and its main content is as follows:

1. First, the thesis studies basic music theory and the representation of music in the computer, in particular the MIDI and piano-roll formats (a piano-roll conversion sketch follows this abstract). After studying generative adversarial networks and their recent derivative model CT-GAN, the thesis proposes MCT-GAN, a music generation model based on the consistency-term (CT) penalty (a sketch of the CT penalty also follows).
2. After studying existing GAN-based music generation models, the thesis proposes a temporal-structure model that maintains musical coherence, avoids manual input, and preserves the interdependence between tracks during generation. It also studies and implements a method for generating discrete, multi-track musical events, including a multi-track interdependence model and discretization processing (see the binarization sketch below).
3. The thesis studies the Lakh MIDI dataset and obtains the LMD-pianoroll dataset, used in the MCT-GAN generation experiments, by preprocessing Lakh MIDI. Finally, the MCT-GAN results are compared with the MuseGAN results, showing that MCT-GAN performs better.
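The abstract does not name the preprocessing tooling used to build LMD-pianoroll. As an illustration only, the following minimal sketch converts a MIDI file into per-track piano-roll matrices with the pretty_midi library; the library choice, file name, and time resolution are assumptions, not details from the thesis.

# Minimal sketch: MIDI -> per-track piano rolls, assuming the pretty_midi library.
# The input file and time resolution (fs) are illustrative, not from the thesis.
import pretty_midi

midi = pretty_midi.PrettyMIDI("example.mid")  # hypothetical input file

fs = 24  # columns per second; the thesis's actual resolution is unknown
rolls = []
for inst in midi.instruments:
    if inst.is_drum:
        continue  # drum tracks carry no pitch semantics in a piano roll
    # get_piano_roll returns a (128, T) velocity matrix: pitch x time step
    rolls.append(inst.get_piano_roll(fs=fs))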
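MCT-GAN builds on the CT penalty of CT-GAN, which runs a dropout-equipped discriminator twice on the same real sample and penalizes disagreement between the two passes. The PyTorch sketch below shows that term under stated assumptions: the toy architecture, the margin m_prime, the 0.1 feature weight, and the batch shapes are illustrative, not the thesis's actual network.

# Sketch of the consistency-term (CT) penalty used by CT-GAN/MCT-GAN.
# Architecture, margin, and shapes are assumptions for illustration.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Toy critic with dropout, returning the last hidden features and the score."""
    def __init__(self, in_dim=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2), nn.Dropout(0.5),
            nn.Linear(256, 256), nn.LeakyReLU(0.2), nn.Dropout(0.5),
        )
        self.head = nn.Linear(256, 1)

    def forward(self, x):
        h = self.body(x)           # second-to-last layer output D_(x)
        return h, self.head(h)     # (features, critic score D(x))

def ct_penalty(D, x_real, m_prime=0.0):
    # Two stochastic forward passes over the same real batch: dropout makes
    # D(x') and D(x'') differ, and the CT term penalizes that inconsistency.
    h1, d1 = D(x_real)
    h2, d2 = D(x_real)
    dist = (d1 - d2).norm(2, dim=1) + 0.1 * (h1 - h2).norm(2, dim=1)
    return torch.clamp(dist - m_prime, min=0.0).mean()

D = Discriminator()
x = torch.randn(16, 128)       # stand-in for a batch of piano-roll slices
loss_ct = ct_penalty(D, x)     # added to the WGAN-GP critic loss in CT-GAN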
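The abstract does not spell out the thesis's discretization procedure. A common post-processing step for GAN-generated piano rolls, also used by MuseGAN, is hard thresholding of the generator's continuous output; the sketch below shows that step, with the threshold value and array shapes as assumptions.

# Sketch: turning a generator's continuous piano-roll output into discrete
# note-on/off events by hard thresholding. The 0.5 threshold is an assumption.
import numpy as np

def binarize(piano_roll, threshold=0.5):
    """Map continuous activations in [0, 1] to binary note-on/off events."""
    return (piano_roll > threshold).astype(np.uint8)

fake = np.random.rand(4, 128, 96)   # (tracks, pitches, time steps), illustrative
binary = binarize(fake)             # discrete multi-track music events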
Keywords/Search Tags: Music Generation, Multi-track, CT-GAN, Generative Adversarial Networks