As people pursue a richer spiritual and cultural life, dance has gradually become a way to enrich everyday life. The beauty and visual expressiveness of dance attract people to use it to keep fit and to express themselves. However, dance is a form of artistic expression whose beauty depends to a large extent on its choreography. Choreography generally calls for professional choreographers, and going from listening to a piece of music to composing an entertaining dance often takes a great deal of effort. Applying artificial-intelligence algorithms to choreography can automatically generate a smooth dance, helping choreographers work faster and more efficiently. The rapid development of deep learning provides technical support for the dance generation task. How to compose a dance that matches the music and conveys its visual effect is the key question in current research.

At present, the dance generation task still suffers from discontinuous dance movements and incomplete video images. To address these problems, this paper proposes a choreography module that uses a bidirectional Long Short-Term Memory (LSTM) network as its main structure to handle movement repetition and overlap, producing dances with continuous movements and a high degree of completion. At the same time, this paper combines music arrangement with choreography, finally generating a video that is innovative in both its music and its dance. The main work of this paper includes the following three aspects:

(1) A dataset of K-pop dance movements is constructed. K-pop dance is a popular dance style, and the data come from videos collected online. Because the dancers' appearance varies greatly across videos, and to keep the original video backgrounds from interfering with training, this paper adopts the OpenPose model to extract the dancers' skeletons from the videos and uses unified human-skeleton images with 18 keypoints as the training data (a per-frame extraction sketch is given at the end of this section). The total length of video in the dataset is one hour and thirty-four minutes, for a total of 112,816 frames.

(2) A method that combines the arrangement and choreography tasks is put forward. The system consists of two parts: an arrangement module and a choreography module. The arrangement module is trained on music data and generates new music from the original musical elements. It is built on an ordinary unidirectional LSTM network that predicts a new piece of audio; during prediction, the pitch and frequency of the audio are specially processed so as to produce more pleasant music. The choreography module produces a dance video with smooth movements. It adopts a bidirectional LSTM network as the main structure of the generation model, extracting features from the input dance sequence and re-integrating them to predict a new dance sequence. This paper also establishes a correspondence between the BPM of the music and the FPS of the dance video, so that changes in the dance movements follow the musical beat more closely (illustrative sketches of the arrangement model, the choreography model, and the beat-to-frame alignment are given at the end of this section).

(3) To make the video frames more complete and rich, this paper also adds a dance scene to the video. The image-matting tool in the PaddleHub library is used to extract the dancer from each frame of the generated video, and the background image is then replaced frame by frame. After the background of every frame has been changed,
a dance video with an added scene is obtained, which is more appealing to watch than the original video with its monotonous background (a matting-and-compositing sketch closes this section).
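The dataset-construction step in aspect (1) can be made concrete with a short sketch of per-frame skeleton extraction. It assumes OpenPose's Python bindings (pyopenpose) with the COCO pose model, which is the 18-keypoint configuration described above; the import path, model folder, and wrapper API all depend on the local OpenPose build, so this is an outline rather than the paper's actual pipeline.

```python
import cv2
# The import path for pyopenpose depends on how OpenPose was built;
# a common pattern is to append the build's python directory to sys.path first.
from openpose import pyopenpose as op

# COCO pose model = 18 keypoints per person, matching the dataset described above.
params = {"model_folder": "models/", "model_pose": "COCO"}
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

def skeleton_frames(video_path):
    """Yield an (18, 3) array of [x, y, confidence] keypoints per video frame."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        datum = op.Datum()
        datum.cvInputData = frame
        wrapper.emplaceAndPop(op.VectorDatum([datum]))
        if datum.poseKeypoints is not None and len(datum.poseKeypoints) > 0:
            yield datum.poseKeypoints[0]  # keypoints of the first detected dancer
    cap.release()
```

Each keypoint array can then be rendered onto a blank canvas to produce the unified skeleton images used for training, which removes the original background entirely.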
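The arrangement module is described only at a high level, so the following PyTorch sketch shows a generic unidirectional-LSTM next-pitch predictor of the kind such a module could be built on. The 128-token pitch vocabulary, layer sizes, context window, and temperature sampling are all illustrative assumptions; in particular, temperature sampling merely stands in for the paper's unspecified pitch and frequency processing.

```python
import torch
import torch.nn as nn

class ArrangerLSTM(nn.Module):
    """Unidirectional LSTM that predicts the next pitch token from a context window."""
    def __init__(self, n_pitches=128, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(n_pitches, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_pitches)

    def forward(self, tokens):                  # tokens: (batch, seq_len) int64
        feats, _ = self.lstm(self.embed(tokens))
        return self.head(feats[:, -1])          # logits for the next pitch

@torch.no_grad()
def generate(model, seed, length=200, temperature=1.0):
    """Autoregressively extend a seed pitch sequence, feeding predictions back in."""
    seq = list(seed)
    for _ in range(length):
        ctx = torch.tensor(seq[-64:]).unsqueeze(0)       # last 64 tokens as context
        probs = torch.softmax(model(ctx) / temperature, -1)
        seq.append(torch.multinomial(probs, 1).item())   # sample the next pitch
    return seq
```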
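The choreography module can be sketched in the same spirit. A bidirectional LSTM reads a window of past skeleton poses, with each frame flattened to 36 values (18 keypoints with x and y coordinates), fuses the forward and backward features, and predicts the pose of the next frame. The layer sizes and window handling here are assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class ChoreographyBiLSTM(nn.Module):
    """Bidirectional LSTM over a window of past poses that predicts the next pose."""
    def __init__(self, n_keypoints=18, hidden_dim=256):
        super().__init__()
        in_dim = n_keypoints * 2                        # (x, y) per keypoint
        self.lstm = nn.LSTM(in_dim, hidden_dim, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, in_dim)   # fuse both directions

    def forward(self, poses):                           # poses: (batch, T, 36)
        feats, _ = self.lstm(poses)
        return self.head(feats[:, -1])                  # next-frame pose: (batch, 36)
```

A full dance is then generated frame by frame: each predicted pose is appended to the context window and the model is queried again, exactly as in the music sketch above.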
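The BPM-to-FPS correspondence reduces to simple arithmetic: at a fixed frame rate, one musical beat spans FPS × 60 / BPM frames, so movement changes can be scheduled on exactly those frame indices. A small helper makes this concrete (the scheduling policy itself is an assumption; the paper only states that a correspondence is established).

```python
def beat_frame_indices(bpm, fps, n_frames):
    """Frame indices that fall on musical beats, given tempo (BPM) and video FPS."""
    frames_per_beat = fps * 60.0 / bpm
    beats, position = [], 0.0
    while position < n_frames:
        beats.append(round(position))
        position += frames_per_beat
    return beats

# Example: at 120 BPM and 30 FPS a beat lands every 15 frames -> [0, 15, 30, ...],
# so movement transitions can be aligned with those frames.
```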
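Finally, the background replacement in aspect (3) can be outlined per frame. This sketch assumes PaddleHub's deeplabv3p_xception65_humanseg module and that its segmentation call returns a 0-255 person matte in result["data"]; the module name and return format vary across PaddleHub versions, so both are assumptions rather than the paper's exact tooling.

```python
import cv2
import numpy as np
import paddlehub as hub

# Assumed human-segmentation module; other PaddleHub matting modules work similarly.
seg = hub.Module(name="deeplabv3p_xception65_humanseg")

def replace_background(frame_bgr, scene_bgr):
    """Matte the dancer out of one frame and composite the dancer onto a new scene."""
    result = seg.segmentation(images=[frame_bgr])[0]
    matte = result["data"].astype(np.float32) / 255.0   # 1.0 where the person is
    matte = matte[:, :, None]                           # broadcast over BGR channels
    scene = cv2.resize(scene_bgr, (frame_bgr.shape[1], frame_bgr.shape[0]))
    return (matte * frame_bgr + (1.0 - matte) * scene).astype(np.uint8)
```

Applying this to every frame and re-encoding the result yields the scene-augmented video described above.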