Artificial intelligence has advanced rapidly in recent years, and computer-generated works of art have gradually attracted widespread attention. Music-to-dance generation is a cross-domain research task that uses machine learning to extract musical features and generate corresponding dance movements. Because dance and music express the same emotional tone, and any two related datasets can to some extent be transformed into each other, we aim to model the process of generating dance from music. This study uses deep learning algorithms and several learning strategies to generate dance from music, ensuring that the resulting movements are realistic, smooth, natural, and well matched to the music. Specifically, the article explores training strategies based on multiple features and on single-style music datasets, and designs deep learning models for the task. Extensive experiments demonstrate the effectiveness of the proposed strategies.

In this study, we consider that both music and dance have multidimensional characteristics that interact and together constitute a mapping relationship between music and dance. To learn this relationship more effectively, we first propose a learning framework based on generative adversarial networks (GANs), which uses a multi-feature fusion strategy to capture the relationship between music and dance. In this framework, we integrate the structural, stylistic, and rhythmic characteristics of music to reflect its essence more comprehensively: structural features reveal the overall organization and form of the music, stylistic features describe the categories and genres to which it belongs, and rhythmic features represent its rhythm and dynamism. Together, these features provide rich information for learning the mapping between music and dance. To implement this learning framework, we adopt the GAN, which has proven able to generate realistic samples in many fields. To ensure that the generated dance is both stylistically consistent and authentic, we introduce two discriminators into the framework: the first imposes a style constraint, ensuring that generated dance movements match the style of the given music; the second imposes an authenticity constraint, ensuring that generated dance movements look natural, smooth, and realistic.

To further improve the smoothness and authenticity of the generated dance, we design a music-to-dance model based on the Transformer architecture, trained on single-style music data using a decomposition-and-reorganization strategy. The core idea of this strategy is to decompose music and dance into basic units, and to achieve the transformation from music to dance by learning the mapping between these basic units. The model uses the Transformer architecture to take full advantage of its strengths in processing sequential data. To give the generated movements a more distinctive style, we also adopt a single-style training strategy: we first divide the music-and-dance dataset into subsets of different styles, and then train an independent Transformer model on each style subset to establish a mapping between music and dance of that specific style.
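To make the multi-feature fusion strategy concrete, the following minimal sketch shows one plausible way to combine structural, stylistic, and rhythmic music embeddings into a single conditioning vector. The module name, the projection-then-concatenation design, and all feature dimensions are illustrative assumptions, not the actual implementation described in this article.

```python
import torch
import torch.nn as nn

class MusicFeatureFusion(nn.Module):
    """Fuses structural, stylistic, and rhythmic music features.

    All dimensions below are hypothetical placeholders; the article
    does not specify its exact feature extractors or sizes.
    """
    def __init__(self, struct_dim=64, style_dim=32, rhythm_dim=32, fused_dim=128):
        super().__init__()
        # Project each feature stream into a shared space, then fuse.
        self.struct_proj = nn.Linear(struct_dim, fused_dim)
        self.style_proj = nn.Linear(style_dim, fused_dim)
        self.rhythm_proj = nn.Linear(rhythm_dim, fused_dim)
        self.fuse = nn.Sequential(
            nn.Linear(3 * fused_dim, fused_dim),
            nn.ReLU(),
        )

    def forward(self, struct_feat, style_feat, rhythm_feat):
        parts = [
            self.struct_proj(struct_feat),
            self.style_proj(style_feat),
            self.rhythm_proj(rhythm_feat),
        ]
        return self.fuse(torch.cat(parts, dim=-1))

# Example: fuse per-frame features for a batch of 8 sequences, 100 frames each.
fusion = MusicFeatureFusion()
z = fusion(torch.randn(8, 100, 64), torch.randn(8, 100, 32), torch.randn(8, 100, 32))
print(z.shape)  # torch.Size([8, 100, 128])
```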
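The two-discriminator arrangement can likewise be sketched as a generator trained against an authenticity discriminator (real vs. generated motion) and a style discriminator (does the motion match the music's style class?). The network sizes, the per-frame formulation, and the equal loss weighting are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

MUSIC_DIM, POSE_DIM, NUM_STYLES = 128, 72, 4  # illustrative sizes

# Generator: fused music features -> dance poses (per frame, for simplicity).
generator = nn.Sequential(
    nn.Linear(MUSIC_DIM, 256), nn.ReLU(), nn.Linear(256, POSE_DIM))

# Discriminator 1: authenticity constraint (real vs. generated motion).
d_real = nn.Sequential(
    nn.Linear(POSE_DIM, 256), nn.ReLU(), nn.Linear(256, 1))

# Discriminator 2: style constraint (classifies the style of the motion).
d_style = nn.Sequential(
    nn.Linear(POSE_DIM, 256), nn.ReLU(), nn.Linear(256, NUM_STYLES))

bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def generator_loss(music_feat, style_label):
    """Generator tries to fool d_real and match the music's style."""
    fake_pose = generator(music_feat)
    adv_logit = d_real(fake_pose)
    style_logit = d_style(fake_pose)
    adv_loss = bce(adv_logit, torch.ones_like(adv_logit))
    style_loss = ce(style_logit, style_label)
    return adv_loss + style_loss  # equal weighting is an assumption

# Example forward/backward pass on dummy data (batch of 8 frames).
loss = generator_loss(torch.randn(8, MUSIC_DIM), torch.randint(0, NUM_STYLES, (8,)))
loss.backward()
```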
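Finally, the single-style training strategy can be illustrated as follows: partition the paired data by style label, then fit one independent Transformer per subset. The toy dataset, the frame-level regression objective, and all hyperparameters are hypothetical; a real model following the decomposition-and-reorganization strategy would also segment music and dance into basic units and decode dance units autoregressively.

```python
import torch
import torch.nn as nn
from collections import defaultdict

MUSIC_DIM, POSE_DIM = 128, 72  # illustrative sizes

def make_model():
    """A minimal music-to-dance Transformer: encode the music sequence,
    then map each encoded frame to a pose vector."""
    encoder_layer = nn.TransformerEncoderLayer(
        d_model=MUSIC_DIM, nhead=8, batch_first=True)
    return nn.Sequential(
        nn.TransformerEncoder(encoder_layer, num_layers=4),
        nn.Linear(MUSIC_DIM, POSE_DIM))

# Hypothetical dataset: (style, music_seq, dance_seq) triples.
dataset = [("waltz", torch.randn(100, MUSIC_DIM), torch.randn(100, POSE_DIM)),
           ("hiphop", torch.randn(100, MUSIC_DIM), torch.randn(100, POSE_DIM))]

# Step 1: divide the data into single-style subsets.
subsets = defaultdict(list)
for style, music, dance in dataset:
    subsets[style].append((music, dance))

# Step 2: train one independent Transformer per style subset.
models = {}
for style, pairs in subsets.items():
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for music, dance in pairs:
        pred = model(music.unsqueeze(0))  # (1, frames, POSE_DIM)
        loss = nn.functional.mse_loss(pred, dance.unsqueeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()
    models[style] = model
```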