The electroencephalogram (EEG) results from the collective activity of millions of brain nerve cells, and brainwaves carry a wealth of information about the nervous system. Music is generated by the activity of the human brain and has an important influence on both body and mind. The relations between brain and music, music perception, and the mechanism by which music influences the brain are hot topics in neuroscience and psychology. Since brainwaves and music signals share similar properties, it is feasible to translate EEG into music for analysis and investigation. This paper proposes two methods for this translation so that the resulting music expresses the state of the brain; EEG recorded in different sleep states is used with both methods. In addition, transforming EEG into music offers a new, auditory way to monitor the EEG signal, in contrast to the usual visual displays.

The first method is a direct one. We set up mapping rules from EEG properties to music elements according to Fechner's law and the scaling properties identified in both EEG and pleasurable music: amplitude to pitch, period to duration, and average power to volume. The results demonstrate that the music of different sleep states has distinct characteristics in pitch, duration and volume. Music generated from intense mental activity, whose EEG has high frequency and low amplitude, shows low pitch and short duration, while music from weak mental activity has high pitch and long duration. Because the mapping is invertible, the music can be translated back to the original EEG waveform, so the method also provides a meaningful musical coding and record of EEG signals.

The second method is an indirect one. Time-frequency information is extracted from the EEG data by wavelet analysis.
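The direct method's mapping rules can be sketched as follows. This is a minimal illustration, not the paper's implementation: the amplitude, power and MIDI ranges below are hypothetical constants, since the abstract does not give them; only the structure (logarithmic Fechner scaling of amplitude to pitch and power to volume, with period mapped directly to duration) follows the text.

```python
import numpy as np

# Hypothetical ranges; the paper does not state its exact constants.
AMP_MIN, AMP_MAX = 2.0, 200.0   # EEG amplitude range (microvolts)
POW_MIN, POW_MAX = 1.0, 1e4     # average power range (microvolts^2)

def fechner_scale(x, x_min, x_max):
    """Fechner's law: perceived intensity grows with log(stimulus).
    Normalizes the log-scaled stimulus into [0, 1]."""
    x = np.clip(x, x_min, x_max)
    return (np.log(x) - np.log(x_min)) / (np.log(x_max) - np.log(x_min))

def eeg_event_to_note(amplitude, period, avg_power):
    """Map one EEG wave event to (MIDI pitch, duration in s, MIDI velocity).

    amplitude -> pitch    (log-scaled into MIDI notes 36..95)
    period    -> duration (longer EEG period -> longer note, here 1:1)
    avg_power -> volume   (log-scaled into MIDI velocities 40..127)
    """
    pitch = int(round(36 + 59 * fechner_scale(amplitude, AMP_MIN, AMP_MAX)))
    duration = float(period)
    velocity = int(round(40 + 87 * fechner_scale(avg_power, POW_MIN, POW_MAX)))
    return pitch, duration, velocity
```

Under this sketch a high-amplitude, slow delta wave (deep sleep) yields a high, long note, and a low-amplitude, fast beta wave (active wakefulness) a low, short one, matching the abstract's description; because each mapping is monotonic and logarithmic, it can be inverted to recover the EEG properties from the note.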
Following automatic-composition practice in current computer music, this information is then used to define the period, the pitch range and the rhythm pattern, from which chords and a melody are generated on the basis of melodic development theory. The results demonstrate that the music pieces from different sleep states differ in pitch range, rhythm pattern and melody, and are more musical than those generated by the first method. Music generated from intense mental activity has a dense rhythm and a high pitch range, sounding vivacious and richly varied, while music from weak mental activity has a low pitch range and a slow rhythm, expressing a deep and comfortable feeling.
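The indirect method's front end can be sketched as follows. The Haar basis, the number of decomposition levels, and the tempo and pitch-range constants are all stand-ins, since the abstract specifies neither the wavelet nor the mapping; the sketch only illustrates how per-band wavelet energies could drive rhythm density and register.

```python
import numpy as np

def haar_dwt_energies(signal, levels=4):
    """Detail-band energies of a Haar discrete wavelet transform.
    Level 1 is the highest-frequency band; deeper levels are slower."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if len(x) < 2:
            break
        x = x[: len(x) // 2 * 2]            # even length for pairing
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        energies.append(float(np.sum(detail ** 2)))
        x = approx
    return energies

def music_parameters(signal):
    """Choose a tempo and pitch range from where the wavelet energy sits.
    Constants are illustrative, not the paper's."""
    e = np.array(haar_dwt_energies(signal))
    dominant = int(np.argmax(e))            # 0 = fastest band dominates
    frac_fast = e[0] / e.sum()              # share of high-frequency energy
    tempo_bpm = 60 + 80 * frac_fast         # active EEG -> denser rhythm
    pitch_low = 72 - 6 * dominant           # slow-wave EEG -> lower register
    return tempo_bpm, (pitch_low, pitch_low + 24)
```

Applied to a fast, low-amplitude signal the energy concentrates in the first detail band, giving a faster tempo and a higher pitch range than a slow oscillation, consistent with the contrast the abstract reports between active and weak mental states.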