How the human brain integrates information from different sensory modalities, or from sub-modalities within a single sensory modality, to form a coherent percept is known as multimodal integration. This process was traditionally regarded as a bottom-up process independent of top-down control. Over the past decade, however, it has been shown that selective attention can influence multimodal integration in complex ways, and that their interaction can occur as early as the processing stages in the sensory cortices.

Here we employed the steady-state evoked potential (SSEP) technique to study multimodal attention. Unlike the widely used event-related potential technique, SSEPs provide a continuous measurement of brain activity over the corresponding sensory cortices. More importantly, the separability of the amplitude and phase responses of SSEPs enables us to study the neural mechanisms of multimodal attention in detail.

This thesis is divided into two parts: a neuroscience part and a neuroengineering part. In the neuroscience part, we focus on three aspects: 1) the neural mechanisms of crossmodal modulation of sensory cortices; 2) the neural mechanisms of crossmodal spatial links; and 3) how the brain distributes attentional resources across modalities. The results showed that: the amplitude and phase responses of steady-state visual evoked potentials (SSVEPs) were modulated by transiently presented auditory events, but with different time courses; spatial attention in the auditory modality spread to the visual modality, leading to enhanced SSVEPs at the spatially attended location; and when attention was shifted between modalities, the SSEPs from vision, audition, and touch were all modulated.
We concluded that: 1) sensory cortices are not unisensory but multisensory, since they respond to inputs from other sensory modalities; 2) the modulation effects of multimodal attention are expressed in both the amplitude and phase responses of the recorded EEG; and 3) both supramodal and modality-specific attentional resources exist and contribute to information processing at the level of the sensory cortices.

In the neuroengineering part, we developed two types of brain-computer interface (BCI) systems based on our findings from the neuroscience part. Both systems operate on the modulation of SSEPs by multimodal attention: one is based on attention shifts between the tactile and visual modalities, and the other on non-spatial attention to combinations of different sub-modalities within the visual modality. More importantly, these two BCI systems are categorized as ‘independent BCIs’, since they utilize only the top-down attention mechanism and do not rely in any way on the brain’s normal output pathways. They are therefore potentially useful for severely motor-disabled patients.
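As an illustration of the amplitude/phase separability that the SSEP analyses above rely on, the complex Fourier coefficient of the EEG at a known stimulation (tagging) frequency carries both quantities at once. The sketch below is not taken from the thesis; the sampling rate, tagging frequency, and simulated signal parameters are all assumed for demonstration.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not from the thesis):
# recover the amplitude and phase of a steady-state response at a
# known tagging frequency via the DFT.
fs = 250.0      # assumed sampling rate (Hz)
f_tag = 15.0    # assumed visual flicker (tagging) frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)   # one 2 s epoch

# Simulated single-channel EEG: a 15 Hz SSVEP with known amplitude
# and phase, plus additive Gaussian noise.
true_amp, true_phase = 2.0, 0.7
rng = np.random.default_rng(0)
eeg = true_amp * np.cos(2 * np.pi * f_tag * t + true_phase)
eeg += 0.1 * rng.standard_normal(t.size)

# DFT at the tagging frequency: the magnitude of the (scaled) complex
# coefficient is the response amplitude, its angle the response phase.
spectrum = np.fft.rfft(eeg)
freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
k = np.argmin(np.abs(freqs - f_tag))      # bin nearest the tag
coeff = 2.0 * spectrum[k] / eeg.size      # scale to signal units

amplitude = np.abs(coeff)
phase = np.angle(coeff)
```

Because the epoch length is an integer number of stimulation cycles, the tagging frequency falls exactly on a DFT bin, so `amplitude` and `phase` closely recover the simulated values; in practice the two measures can then be tracked separately over time, which is what allows amplitude and phase modulations of attention to be dissociated.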