
The bi-level input processing model of first and second language perception

Posted on: 2011-09-24
Degree: Ph.D
Type: Dissertation
University: University of Victoria (Canada)
Candidate: Grenon, Izabelle
Full Text: PDF
GTID: 1445390002450158
Subject: Education
Abstract/Summary:
The focus of the current work is the articulation of a model of speech sound perception, which is informed by neurological processing and which accounts for psycholinguistic behavior related to the perception of linguistic units such as features, allophones and phonemes. The Bi-Level Input Processing (BLIP) model, as the name suggests, proposes two levels of speech processing: the neural mapping level and the phonological level. The model posits that perception of speech sounds corresponds to the processing of a limited number of acoustic components by neural maps tuned to these components, where each neural map corresponds to a contrastive speech category along the relevant acoustic dimension in the listener's native language. These maps are in turn associated with abstract features at the phonological level, and the combination of multiple maps can represent a segment (or phoneme), mora or syllable. To evaluate how listeners process multiple acoustic cues when categorizing speech contrasts, it is useful to distinguish between different types of processing. Three types of processing are identified and described in this work: additive, connective and competitive.

The way speech categories are processed neurologically in one's L1 may affect the perception and acquisition of non-native speech contrasts later in life. Accordingly, five predictions about the perception of non-native contrasts by mature listeners are derived from the proposals of the BLIP model. These predictions are exemplified and supported by means of four behavioral perception experiments. Experiments I and II evaluate the use of spectral information (changes in F1 and F2) and vowel duration for identification of an English vowel contrast ('beat' vs. 'bit') by native North American English, Japanese and Canadian French speakers. Experiments III and IV evaluate the use of vowel duration and periodicity for identification of an English voicing contrast ('bit' vs. 'bid') by the same speakers. Results of these experiments demonstrate that the BLIP model correctly predicts sources of difficulty for L2 learners in perceiving non-native sounds, and that, in many cases, L2 learners are able to capitalize on their sensitivity to acoustic cues used in their L1 to perceive novel (L2) contrasts, even when those contrasts are neutralized at the phonological level in the L1. Hence, the BLIP model has implications not only for the study of L1 development and cross-linguistic comparisons, but also for a better understanding of L2 perception. Implications of this novel approach to L2 research for language education are briefly discussed.
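To make the two processing levels concrete, the following is a minimal, hypothetical Python sketch, not taken from the dissertation: each NeuralMap stands for a map tuned to one native contrastive category along a single acoustic dimension, and a toy phonological level combines map activations additively to categorize the 'beat' vs. 'bit' vowel contrast examined in Experiments I and II. The tuning curves, cue values, and the additive combination rule are illustrative assumptions only.

# Toy illustration of a bi-level (neural map -> phonological) architecture.
# All numbers and functions are invented for demonstration purposes.
from dataclasses import dataclass

@dataclass
class NeuralMap:
    """One map per contrastive native category along a single acoustic dimension."""
    dimension: str   # e.g. "spectral" (F1-like) or "duration"
    category: str    # native category the map is tuned to, e.g. "tense"
    center: float    # preferred cue value (illustrative units)
    width: float     # tuning width

    def activation(self, cue_value: float) -> float:
        # Simple triangular tuning curve; purely illustrative.
        return max(0.0, 1.0 - abs(cue_value - self.center) / self.width)

# Illustrative English maps for the 'beat' vs. 'bit' vowel contrast,
# using a spectral cue and a duration cue (made-up values).
maps = [
    NeuralMap("spectral", "tense", center=300.0, width=150.0),  # /i/-like
    NeuralMap("spectral", "lax",   center=450.0, width=150.0),  # /I/-like
    NeuralMap("duration", "long",  center=120.0, width=60.0),
    NeuralMap("duration", "short", center=70.0,  width=60.0),
]

def categorize(cues: dict[str, float]) -> str:
    """Phonological level: combine map activations (additive processing)
    to choose between the 'beat' and 'bit' vowel categories."""
    beat_score = sum(m.activation(cues[m.dimension])
                     for m in maps if m.category in ("tense", "long"))
    bit_score = sum(m.activation(cues[m.dimension])
                    for m in maps if m.category in ("lax", "short"))
    return "beat-vowel /i/" if beat_score > bit_score else "bit-vowel /I/"

if __name__ == "__main__":
    # A token with a low F1-like value and long duration patterns with 'beat'.
    print(categorize({"spectral": 310.0, "duration": 115.0}))

In this sketch, swapping the additive sum for a winner-take-all comparison between maps on the same dimension would correspond to competitive rather than additive processing; the three processing types named in the abstract differ only in how map activations are combined at the phonological level.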
Keywords/Search Tags: Perception, Model, Processing, Language, Speech, Level