There is growing interest in the factors that affect language learners' test performance. Some of this interest is motivated by a desire to detect and eliminate test features that distort the tester's attempts to assess learners' language proficiency accurately. A number of researchers distinguish between test features that are irrelevant to the ability being measured and those that are relevant, and it is important to discover which features constitute significant sources of true variance in learners' performance. One feature that has been shown to affect learners' performance on tests of spoken interaction is gender, yet to date the role of gender in speaking tests has received limited attention in language testing research. In oral English testing, it is possible that both interviewing and rating are highly gendered processes. Especially in tests where the interviewer also acts as the rater, this raises the question of whether a gender effect, if one exists, stems from the interview itself, from the rating decision, or from a combination of these two "events". This study examines the issue of gender on two levels: first, its impact on the discourse of the interview and, second, its effect on the rating process. The data consisted of the audio-taped performances of 8 female and 8 male test-takers, each interviewed on two occasions, once by a female interviewer and once by a male interviewer. The interviews were transcribed and analysed in relation to previously identified features of gendered language use, namely overlaps, interruptions and minimal responses. The scores subsequently assigned by 4 raters (2 male and 2 female) to each of the 32 interviews were also examined in relation to the gender of both raters and test-takers using multi-faceted Rasch bias analyses. Any conclusions drawn from these data must remain tentative given the small dataset; moreover, while no single study will definitively establish the link between gender and oral English testing, each one takes us a step closer to a better understanding. The analysis of the interviews revealed some differences between female and male interviewers and candidates, but these did not form a consistent gender pattern. Furthermore, the analysis of test scores provided no evidence of significant bias in the rating process in relation to the gender of raters or candidates. Both sets of findings therefore suggest that gender does not have a significant impact on oral English testing. The limitations of the study are discussed and suggestions for further research are offered.