The evaluation of oral proficiency has been written into language education policy in numerous contexts and institutions, for example when student teachers must demonstrate oral proficiency before admission to training programs, or when applicants are assessed before admission to fellowships for study abroad.
In many states the ACTFL Oral Proficiency Interview (OPI) is a requirement for admission to WLE programs. Faculty are usually trained in their respective languages. The OPI is probably the best instrument we currently have for such assessment. Yet its reliability is questionable, and it does not reproduce the context of a genuine conversation (Young, 2001; Johnson, 2001). Students are well informed about the OPI, which is required before student teaching. Some fail their OPI but are then able to leap two levels in two weeks, from Intermediate Mid to Advanced Low, which is normally impossible. Inquiries into such cases may reveal that an evaluator had explained the test structure and how to handle the test situation: the students had been 'taught to test' (Sacks, 2000). Other students may reach advanced proficiency levels during an OPI conversation, yet are unable to sustain that level when interacting in front of a class while teaching the target language.
Thus the test conditions do not correspond to the specific contexts in which proficiency will be needed in teacher education or in the profession. Proficiency is evaluated in the comfortable setting of a private conversation in which students can choose their topics, not in the highly stressful context of a classroom with constant disruption, subtle decision making, an imposed topic that may change every 5-7 minutes, and multiple frontal group interactions. Since optimal decision making seems to require mental use of the mother tongue (Lantolf & Thorne, 2006), student teachers find it very difficult to progressively master this type of professional situation.
While the policy of requiring a specific language level or threshold can be sound, its decontextualized implementation may not always make sense. Overall, it makes the OPI an expensive instrument, costly both in evaluators' training and in students' evaluations, and one that is off-track vis-à-vis the real needs of the field. It is an omnipresent requirement without much professional meaning. This raises a broader issue: policies are often conceived or created out of context, and their implementation may create new problems. Multiple forms of evaluation by a variety of stakeholders are necessary for an in-depth account of professional skills.