Why narrowing the speaking assessment focus can have a positive washback effect on L2 learning


Every assessment we carry out in an MFL classroom ought to have a positive washback effect on learning. In this post I argue that, with pre-intermediate to intermediate students, the way MFL learners are typically assessed does not impact learning as much as it ought to, because it fails to consider the complex nature of oral skill acquisition and the cognitive demands it places on learners.

What I mean is that learners are most often assessed using complex multi-trait or holistic scales designed to rate their performance across a number of dimensions of proficiency, such as fluency, intelligibility, grammatical accuracy, pronunciation, complexity, range of vocabulary, ability to comprehend and respond to an interlocutor, etc. However, with novice to intermediate learners this approach can have a negative washback effect on learning because of the huge cognitive demands it places on them.

This is because, as I have often reiterated in my posts, at this level of proficiency foreign language learners struggle to cope effectively with all of the demands posed by oral production in real operating conditions. Hence, by assessing them with multi-trait or holistic scales that rate all of the above components of oral proficiency simultaneously, we are being hugely unfair to them, as we are not taking into account how finite their cognitive resources are.

Although there is indeed a place for a more multi-dimensional type of assessment in high-stakes tests (e.g. end-of-unit tests), when it comes to the all-important low-stakes tests we should administer throughout the learning cycle, I advocate a different approach, one that takes into account developmental factors in the acquisition of cognitive control: a type of assessment which focuses on one, or at most two, traits at a time. For instance, at one key stage in the unfolding of a unit of work one would focus on assessing fluency and intelligibility of output; at another, on range of vocabulary and pronunciation; and so on. Obviously, students should be informed at all times which trait will constitute the focus of the forthcoming assessment; this channels their cognitive resources in one or two directions, pre-empting the risk of their chasing too many rabbits at once and ending up catching none.

This approach, which I have been using for years, not only focuses learners on one aspect of cognitive control over oral production at a time – with an obvious positive washback effect on learning – but also addresses another important pitfall of oral performance assessment carried out with complex multi-trait scales: the cognitive overload such scales cause for raters. Unless raters record the students’ oral performances and listen to them over and over again after the test – which rarely happens with low-stakes assessments – complex assessment scales are very likely to cause divided attention, as it is extremely challenging to attend to a speaker’s output and simultaneously evaluate it across all of the traits and criteria.

In this sense, low-stakes speaking tests assessed using a narrow-focus approach kill two birds with one stone: on the one hand, they optimize the use of the student’s cognitive resources; on the other, they facilitate the rater’s task. I will add another advantage I have experienced whilst using this approach, one related to my own professional development: by focusing on a different aspect of oral proficiency for each low-stakes assessment, one gains a higher level of awareness of the variables affecting its development than one normally would when focusing on several oral proficiency components at the same time.

It goes without saying that in high-stakes assessments the use of multi-trait or holistic rubrics is more useful, as we do want a more comprehensive view of how our students are faring across all the major components of oral proficiency. However, I do feel that many of the holistic and analytical scales adopted by MFL teachers with novice to intermediate learners share a common shortcoming, which has a detrimental washback effect on learning: they do not lay enough emphasis on fluency and on the ability to effectively comprehend and respond to an interlocutor. In their quest for comprehensiveness and for a ‘one size fits all’ solution, they fail to consider that each level of proficiency has different developmental features and should therefore be approached differently in terms of assessment. The more novice the learner, the more skewed towards fluency the scale should be, with the emphasis on accuracy and complexity gradually increasing as learners progress further along the language acquisition continuum.