Crucial issues in the assessment of speaking and writing (Part 1)


In the last few weeks I have been thinking long and hard about the assessment of the productive skills (speaking and writing), dissatisfied as I am with the proficiency measurement schemes currently in use in many UK schools, which are either stuck in the former system (National Curriculum Levels) or strongly influenced by it (i.e. mainly tense-driven).

However, finding a more effective, valid and cost-effective alternative for use in British secondary schools is no easy task. The biggest obstacles I have encountered in the process relate to the following questions, which have been buzzing in my head for the last few weeks and the answers to which are crucial to the development of any effective approach to the assessment of proficiency in the productive skills.

  1. What is meant by ‘fluency’ and how do we measure it?
  2. How do we measure accuracy?
  3. What do we mean by ‘complexity’ of language? How can complexity be measured?
  4. How do we assess vocabulary richness and/or range? How wide should L2 learners’ vocabulary range be at different proficiency stages?
  5. What does it mean to ‘acquire’ a specific grammar structure or lexical item?
  6. When can one say that a specific vocabulary and grammar item has been fully acquired?
  7. What linguistic competences should teachers prioritize in the assessment of learner proficiency? Or should they all be weighted in the same way?
  8. What task-types should be used to assess learners’ speaking and writing proficiency?
  9. How often should we assess speaking and writing?
  10. Should we assess autonomous learning strategy use? If so, how?

All of the above questions refer to constructs commonly used in the multi-trait scales usually adopted by researchers, language education providers and examination boards to assess L2 performance and proficiency. In this post, for reasons of space, I will only concern myself with the first three questions, reserving the rest for future posts. The issues they refer to are usually acronymized by scholars as CAF (Complexity, Accuracy, Fluency), but I find the acronym FAC (Fluency, Accuracy, Complexity) much more memorable… Thus I will deviate from mainstream Applied Linguistics on this account.

  2. The issues

2.1 What do we mean by ‘fluency’ in speaking and writing? And how do we measure it?

2.1.1 Speaking

Fluency has been defined as ‘the production of language in real time without undue pausing or hesitation’ (Ellis and Barkhuizen 2005: 139) or, in the words of Lennon (1990), as ‘an impression on the listeners that the psycholinguistic process of speech planning and speech production are functioning easily and automatically’. Although many, including teachers, use the term ‘fluency’ as synonymous with oral proficiency, researchers see it more as a temporal phenomenon (e.g. how effortlessly and ‘fast’ language is produced). In L2 research, fluency is considered a different construct from comprehensibility, although from a teacher’s point of view it is obviously desirable that fluent speech be intelligible.

The complexity of the concept of ‘fluency’ stems mainly from its being a multidimensional construct. Fluency is in fact conceptualized as:

  1. Break-down fluency – which relates to how often speakers pause;
  2. Repair fluency – which relates to how often speakers repeat words and self-correct;
  3. Speed fluency – which refers to the rate of speaker delivery.

Researchers have come up with various measures of fluency. The most commonly adopted are:

  1. Speech rate: total number of syllables divided by total time taken to execute the oral task in hand;
  2. Mean length of run: average number of syllables produced in utterances between short pauses;
  3. Phonation/time ratio: time spent speaking divided by the total time taken to execute the oral task;
  4. Articulation rate (rate of sound production): total number of syllables divided by the time taken to produce them;
  5. Average length of pauses.
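To make these definitions concrete, here is a minimal Python sketch of how the five measures above could be computed from a transcribed oral task. The data format (syllable counts per uninterrupted run of speech, plus pause durations) and all the names are my own illustrative assumptions, not a standard from the fluency literature.

```python
# Hypothetical sketch: computing the five fluency measures listed above.
# 'runs' holds the syllable count of each uninterrupted stretch of speech;
# 'pauses' holds the pause durations in seconds. All figures are invented.

def fluency_measures(runs, pauses, total_task_time):
    total_syllables = sum(runs)
    pause_time = sum(pauses)
    speaking_time = total_task_time - pause_time
    return {
        # 1. Speech rate: syllables over the whole task time
        "speech_rate": total_syllables / total_task_time,
        # 2. Mean length of run: average syllables between pauses
        "mean_length_of_run": total_syllables / len(runs),
        # 3. Phonation/time ratio: proportion of time spent speaking
        "phonation_time_ratio": speaking_time / total_task_time,
        # 4. Articulation rate: syllables over speaking time only
        "articulation_rate": total_syllables / speaking_time,
        # 5. Average length of pauses
        "mean_pause_length": pause_time / len(pauses) if pauses else 0.0,
    }

# e.g. three runs of speech separated by two pauses in a 10-second task
print(fluency_measures([12, 8, 10], [1.5, 2.0], 10.0))
```

Note how speech rate and articulation rate diverge as pausing increases: the more a speaker pauses, the further the first falls below the second.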

A seminal study by Towell et al. (1996) investigated university students of French. The subjects were tested at three points in time: (1) at the beginning of their first year; (2) in their second year; and (3) after returning from their year abroad (in France). The researchers found that improvements in fluency occurred mainly in terms of speaking rate and mean length of run, the latter being the best indicator of development in fluency. Improvements in fluency were also evidenced by an increase in the rate of sound production (articulation rate), but not in a major way. In their investigation, Towell et al. found that assessing fluency based on pauses is not always a valid procedure, because a learner might pause for any of the following reasons:

  1. The demands posed by a specific task;
  2. Difficulty in knowing what to say;
  3. An individual’s personal characteristics;
  4. Difficulty in putting into words an idea already in the brain;
  5. Getting the right balance between length of utterance and the linguistic structure of the utterance.

Hence, the practice of rating students’ fluency based on pauses may not be as valid as many teachers often assume. As Lambert puts it: “although speed and pausing measures might provide an indication of automaticity and efficiency in the speech production process with respect to specific forms, their fluctuation is subject to too many variables to reflect development directly.”

2.1.2 Writing

When it comes to writing, fluency is much more difficult to define. As Bruton and Kirby (1987) observe,

Written fluency is not easily explained, apparently, even when researchers rely on simple, traditional measures such as composing rate. Yet, when any of these researchers referred to the term fluency, they did so as though the term were already widely understood and not in need of any further explication.

In reviewing the existing literature I was amazed by how much disagreement there is amongst researchers on how to assess writing fluency, which raises the question: if it is such a subjective construct, on whose definition nobody agrees, how can the raters appointed by examination boards be relied on to do an objective job?

There are several approaches to assessing writing fluency. The most commonly used in research is composition rate, i.e. how many words are written per minute. So, for instance, in order to assess the development of fluency a teacher may give his/her class a prompt, stop them after a few minutes and ask the students, after giving guidelines on how to carry out the word count, to count the words in their output. This can be done at different moments in time, within a given unit of work or throughout the academic year, in order to map out the development of writing fluency.
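The arithmetic of composition rate is trivial; the value lies in tracking it over time. This Python sketch uses invented word counts for a single student across a term, purely by way of illustration.

```python
# Hedged sketch: composition rate (words per minute) as a measure of
# writing fluency. The word counts below are invented for illustration.

def composition_rate(word_count, minutes):
    """Words written per minute on a timed prompt."""
    return word_count / minutes

# One student's word counts on a series of 5-minute timed prompts
timed_prompts = [42, 55, 61, 70]
rates = [composition_rate(words, 5) for words in timed_prompts]
print(rates)  # an upward trend maps out the development of fluency
```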

Initial implications

Oral fluency is a hugely important dimension of proficiency as it assesses the extent to which speaking skills have been automatized. A highly fluent learner is one who can speak spontaneously and effortlessly, with hardly any hesitation, backtracking and self-correcting.

Assessing fluency, as I have just discussed, is very problematic, as there is no international consensus on what constitutes best practice. The Common European Framework of Reference for Languages, which is adopted by many academic and professional institutions around the world, provides some useful – but not flawless – guidelines. MFL departments could adapt them to suit their learning context, mindful of the main points put across in the previous paragraphs.

The most important implications for teachers are:

  1. Although we do not have to be as rigorous and pedantic as researchers, we may want, in assessing our students’ fluency, to be mindful of the finding (confirmed by several studies) that more fluent speakers produce longer utterances between short pauses (mean length of run);
  2. However, we should also be mindful of Towell et al.’s (1996) finding that there may be individuals who pause because of issues not related to fluency but rather to anxiety, working memory limitations or other personal traits. It is important in this respect to get to know our students and make sure that we have repeated oral interactions with them so as to get better acquainted with their modus operandi during oral tasks;
  3. In the absence of international consensus on how fluency should be measured, MFL departments may want to decide whether and to what extent frequency of self-repair, pauses and speed should be used in the assessment of their learners’ fluency;
  4. If the GCSE or A level examination adopted by their school does include degrees of fluency as an evaluative criterion – as Edexcel for instance does – then it is imperative for teachers to ask which operationalization of fluency is applied in the evaluation of candidates’ output, so as to train students accordingly in preparation for the oral and written exams;
  5. Although comprehensibility is a separate construct from fluency in research, teachers will want their students to speak and write at a speed as close as possible to native speakers’ but also to produce intelligible language. Hence, assessment criteria should combine both constructs.
  6. Mini-assessments of writing fluency of the kind outlined above (the teacher giving a prompt and the students having to write under timed conditions) should be conducted regularly, two or three times a term, to map out students’ progress whilst training them to produce language in real operating conditions. If this kind of assessment starts at KS3 or even KS2 (with able groups and ‘easier’ topics), it may have a positive washback effect on learner performance in the GCSE and A-level examinations.


  3. Accuracy

Intuitively, accuracy would seem the easiest way to assess language proficiency, but this is not necessarily so. Two common approaches to measuring accuracy involve: (1) calculating the ratio of errors in a text/discourse to the number of units of production (e.g. words, clauses, sentences, T-units) or (2) working out the proportion of error-free units of production. This is not without problems, because it does not tell us much about the type of errors made, which may be crucial in determining the proficiency development of a learner. Imagine Learner 1, who has made ten errors with very advanced structures, and Learner 2, who has made ten errors with very basic structures without attempting any of the advanced structures Learner 1 has made mistakes with. To evaluate these two learners’ levels of accuracy as equivalent would be unfair.
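The two measures just described can be sketched in a few lines of Python. The clause-level error coding, which is the genuinely difficult and subjective part, is assumed to have been done by hand; all names and figures are invented for illustration.

```python
# Sketch of the two common accuracy measures, assuming each clause in a
# learner text has already been coded for the number of errors it contains.

def accuracy_measures(errors_per_clause):
    """errors_per_clause: list with the number of errors in each clause."""
    n_clauses = len(errors_per_clause)
    total_errors = sum(errors_per_clause)
    error_free = sum(1 for e in errors_per_clause if e == 0)
    return {
        # (1) ratio of errors to units of production
        "errors_per_clause": total_errors / n_clauses,
        # (2) proportion of error-free units of production
        "error_free_ratio": error_free / n_clauses,
    }

# Ten errors spread over twenty clauses: Learner 1 (advanced structures)
# and Learner 2 (basic structures) would score identically here --
# the measures say nothing about the type of error made.
print(accuracy_measures([1, 0, 2, 0, 1, 0, 3, 0, 2, 1] + [0] * 10))
```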

Moreover, this system may penalize learners who take a lot of risks in their output with highly challenging structures. So, for instance, an advanced student who tries out a lot of difficult structures (e.g. if-clauses, subjunctives or complex verbal subordination) may score lower than someone of equivalent proficiency who ‘plays it safe’ and avoids taking risks. Would that be a fair way of assessing task performance/proficiency? Also, pedagogically speaking, this approach would be counter-productive, encouraging avoidance behavior rather than risk-taking, possibly the most powerful learning strategy ever.

Some scholars propose that errors should be graded in terms of gravity. So, errors that impede comprehension should be considered as more serious than errors which do not. But in terms of accuracy, errors are errors, regardless of their nature. We are dealing with two different constructs here, comprehensibility of output and accuracy of output.

Another problem with using accuracy as a measure of proficiency development is that learner output is compared with native-like norms. However, this does not tell us much about the learner’s interlanguage development; only with what degree of accuracy s/he handles specific language items.

Lambert (2014) reports another important issue, pointed out by Bard et al. (1996):

In making grammaticality judgments, raters do not only respond to the grammaticality of sentences, but to other factors which include the estimated frequency with which the structure has been heard, the degree to which an utterance conforms to a prescriptive norm, and the degree to which the structure makes sense to the rater semantically or pragmatically. Such acceptability factors are difficult to separate from grammaticality even for experienced raters.

I am not ashamed to say that I have experienced this myself on several occasions as a rater of GCSE Italian oral exams. And to this day, I find it difficult not to let these three sources of bias skew my judgment.

3.1 Initial implications for teachers and assessment

Grammatical, lexical, phonological and orthographic accuracy are important aspects of proficiency included in all the examination assessment scales. MFL departments ought to decide collegially whether accuracy should play an equally important role in assessment as fluency/intelligibility and communication, or a more or less important one.

Also, once it has been decided what constitutes more complex and easier structures amongst those the curriculum purports to teach for productive use, teachers may want to focus in assessment mostly or solely on the accuracy of those structures – as this may have a positive washback effect on learning.

MFL teams may also want to discuss to what extent one should assess accuracy in terms of the number of mistakes, the types of mistakes, or both; and whether mistakes with normally late-acquired, more complex structures should be penalized, considering that such an assessment approach might encourage avoidance behavior.

  4. Complexity

Complexity is the most difficult construct to define and use to assess proficiency because it can refer to different aspects of performance and communication (e.g. lexical, interactional, grammatical, syntactic). For instance, are lexical and syntactic complexity two different aspects of the same performance or two different areas altogether? Some researchers (e.g. Skehan) think they are separate, and I tend to agree. So, how should a student’s oral or written performance exhibiting a complex use of vocabulary but a not-so-complex use of grammar structures or syntax be rated? Should evaluative scales then include two complexity traits, one for vocabulary and one for grammar/syntax? I think so.

Another problem pertains to what we take ‘complex’ to actually mean. Does complex mean…

  • ‘the number of criteria to be applied in order to arrive at the correct form’, as Hulstijn and De Graaff (1994) posit? In other words, how many steps the application of the underlying rule involves? (e.g. the perfect tense in French or Italian with verbs requiring the auxiliary ‘to be’)
  • variety? Meaning that, in the presence of various alternatives, choosing the appropriate one flexibly and accurately across different contexts would be an index of high proficiency? (this is especially the case with lexis)
  • cognitively demanding, challenging? Or
  • acquired late in the acquisition process? (which is not always easy to determine)

All of the above dimensions of complexity pose serious challenges in their conceptualization and objective application to proficiency measurement.

Standard ways of operationalizing language complexity in L2 research have also focused on syntactic complexity, and especially on verbal subordination. In other words, researchers have analyzed L2 learner output by dividing the total number of finite and non-finite clauses by sentential units of analysis such as terminal units (T-units), communication units, etc. One of the problems with this is that the number thus obtained merely tells us that one learner has used more verbal subordination than another; it does not differentiate between types of subordination – so, if a learner uses less but more complex subordination than another, s/he will still be rated as using less complex language.
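A minimal sketch of this kind of measure, with invented counts, shows the problem: the ratio ranks learners against each other without saying anything about what kind of subordination they used.

```python
# Illustrative sketch of the standard subordination measure: clauses per
# T-unit. The counts would normally come from hand-coded transcripts;
# the figures here are invented.

def clauses_per_t_unit(n_clauses, n_t_units):
    return n_clauses / n_t_units

# Learner A: 30 clauses over 20 T-units; Learner B: 24 over 20.
# A is rated 'more complex' (1.5 vs 1.2) even if B's fewer subordinate
# clauses were structurally harder -- the bare ratio cannot tell.
print(clauses_per_t_unit(30, 20), clauses_per_t_unit(24, 20))
```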

4.1 Implications for teachers

Complexity is a very desirable quality of learner output and a marker of progress in proficiency, especially when it goes hand in hand with high levels of fluency. However, in the absence of consensus as to what is complex and what is not, MFL departments may want to decide collegially which of the criteria suggested above (e.g. variety, cognitive challenge, number of steps required to arrive at the correct form and lateness of acquisition) they find most suitable for their learning contexts and their curricular goals and constraints.

Also, they may want to consider splitting this construct into two strands, vocabulary complexity and grammatical complexity.

Finally, verbal subordination should be considered a marker of complexity and emphasized with our learners. However, especially with more advanced learners (e.g. AS and A2), it may be useful to agree on what constitutes more advanced and less advanced subordination.

In addition, since complexity of language does appear as an evaluative criterion in A-level examination assessment scales, teachers may want to query with the examination boards what complexity stands for and demand a detailed list of which grammar structures are considered as more or less complex.


Fluency, Accuracy and Complexity are very important constructs, central to all approaches to the assessment of the two productive macro-skills, speaking and writing. In the absence of international consensus on how to define and measure them, MFL departments must come together and discuss assessment philosophies, procedures and strategies to ensure that learner proficiency evaluation is as fair and valid as possible and matches the learning context they operate in. In taking such decisions, the washback effect on learning has to be considered.

Having only dealt with three of the ten issues outlined at the beginning of this post, the picture is far from complete. What is clear is that there are no clear norms as yet, unless one decides to adopt in toto an existing assessment framework such as the CEFR’s. This means that MFL departments have the opportunity to make their own norms based on an informed understanding – to which I hope this post has contributed – of the FAC constructs and of the other crucial dimensions of L2 performance and proficiency assessment that I will deal with in future posts.


A little experiment with Padlet. Of teacher myths and learning reality


I often read on the Internet, or hear from ‘techy’ teachers, claims about the presumed learning benefits of various apps. More often than not, when I investigate those claims with my students they are cut down to size. The truth is that the way adult cognition – especially teachers’ – interacts with a learning tool often differs from the way an adolescent’s does.

A few months back, one such claim was made in my presence. The context: an ICT training session in which one of the participants criticized Google Classroom on the grounds that, unlike Padlet, when a student works on an assignment the other pupils cannot see what their peers are writing, and consequently the opportunities for learning are drastically reduced.

Reflecting on that claim on my way home, I thought about myself, an avid and highly motivated learner of 8 languages: would I, whilst doing a Padlet task, actually actively look at my classmates’ output so as to learn from it? My answer was: possibly not. Maybe a fancy idiom might attract my attention and I would ask the teacher about it, but nothing more than that. But maybe students would; so I decided to put my colleague’s claim to the test.

The next day, immediately after creating a Padlet wall to which all of them had contributed, I asked students from four of my Spanish classes (2 year 8 and 2 year 9 groups) to note down in their books or on their iPads three language items from their classmates’ writing which they found interesting and/or useful. Two weeks later I asked them to recall the items they had noted down. Guess what? Only two of the 68 students involved in this experiment actually remembered something (one item each). Not surprising: even when a student does notice something in another learner’s output, they must do something with it for some degree of deep processing to occur and retention to ensue.

I also asked my students how often they actually look at what their classmates write and ‘steal’ useful language items from them. The vast majority of them replied ‘very rarely’ or ‘never’. Yet another mismatch between teacher presumptions and foreign language learner cognition.

The lesson to be learnt: not much is known as yet about the way adolescent foreign language learners interact cognitively and affectively with many of the apps currently used in MFL classrooms. ‘Experiments’ like mine should be carried out as often as possible so that teachers are not misguided by Internet myths. Padlet can be a useful app, no doubt; but assumptions about what learners do with it should be rooted in evidence, not simply wishful thinking.

Five useful things many MFL teachers don’t do


  1. Teach word structural analysis

As I argued in my previous post ‘Why foreign language teachers may want to rethink their approach to reading instruction’, effective reading comprehension results from the successful use of Top-down and Bottom-up processing. In that post I did not provide an exhaustive list of all the bottom-up comprehension strategies L2 teachers can model to their students, but I did underscore the importance of enhancing learner word recognition skills.

One powerful strategy that can be taught to our students in order to enhance their chances of comprehending unfamiliar words is Structural Analysis, which is not often taught in the typical mainstream UK classroom and has literally ‘saved my life’ in many situations.

This consists in training students to analyze the morphology of the unfamiliar words they encounter in the written input they are exposed to by dividing them into parts, i.e. separating the root word from its prefixes and suffixes. Instruction in Structural Analysis will include teaching

  1. what the most common prefixes in the target language mean (see: for fairly comprehensive lists of prefix meanings in English);
  2. what the most common suffixes are and mean (see: for a very exhaustive list of suffixes and prefixes in French); and,
  3. with the help of web-based resources, the most useful root words and how to use them in order to infer the meaning of unknown lexis (e.g. ).

Learner training in Structural analysis may be carried out using the following framework:

Step 1 – Rationale for the training (i.e. to enhance students’ ability to comprehend unknown words)

Step 2 – Raising awareness of what prefixes, suffixes and root words are;

Step 3 – Modelling strategy use (e.g. through think-aloud: the teacher shows examples of how, by applying one’s knowledge of a prefix/suffix/root word as well as the context, one can correctly infer the meaning of a word);

Step 4 – Extensive word-meaning-inference practice with written (or even spoken) texts, preferably in the context of the unit-of-work topics. Also, the target words should not occur in isolation but in short texts;

Step 5 – Recycling the same strategy in three or four subsequent lessons in the context of 10-15 minute word-meaning-inference activities relevant to the topic in hand.

This kind of activity develops important compensation strategies, which are essential life-long language learning skills; it involves a high degree of creativity, thereby tapping into higher-order thinking skills. It may also have the very positive effect of enhancing our learners’ self-efficacy as readers, as they will feel equipped with a powerful new tool for comprehending TL texts without having to resort to the dictionary. This effect will only come about, however, if the teacher plans the activities carefully, providing lots of initial ‘small wins’.

2. Train learners in oral compensation strategies

If the main aim of our language teaching is to develop our learners’ ability to cope with the linguistic demands of target language interaction, it should be imperative for us to train them regularly and systematically (not with the one-off tip or session) in the effective use of compensation (communication) strategies – an important life-long survival skill. These strategies refer to the ways in which an individual creatively makes up for the expressive limitations of their target language competence (e.g. lack of vocabulary). Here are four useful compensation strategies to teach MFL learners:

  • Coinage – i.e. the individual makes up a word that does not exist using a related lexical item in their repertoire. For instance: a learner of French does not know the French verb ‘Nourrir’ (=to feed) but knows the noun ‘Nourriture’ (=‘Food’). He then makes up the word ‘Nourriturer’, which is wrong, but conveys the meaning;
  • Approximation – i.e. to go back to the above example, instead of saying ‘Nourrir’ the learner says ‘Donner de la nourriture’ (to give food), which is not exactly the best translation, but it is very close in meaning, conveys the idea and is acceptable French;
  • Paraphrasing – i.e. the students do not know a given word so they explain it, a bit like a dictionary definition would do;
  • Simplification – i.e. instead of using the same complex sentence structure he/she would use in his/her native language, the learner simplifies it so as to be able to convey the basic meaning.

The same framework outlined in the previous paragraph can be applied here, except that step 4 would include productive rather than receptive practice.

Compensation strategies are a very important skill and, just like any other skill, they must be practised extensively and regularly in order to be learnt. Some teachers may frown upon this type of instruction on the grounds that it may lead to sloppy L2 output or pidginization. Nonetheless, these strategies are powerful learning tools: for example, when you produce a wrong but intelligible word through coinage, a native or expert TL interlocutor will understand you and usually provide you with the correct L2 form. This lexical transaction is more likely to lead to retention than looking up the same word in the dictionary.

3. Explain how the brain works

Foreign language learners, especially the more committed and metacognitive ones, do, in my experience, enjoy knowing more about how their brains work when they learn the target language. This does not mean that one should spend a whole lesson talking about neuroscience… However, showing them the diagram below, which maps out how the brain retains vocabulary over time, and involving them in devising a personalised schedule based on that diagram would not take too much of your lesson time, would not require scientific jargon and would foster metacognition (thinking about and planning one’s learning).

With some of my groups I also venture into brief and concise explanations of (a) how Working Memory operates, in an attempt to hammer home the importance of concentration, good pronunciation and associative strategies; (b) how forgetting is often cue-dependent; (c) why performance errors occur. Whenever one teaches about the above, it is important to keep it as brief and visual as possible, not to over-intellectualize, and to relate the discussion to your students’ learning whilst showing them clearly why and how the knowledge you are imparting to them will benefit them in the short and long term.

4. Implement attitude-changing programs

In every class there are students who are less engaged, disaffected or even hostile. After a few pep-talks, disciplinary measures and referrals to pastoral middle-managers, things may get a bit better, but there is rarely much change. This is often because these scenarios are not handled through a structured, principled, well thought-out attitude-change program.

It is beyond the scope of this post to delve into the ins and outs of implementing attitude-change programs; here it will suffice to point out that, for any attitudinal change to work, it must start by identifying the issues at the root of the negative attitude one wants to change, vis-à-vis the following metacomponents of attitude identified by Zimbardo and Leippe (1991):

  • Behaviours (what students do in their daily approach to languages)
  • Intentions (what their short-/medium-/long-term objectives are with the language)
  • Cognitions (their beliefs about languages)
  • Affective responses (how much they enjoy language learning)

The picture below illustrates where the most positive and committed of our students usually are in terms of these four attitudinal components:

The principles I outlined in my post on motivation (“Eight motivational theories and their implications for the classroom”) can then be applied to address the four areas one by one. For instance, one may want to start by inducing a level of cognitive dissonance to address learner beliefs (cognitions); by identifying what students excel at and enjoy so as to cater for their preferences in lessons and enhance their sense of fulfillment and enjoyment; to set short-term manageable goals in order to increase their sense of self-efficacy; etc.

Of course, most teachers are busy and overworked and may object that they do not have the time to do all this. My main point here is that, if one does wish to turn around a disaffected student or group of students, one should at least first try to ascertain what the actual roots of the problem are, through a structured and deep inquiry process based on the above framework.

5. Ask older language students to observe you

We often have colleagues or senior teachers observe us. A recent trend in some UK schools is also to have students observe us, using checklists on which they tick or cross things they see us do or not do. A strategy I have used in the past, and which has paid huge dividends, is to ask older A2 language students (around 18 years old) to observe one or more of my lessons and give me feedback on one or more specific areas of my teaching in which a younger person’s perspective may be more useful than an adult’s, e.g. motivation, empathy towards students, levels of pupil engagement, etc. What I find useful about having an older language student observe me is that students of this age are more in sync with adolescent-learner mentality and affective responses than my colleagues, whilst still being fairly mature and cognizant of what language learning entails; moreover, being former students of mine, these individuals are usually able to relate the observed students’ experiences to their own when they were taught by me, and provide even more useful feedback as a result.

The best lesson planning ever…really? – Of the perils of educational sensationalism

As I was ‘snooping around’ the Internet I bumped into a title that attracted my attention: “9 ways to plan transformational lessons – Planning the best curriculum unit ever” (on the Edutopia site). At first I thought the author may have been a tad arrogant, but when I read that the article had been shared 4.9K times on various social media I said to myself “Let’s read this”. And I did. After reading it I felt immediately compelled to write this post, as I got worried: were the language teachers amongst the 4.9K people who shared that article, and/or the people they shared it with, going to do what the article advises in the belief that they would, by so doing, plan the best lessons ever?

The following are the points in the article that I had the most issues with:

  1. A… ‘transformational’ lesson??!!

This was the first – but not the greatest – of my worries. Transparent learning; Visible learning; Dynamic learning; Collaborative learning; Positive education; and now… Transformational learning? For real? Another label thrown at teachers by educational consultants wanting to create a brand? Yet another trendy category that, if you do not belong to it, leaves you feeling out of date and left out? EVERY SINGLE LESSON in which a student learns something you taught them is a transformational lesson, as you add new cognition to their brains – even if you do not do any fancy stuff, any backward design, or pedantically follow pseudo-scientific learning rubrics.

  2. For the best lessons ever, shift from solo to collaborative design… really?

Whilst I do enjoy working collaboratively with my colleagues, especially on long-term planning (e.g. preparing a unit of work), I am definitely at odds with the claim that in order to plan the best lessons ever I need to plan them with another teacher. When I plan my lessons I have every single student of mine in mind, along with the vocabulary and grammar structures they learnt with me in our last lesson and the mistakes and problem areas I want to address in their output; I know what strategies and learning activities keep them focused and motivated; my colleagues would not know that. Also, not all of my colleagues share the same theory of or approach to language learning as I do, and they often use or sequence the very same activities differently.

  3. Create the assessment before developing content… the Wiggins and McTighe curse

There is nothing less educational in language teaching than the adoption of Wiggins and McTighe’s backward design approach to MFL lesson planning – whilst I can see its merits in the planning of longer units of work. There is nothing more straitjacketing than teaching a language lesson with the assessment in mind. Whereas I do agree that one must have aspirational learning outcomes to work towards in a lesson or sequence of lessons, to set these in stone and let them drive the way we teach language is not only unethical but counter-productive in terms of language acquisition – unless, that is, we are willing to sacrifice sound cognitive and psycholinguistic development in the name of a trendy curriculum design principle, to teach robotically to our schemes of work, and to refuse to adapt our plans when we feel that our students need more practice.

A good L2-classroom practitioner teaches adaptively and creatively, and should not teach with a specific test in mind; learning a language structure or function is a process which may need more time than planned. A good teacher must be prepared to change their plans if need be. Teaching is not simply about planning and assessing.

  4. Don’t forget the introverts…

Quoting work by Susan Cain, the author points out that introverts enjoy working autonomously and that a lot of current teaching involves group work. Is the implicit recommendation that the best lesson ever should not involve introverts in group work? If so, I do not agree; whilst there has to be a balance of individual and collaborative work in most lessons, I deliberately encourage students who do not enjoy working collaboratively to do team work, so as to push them out of their comfort zone and help them learn a major lifelong skill; also, learning a language is about communicating with others!

Then the author goes on to say that increasing wait time to seven seconds – when asking a question in front of the class – will play to the strengths of introverts. Really? I give any of my students, regardless of their personality, as long as they need, unless they say ‘pass’… I would be interested to know why seven seconds is the ideal wait time, as for some of my students it would not be enough, and I would rather wait a bit longer than dent their self-esteem.

  5. Integrate productive struggle in the curriculum

This is a direct quote from the article:

Don’t lower the expectations of your next lesson plan. Instead, scaffold instruction and check to see that you are challenging students appropriately with Hess’ Cognitive Rigor Matrix.

In other words, in order to plan progression and challenge into the best lesson ever, teachers ought to refer to Hess’ Cognitive Rigor Matrix, one of the many Bloom taxonomy surrogates… Speechless!

In conclusion, I apologize to the author of the article for sounding a bit too harsh. The article does contain some sound recommendations (e.g. points 7 and 8); however, my harshness was elicited by the sensationalist title, which clearly alludes to outstanding – the best ever – practice. When one is as passionate as I am about teaching and learning, one can only find it unacceptable and even ‘dangerous’ for the pedagogic recommendations made in this article to be divulged as ‘the best ever’ practice, especially when they are endorsed by an authoritative educational website such as Edutopia, followed by a vast number of teachers from all over the world.

What message does the article, endorsed by Edutopia, send to novice teachers about what best teaching practice entails? A fixed wait time of seven seconds? Planning challenge in language learning based on Hess’ Matrix (please read my article on the Bloom taxonomy and you will understand my ‘fury’)? Teaching a lesson based on a set-in-stone piece of assessment? Or that the best lessons must be planned collaboratively? Whilst many of these recommendations can be useful, they should not be presented as evangelical truths, the pre-requisites for best-ever practice.

Edutopia should know better and put ethics before sensationalism.


The top 10 foreign language learning research areas MFL teachers are most interested in

When I started writing my blog I did not expect to get many readers. I started it much in the same spirit as one starts a journal: as a way to reflect on my practice and on my beliefs about language learning. When, however, people from all over the globe started to contact me asking to know more about specific areas of foreign language acquisition and pedagogy, and “Six useless things foreign language teachers do” got more than 9,000 readers in one night, it finally hit me: there are keen and reflective classroom practitioners out there who do want to know more about L2 acquisition theory, neuroscience and L2 research than what they were taught on their teacher training courses or during a few hours of CPD by educational consultants.

In actual fact, the main reason I decided to embark on a Master’s course in Applied Linguistics after two years of UK comprehensive-school teaching was my deep dissatisfaction with the way I was taught to teach on my teacher training course (PGCE). My PGCE tutors were great, do not get me wrong. However, on my teacher training I was basically shown a few quick tricks of the trade – mostly through videos or demo lessons with the ‘perfect’ classes – equipped with a few ‘lesson templates’ and sent off to my teaching practice schools.

No need to tell you what happened there; you have all been through that nerve-racking baptism of fire; you have all come across the really motivated and inspirational teachers who tried their best to support you, and the demotivated and disgruntled ones who tried to discourage you. The outcome: I learnt more tricks of the trade, but gained not a shred of understanding of how the human brain acquires languages, not a hint of what research says sound MFL pedagogy looks like; I left my last teaching practice placement with glimpses and intuitions of what might work and what might not, totally unsupported by a theory of learning or research evidence.

Things did not improve much in my first school as an NQT. More trial and error; more tricks of the trade; more useless training sessions; still no sound pedagogic framework, no theory rooted in neuroscience that I could refer to. I had to use quite a lot of my savings to finally get that framework and that research-based knowledge from some of the greatest names in EFL pedagogy and research at the University of Reading’s Centre for Applied Language Studies (CALS). Costly, but worth every penny!

Yet many of the colleagues I have worked with in 25 years of MFL teaching firmly believed that a good MFL teacher does not need to know about theory or research. I remember how one of them used to refer to the applied linguistics handbooks I read during my frees as ‘silly books’. An understandable attitude, considering how inaccessible many researchers have made their often very useful findings to classroom teachers, with their convoluted jargon and obscure statistics.

A fairly recent survey carried out by my former PhD supervisor, Professor Macaro of Oxford University, however, found that although only 3% of the teachers surveyed found research accessible, 80% of them were actually interested in what research has to say about language acquisition and pedagogy. The following are the top ten areas of research Professor Macaro’s informants identified as most useful. Please note that the sample was not huge – only about 100 respondents – and that the people who filled in the questionnaire were Heads of Department, i.e. very experienced teachers. The ranking is based on the mean score each topic area received (1 being very useful and 4 being not at all useful):

  • Vocabulary acquisition – 74% of the teachers found this topic very useful
  • How the grammar rules of the language are best learnt – 73% of the teachers found this topic very useful
  • Motivation – 68% of the teachers found this topic very useful
  • How learners make progress with language learning – 58% of the teachers found this topic very useful
  • Differences amongst learners (e.g. age, gender) – 53% of the teachers found this topic very useful
  • Speaking – 51% of the teachers found this topic very useful
  • How the brain stores and retrieves language – 58% of the teachers found this topic very useful
  • KS4 (lower intermediate) research – 37% of the teachers found this topic very useful
  • Writing – 39% of the teachers found this topic very useful
  • KS3 (beginner) research – 29% of the teachers found this topic very useful

What is interesting is the absence from the above list of two topics that I have written about and that seem to have been very popular amongst my readers: listening and reading research. In fact, only 25% of Professor Macaro’s informants were interested in listening and 27% in reading.

What areas of research are you most interested in? I would love to have your input!