Why Changing Teaching Practice in MFL Is Hard – And What We Can Do About It

Introduction

In schools across the UK, curriculum leads and heads of department often find themselves asking:

“Why is it so hard to get teachers to change how they teach languages—even when the evidence points clearly to better alternatives?”

It’s a question that cuts to the heart of language education reform. But the difficulty isn’t rooted in unwillingness or apathy. The real reasons are far more complex—bound up in workload, identity, habits, inspection culture, and the invisible weight of a system that too often values compliance over curiosity (see Ball, 2003; Priestley et al., 2015).

When we look more closely at language teachers’ lived reality, it becomes clear that most resistance isn’t about not wanting to improve—it’s about the conditions that make meaningful change feel impractical, unsafe, or even demoralising (Fullan, 2007).

1. Language teachers are overwhelmed

In MFL departments, the workload is intense and often undervalued. Teachers juggle planning across multiple year groups and exam tiers, manage classes with wildly mixed prior attainment, and teach students who may arrive with minimal motivation or literacy support. Add to that the constant churn of curriculum reform, pressure to use the target language, and the simultaneous teaching of multiple skills, and you have a role already stretched to breaking point.

In this environment, change—even positive, research-informed change—can feel like another burden. As Hargreaves (1994) argues, reform efforts often fail not because teachers are change-resistant, but because reforms ignore the real pressures of day-to-day teaching.

2. The emotional cost of change

MFL teachers often build their practice through years of trial and error. Many rely on routines—translation starters, textbook sequences, grammar gap-fills—because these help manage behaviour and keep lessons running in challenging conditions. Asking teachers to swap these for sentence builders, interactive listening games, retrieval tasks or more adventurous information-gap activities can feel like asking them to relinquish strategies that once secured order.

This emotional dimension of teaching is well documented. Kelchtermans (2005) notes that pedagogical change is not just a technical shift but a disruption to a teacher’s professional identity. What’s at stake isn’t just a method—it’s a sense of competence, control, and personal history.

3. Fear of failure in a live, complex classroom

Languages are live performance subjects. Teachers must model speech, correct pronunciation, manage spontaneous responses, and keep energy levels high. If a new method—say, shifting from grammar instruction to communicative modelling—fails, the lesson can collapse.

This fear is compounded by MFL’s marginalised status in many schools. As noted by Pachler & Redondo (2014), the subject often lacks cultural capital among students and leadership. In such a fragile ecology, teachers feel they can’t risk a flop. As Lortie (1975) famously observed, teaching is inherently conservative because the cost of experimentation is high.

4. Initiative fatigue and disillusionment

Language departments have lived through it all: compulsory TL use, CLIL, triple marking, VAK theory, knowledge organisers. Many teachers have survived five or six waves of reform. So when a colleague suggests abandoning the textbook for EPI or task-based learning, the knee-jerk response is often scepticism—less because of the idea’s merits, more because of past disappointments.

This phenomenon, described by Datnow & Castellano (2000) as “reform weariness,” reflects a professional culture bruised by waves of transient initiatives that lacked follow-through or meaningful support.

5. The CPD problem in MFL

Too much CPD for MFL teachers is generic and abstract. Whole-school sessions on growth mindset don’t translate into better Year 9 French. Even when CPD is subject-specific, it’s often overly theoretical—e.g., a session on cognitive load theory without time to apply it.

Research supports this frustration. Cordingley et al. (2015) found that CPD is only effective when it is sustained, subject-specific, and includes opportunities for collaborative planning and classroom experimentation. Language teachers need space to rehearse strategies, co-construct resources, and see each other teach—not just be told what works.

6. Cultural inertia and departmental conformity

In many MFL teams, especially under pressure, a culture of conformity takes root. Teachers follow the SoW, use the prescribed textbook, test regularly, and rarely deviate. Trying something new, especially alone, can feel isolating.

This type of ‘cultural script’ (Stigler & Hiebert, 1999) powerfully shapes behaviour: what the group does becomes what’s seen as normal, safe, and acceptable. Teachers may privately disagree with current practice, but fear inconsistency, scrutiny, or being seen as ‘difficult.’

7. Systemic pressures work against innovation

The system rewards the visible and quantifiable: neat books, finished knowledge organisers, progress in 20-minute learning walks, rising data points. But the best MFL pedagogy—dialogic modelling, scaffolded speaking, lexical recycling—often doesn’t show up clearly in these snapshots.

As Biesta (2009) warns, an obsession with measurable outcomes risks sidelining what really matters: meaning-making, autonomy, and deep learning. Teachers thus face a painful trade-off: satisfy accountability or serve pedagogy.

8. Ofsted’s implicit syllabus rewards conformity

A serious block to innovation is Ofsted’s implicit preference for highly structured, linear, grammar-heavy curricula. Departments using alternative approaches—spiralled grammar, task-based learning, or lexicogrammatical recycling—risk being judged as lacking clear progression, even when learners demonstrate fluency, accuracy, and confidence.

This misalignment between what works and what’s rewarded has been criticised by scholars like Mitchell (2022) and highlighted in recent debates about the narrowing effect of inspection frameworks. As Sherrington (2020) notes, performative pressures can stifle innovation by rewarding surface-level compliance over substance.

What actually helps

Changing the status quo in an MFL department requires more than enthusiasm and evidence—it demands a carefully scaffolded, relational, and strategic approach. Below is a stepped, research-informed pathway for initiating and embedding sustainable change:

1. Build Trust and Psychological Safety First

  • Cultivate a department culture where experimentation is safe and mistakes are seen as part of growth.
  • Avoid top-down imposition—change should feel collaborative, not mandated (Kelchtermans, 2005; Fullan, 2007).

2. Diagnose Before Prescribing

  • Conduct low-stakes professional dialogue: “What’s working for us? What’s not?”
  • Use teacher voice, student work, and classroom observations to understand current practice before proposing alternatives.
  • Use student voice if you are confident that students are unhappy with the way languages are currently taught. In settings where I have worked, this has provided a strong and effective rationale for change.

3. Share the Why, Not Just the What

  • Ground the change in clear, research-informed rationales (e.g., why retrieval beats re-teaching, why input matters more than output early on).
  • Use real student data, quotes, or examples to bring the evidence alive and make it feel relevant (Cordingley et al., 2015).

4. Start Small, Prototype Publicly

  • Pilot one change (e.g., sentence builders, oral fluency grids, or retrieval starters) with one year group or one class.
  • Encourage teachers to share what worked and what didn’t in a non-judgemental forum.

5. Model, Don’t Just Explain

  • Demonstrate the strategy live or through video: “Here’s how I do this retrieval routine.”
  • Break it down with scripting or worked examples—not just theory (Sherrington, 2020).

6. Co-construct Resources Together

  • Build materials as a team—sentence builders, speaking games, retrieval practice tasks, task sequences. Start by sharing your own resources, especially those that have worked well with your students.
  • This fosters ownership and deepens professional understanding through the process of creation.

7. Create Regular, Low-Stakes Reflection Loops

  • Hold short, structured debriefs: “What did we try this week? What did we notice?”
  • Use a reflective log, shared Padlet, or brief oral check-ins to keep momentum.

8. Leverage In-House Champions

  • Identify early adopters or confident practitioners and let them lead mini-demonstrations or clinics.
  • Peer-to-peer credibility often trumps external CPD in changing practice.

9. Celebrate Progress Publicly

  • Showcase examples of success in departmental meetings, newsletters, or CPD slots.
  • Acknowledge even small wins: “This tweak helped 8Y speak for longer!”

10. Align with Whole-School Structures Where Possible

  • Tie in with curriculum intent statements, appraisal targets, or whole-school literacy to avoid friction.
  • Show that innovation can coexist with compliance—not oppose it.

11. Offer Ongoing, Contextualised CPD

  • Avoid one-off sessions. Provide sustained time for trial, feedback, and revision.
  • Prioritise live coaching, team teaching, and joint planning over slide decks (Cordingley et al., 2015; Joyce & Showers, 2002).

12. Revisit and Recalibrate

  • After a term or two, pause to ask: What’s embedding? What’s drifting? Why?
  • Use this review to adjust and deepen the change, not abandon it.

Conclusion

Changing language teaching practice isn’t just a matter of better training or clearer evidence. It means addressing the deep-rooted structural, emotional, cultural, and systemic barriers that make change feel risky or futile.

If we want teachers to embrace new, research-informed methods, we need to create conditions where risk-taking is safe, support is practical, identity is respected, and innovation is genuinely rewarded. That won’t come from one-off CPD sessions or top-down mandates. It will come from courageous leadership, patient iteration, and communities of practice that put professional growth above performative metrics.

References

Ball, S. J. (2003). The teacher’s soul and the terrors of performativity. Journal of Education Policy, 18(2), 215–228.
→ On how accountability cultures shape teachers’ behaviours and choices.

Cordingley, P., Bell, M., Isham, C., Evans, D., & Firth, A. (2015). Developing Great Teaching: Lessons from the international reviews into effective professional development. Teacher Development Trust.
→ A key review highlighting that CPD must be subject-specific, sustained, and collaborative.

Datnow, A., & Castellano, M. (2000). Teachers’ responses to success for all: How beliefs, experiences, and adaptations shape implementation. American Educational Research Journal, 37(3), 775–799.
→ On “reform fatigue” and why teachers resist new initiatives after repeated failures.

Fullan, M. (2007). The New Meaning of Educational Change (4th ed.). Routledge.
→ Classic reference on how change depends on the interplay of systemic, personal, and cultural forces.

Hargreaves, A. (1994). Changing Teachers, Changing Times: Teachers’ Work and Culture in the Postmodern Age. Cassell.
→ Explores why teachers’ practices are shaped by deeply embedded professional and institutional norms.

Kelchtermans, G. (2005). Teachers’ emotions in educational reforms: Self-understanding, vulnerable commitment and micropolitical literacy. Teaching and Teacher Education, 21(8), 995–1006.
→ Explains how change can threaten a teacher’s sense of identity and competence.

Lortie, D. C. (1975). Schoolteacher: A Sociological Study. University of Chicago Press.
→ Seminal work explaining the conservative nature of teaching as a profession.

Mitchell, R. (2022). Grammar teaching in modern foreign languages: Back to the future? In Pachler, N. & Redondo, A. (Eds.), Teaching Modern Foreign Languages in Secondary Schools (2nd ed.). Routledge.
→ Addresses tensions in current UK curriculum frameworks and grammar-heavy approaches.

Pachler, N., & Redondo, A. (2014). A Practical Guide to Teaching Foreign Languages in the Secondary School. Routledge.
→ Discusses marginalisation of MFL in schools and challenges in classroom practice.

Priestley, M., Biesta, G., & Robinson, S. (2015). Teacher agency: An ecological approach. Bloomsbury.
→ Frames teacher resistance and change through the lens of professional autonomy and context.

Sherrington, T. (2020). The learning rainforest fieldbook. John Catt Educational.
→ Explores the dangers of performativity and the importance of pedagogical substance over style.

Stigler, J. W., & Hiebert, J. (1999). The Teaching Gap: Best Ideas from the World’s Teachers for Improving Education in the Classroom. Free Press.
→ Describes how implicit teaching cultures shape what teachers do and don’t do in classrooms.

Making Phonics Work in Second Language Learning: What the Research Really Says

Phonics instruction is a hotly debated topic in the realm of language education. While it is well-established in first language (L1) literacy programmes, its role in second language (L2) acquisition remains less clear. Yet recent policy developments are making this debate increasingly urgent: phonics teaching is mandated by the current National Curriculum in England, and from 2026, the new GCSE Modern Foreign Languages specification will include a reading aloud component. This new emphasis places decoding—and with it, phonics—at the heart of formal language assessment for the first time in decades.

Before diving into the research, it’s important to clarify a key distinction: phonics involves the explicit teaching of the relationships between graphemes (letters) and phonemes (sounds), often through systematic decoding instruction. Phonological awareness, on the other hand, refers to a broader, more implicit sensitivity to the sound structure of language, including the ability to segment and manipulate sounds in spoken words. While related, these are not interchangeable. A learner might have phonological awareness (e.g., being able to hear that “bat” and “cat” rhyme) without being able to decode those words in print—a gap that phonics instruction specifically addresses.

One study of particular interest is Erler’s (2014) research on adolescent MFL learners in the UK, which identified a high prevalence of phonological dyslexia-like symptoms. Her participants displayed poor grapheme-phoneme awareness, inaccurate decoding of unfamiliar words, and difficulty retaining new vocabulary encountered in print. Crucially, Erler found that many of these learners lacked the foundational phonological processing skills that underpin effective phonics application, leading to a cycle of reading avoidance and vocabulary stagnation. This is especially problematic in contexts where vocabulary is primarily acquired through reading.

Additionally, although many students appear to be reading silently, they are typically subvocalising—mentally ‘sounding out’ the words. Without secure decoding skills, this internal voicing can become distorted or erroneous, undermining both comprehension and vocabulary retention.

Moreover, reading aloud plays a pivotal role in second language acquisition. It reinforces grapheme-phoneme correspondence, strengthens pronunciation, and improves oral fluency. Research into oral passage fluency (e.g., Rasinski, 2004) has shown that repeated reading aloud enhances not only decoding accuracy and speed, but also prosody—intonation, rhythm, and phrasing—which are essential for natural and fluent language use. There is also evidence that oral fluency improvements feed back into silent reading fluency by automating decoding, which frees up cognitive resources for comprehension and inference-making. For many L2 learners, reading aloud thus offers a vital bridge between passive recognition and confident, fluent use.

In this post, I synthesise key findings from recent studies, highlight their outcomes, and propose a practical instructional framework grounded in the research.

What Does the Research Say About Explicit Phonics Instruction?

| Study & Authors | Key Findings | Outcomes | Strength | Weakness |
| --- | --- | --- | --- | --- |
| Dennis Murphy Odo (2021) | Meta-analysis of 46 studies with 3,841 L2 learners found moderate effects on word reading (g = 0.53) and strong effects on pseudoword reading (g = 1.51). | Phonics significantly improves decoding, especially of novel words. | Comprehensive data set across contexts. | Heterogeneity in study designs and populations. |
| Yeh & Connell (2014) – Taiwanese EFL Learners | Phonics combined with decodable text outperformed phonics-alone instruction in long-term word reading retention. | Integrating phonics with reading contexts increases retention. | Good control group comparison. | Limited to young learners in Taiwan; generalisability is low. |
| Zhao et al. (2023) – Chinese Learners | Explicit phonics enhanced decoding of unfamiliar English graphemes not present in Pinyin. | Helpful for learners from non-alphabetic L1 backgrounds. | Valuable for addressing orthographic transfer. | Focused narrowly on specific grapheme types. |
| Wu (2010) – Adult ESL Learners (BYU) | Extended explicit phonics instruction did not significantly improve word recognition in adult learners. | Phonics alone may not be sufficient for adults. | Focuses on older learners, often understudied. | Suggests diminishing returns in adult populations. |
| Lervåg & Aukrust (2010) – Norwegian EFL Learners | Found a strong predictive relationship between early phonics skills and later reading comprehension in English. | Early phonics skills lead to better long-term reading comprehension. | Longitudinal study design adds robustness. | Limited to one L1 background. |
| Tsou et al. (2006) – Taiwanese Primary Learners | Phonics instruction improved learners’ spelling and reading accuracy in English. | Phonics boosts both decoding and encoding in L2. | Balanced design including control group. | Short intervention period. |
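A brief aside on the effect sizes quoted above (this gloss is mine, not the studies’ own reporting): Hedges’ g is a standardised mean difference, i.e. it expresses the gap between intervention and comparison groups in pooled standard-deviation units, with a small-sample correction:

```latex
% Hedges' g: standardised mean difference with small-sample correction
g = \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}}
    \left( 1 - \frac{3}{4(n_1 + n_2) - 9} \right)
```

By the conventional benchmarks (roughly 0.2 = small, 0.5 = medium, 0.8 = large), g = 0.53 for word reading is a moderate effect, while g = 1.51 for pseudoword reading means phonics-taught groups outperformed comparison groups by about one and a half standard deviations on decoding novel words.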

What About Implicit Phonics Instruction?

| Study & Authors | Key Findings | Outcomes | Strength | Weakness |
| --- | --- | --- | --- | --- |
| Lee (2015) – Hong Kong ESL Learners | Implicit phonics instruction embedded in authentic reading improved reading confidence and proficiency among low-level learners. | Implicit methods support motivation and fluency in weaker learners. | Rich combination of qualitative and quantitative data. | Case study approach limits generalisation. |
| Cunningham (1990) – Phonemic Awareness Study | Explicit instruction in phonemic awareness was more effective than implicit exposure for beginning readers. | Metacognitive awareness improves early literacy outcomes. | Early evidence on importance of instruction clarity. | Older study; may not reflect current pedagogical realities. |
| Yildiz et al. (2013) – Turkish EFL Learners | Story-based implicit instruction improved students’ reading fluency over a phonics-only group. | Implicit input supports fluency and engagement. | Real-classroom setting improves ecological validity. | No follow-up to assess long-term retention. |
| Choi & Zhang (2018) – Chinese University Students | Extensive reading programmes with minimal phonics instruction boosted reading speed and comprehension. | Implicit exposure through volume of reading can foster reading efficiency. | Useful for higher proficiency learners. | May not apply to beginners or younger learners. |

Limitations in Current Research

Despite some encouraging results, the evidence base for phonics in L2 learning is still remarkably limited. There are simply too few robust, large-scale studies to allow for sound generalisations across learner types, age groups, and instructional settings. Many of the available studies focus exclusively on young learners, leaving the needs of adolescent and adult L2 learners largely underexplored. There’s also considerable variability in learners’ L1 backgrounds, intervention duration, and instructional designs, all of which muddy the waters when trying to draw clear, transferable conclusions. Longitudinal studies that track sustained impact over time are scarce, and comparisons between implicit and explicit methods—especially within the same population—are almost non-existent. In short, we need more targeted, nuanced research before we can confidently declare what works best, for whom, and under what conditions.

Lessons Learned and a Proposed Framework

From the research reviewed, several insights stand out:

  1. Explicit phonics instruction is most effective when systematic, contextualised, and paired with meaningful text.
  2. Implicit instruction, particularly when delivered through authentic reading, can support reading fluency and learner motivation, especially in weaker learners.
  3. The age, proficiency level, and L1 background of learners play a significant role in determining instructional effectiveness.

A Balanced Framework for L2 Phonics Instruction

Based on these insights, I propose the following phased framework:

  • Phase 1: Implicit Exposure – Use rich, meaningful texts to generate natural phoneme-grapheme awareness.
  • Phase 2: Explicit Instruction – Introduce phonics systematically, focusing on high-frequency and challenging correspondences.
  • Phase 3: Integrated Practice – Reinforce phonics through structured reading and writing tasks that recycle forms.
  • Phase 4: Monitoring and Feedback – Provide targeted support through ongoing assessment and error correction.

This framework aligns with the principles of Extensive Processing Instruction (EPI) by cycling input and output and supporting the proceduralisation of decoding skills through meaningful use (see Figure 1 below).

Figure 1: Implicit and Explicit SSC (Sound-to-Spelling Correspondence) instruction as embedded in the MARSEARS sequence.

Conclusion

The question is not whether phonics instruction should be used in L2 classrooms, but how and when. Evidence suggests that both explicit and implicit approaches have merit, depending on the learner profile and instructional goals. Rather than treating phonics as a one-size-fits-all toolkit, we should see it as a flexible component of a broader, integrated literacy approach. And in light of the new GCSE reading aloud requirement, this isn’t just a pedagogical issue—it’s a matter of curriculum alignment and equity.

When learners can confidently sound out a new word, they own it. And when reading aloud becomes less of a performance and more of a practice, that’s when real fluency begins to form. We owe it to our learners to give them both the tools and the time to get there.

PLEASE NOTE: I am delivering a workshop on Phonics Instruction on 11th June 2025

Prime First, Explain Later: The Secret to Effortless Grammar Learning

Introduction

When we think of grammar instruction in language teaching, we often picture uninspiring explanations of complex rules, rule charts, conjugation drills, and learners furrowing their brows over verb endings. But what if the brain had a more natural, less painful way to absorb grammar—one that mirrors how we acquire our first language? Welcome to the world of syntactic priming, a subtle yet powerful phenomenon backed by psycholinguistic research and woven into the very fabric of my methodology: Extensive Processing Instruction (EPI).

In this article, I explore what syntactic priming is, how it works in the brain, and why it plays such a pivotal role in EPI. Most importantly, I make a case for delaying explicit grammar instruction until learners have deeply engaged with structures through listening, reading, speaking, and writing.

What Is Syntactic Priming?

Syntactic priming is the tendency to repeat a grammatical structure after being exposed to it. Imagine hearing someone say, “If I had time, I would go to the gym,” and then, without thinking, producing a similar sentence later: “If I had more money, I would travel more.” That’s syntactic priming in action. It happens subconsciously, but it significantly shapes how we process and produce language.

Researchers have shown that this effect is not just a linguistic quirk. It’s a cognitive mechanism that enhances fluency, encourages implicit learning, and lowers the cognitive cost of grammar production.

Key Studies in Syntactic Priming

| Study | Findings |
| --- | --- |
| Bock (1986) | Native speakers unconsciously repeat sentence structures (e.g., passives, datives) after exposure. |
| Pickering & Ferreira (2008) | Syntactic priming is long-lasting and affects both comprehension and production. |
| McDonough & Mackey (2008) | L2 learners mirror structures in interactive tasks, leading to more complex output. |
| Shin & Christianson (2009) | Reading activities induce syntactic priming, supporting input-based teaching. |
| Jackson & Ruf (2017) | Syntactic priming helps learners map form to function, aiding long-term acquisition. |

How Syntactic Priming Works in the Brain

Our brains are hardwired to detect patterns. When learners hear or read a sentence with a certain structure, it creates a neural trace—a kind of mental echo. If they encounter the same structure repeatedly, it becomes easier to access and use. This explains why repeated exposure across different modalities (listening, reading, speaking, and writing) makes grammar feel “natural” even before it’s formally taught. If the repetition is gamified and interactive, the impact is of course likely to be even stronger.

This is not just theory. It’s grounded in decades of cognitive science suggesting that procedural memory (doing something) precedes declarative memory (knowing about something). In other words, we often use grammar before we can explain it.

How I Use Syntactic Priming in EPI

My EPI model is designed around the principle that structured input and output come first—and grammar comes later. Rather than frontloading grammar rules, I prime learners through a carefully sequenced series of tasks that cycle the same structure in multiple forms.

The activities I use across the four core skills—listening, reading, speaking, and writing—are specifically designed to expose learners repeatedly to the same grammatical structures in meaningful contexts. These activities not only scaffold language acquisition but also exploit the full potential of syntactic priming.

How EPI Leverages Syntactic Priming

| Phase | EPI Activities (Examples) | What It Does |
| --- | --- | --- |
| Listening | Narrow listening (same structure, different contexts); spot the intruder (identify the sentence that doesn’t fit syntactically); sentence bingo (identify structures aurally); jigsaw listening (reconstruct texts from segments) | Builds subconscious familiarity with grammatical forms; facilitates form-function mapping |
| Reading | Narrow reading (parallel texts using same structure); jigsaw reading (reordering text blocks); spot the nonsense (identify syntactically flawed sentences); odd one out (identify structural intruders) | Deepens structural recognition and primes specific grammar through repetition and contrast |
| Speaking | Sentence stealer (reusing peers’ syntactic patterns); pyramid translation (gradual collaborative translation); structured role play (scripted exchanges using target grammar); speed chat (short bursts using recycled syntax) | Encourages output using primed structures; reinforces fluency and accuracy through repetition |
| Writing | Gapped translation (fill in missing grammar in L1-to-L2 translation); narrow translation (convert L1 to structurally similar L2 texts); scaffolded writing (sentence builders and writing frames); dictogloss (rebuild a heard/read text) | Encourages structured output; consolidates primed forms through controlled and creative writing |

Each of these activities ensures that learners are repeatedly exposed to and actively reusing the same grammatical structure, across modalities. This sequencing primes the syntax implicitly, creating a strong procedural foundation before grammar is ever introduced explicitly.

Why Grammar Works Better After Priming

So why wait to teach grammar? Wouldn’t it be more efficient to explain the rule first and then apply it? Surprisingly, the research—and my classroom experience—suggest otherwise.

Here’s why teaching grammar after syntactic priming works better:

The Case for Delayed Grammar Instruction

| Reason | Explanation |
| --- | --- |
| Cognitive Readiness | Learners are more likely to absorb grammar when they’ve already encountered and used it subconsciously. |
| Boosts Noticing | Exposure helps students ‘notice the gap’, making grammar rules meaningful rather than abstract (Schmidt, 1990). |
| Reduces Cognitive Load | Delaying grammar prevents overwhelming working memory at early stages (Sweller, 1988). |
| Improves Output Accuracy | Learners use structures more fluently after implicit exposure, even before they know the rules. |
| Reduces Anxiety | Grammar becomes a friendly confirmation of success—not an intimidating hurdle. |

This sequencing transforms grammar instruction from a battle of rules into an “aha!” moment of discovery.

Final Thoughts: Teaching with the Brain in Mind

Syntactic priming isn’t just a curiosity—it’s a classroom ally. It reminds us that language is first and foremost a habit, not a theory. My EPI approach embraces this, guiding learners through rich, meaningful exposure before naming the grammar they’ve absorbed.

Teaching grammar after priming doesn’t mean abandoning rules. It means timing them better—so they click, not clash. It’s a shift from “Let me explain this structure” to “Let’s see what you’ve already figured out.” And when we get the sequence right, grammar stops being a burden and starts becoming what it should be: a gateway to clarity, creativity, and confidence in another language.

I hope this dispels the myth that grammar is not taught in EPI. If anything, when the MARSEARS sequence is executed correctly, it is taught better. For more on teaching grammar the EPI way, you may want to read this post.

Why Do So Many UK Students Drop Modern Foreign Languages?

In a country facing a rising language skills deficit, the dramatic drop in the number of students taking modern foreign languages at GCSE and A-level has become a matter of serious concern. Despite efforts like the English Baccalaureate (EBacc) to incentivise uptake, MFL remains one of the most frequently dropped subjects in the UK curriculum. But why?

Research spanning over a decade reveals that the problem is not unidimensional. It is systemic, rooted in curriculum design, assessment practices, social factors, and pedagogical traditions. Below are ten of the most widely evidenced reasons.

1. Perceived Difficulty

Students routinely identify languages as one of the hardest subjects, especially when compared to other GCSE or A-level options. The unfamiliar vocabulary, grammar rules, and listening/speaking components contribute to this perception. Many students feel overwhelmed and opt for subjects in which success appears more attainable. In many of my previous blogs and in my workshops I have tried to show that traditional teaching practices often contribute to this, especially the excessive reliance on explicit grammar instruction and overcharged curricula.

2. Lower Predicted Grades and Grading Severity

MFL subjects are among those with the harshest grade distributions. Students often achieve lower predicted or actual grades in languages than in other subjects. This discourages them from continuing with the subject, especially when university admissions and sixth-form entry hinge on strong academic performance.

3. Lack of Immediate Relevance

Many students fail to see the practical application of learning a foreign language. In a predominantly English-speaking society, languages are viewed as less relevant to daily life or future career plans, particularly when compared to STEM subjects or vocational courses that have clearer job pathways.

4. Limited Curriculum Time and Specialist Support

In many schools, particularly non-selective state schools, MFL has a reduced presence on the timetable. Cuts in lesson time, lack of qualified language teachers, and underfunding result in poor continuity, patchy provision, and limited support — all of which negatively affect motivation and attainment.

5. Peer Influence and Social Stigma

Peer pressure plays a major role in subject choices. MFL is sometimes seen as a “geeky” or “uncool” subject, and boys in particular may feel socially discouraged from taking it up. The perception that “nobody else is doing it” leads many to opt out regardless of personal interest or aptitude.

6. Teaching Approach and Curriculum Design

Traditional MFL instruction in the UK has long been criticised for its overemphasis on grammar, memorisation, and rote learning, rather than meaningful communication. When lessons are dominated by worksheets, verb tables, and repetitive drills, many students disengage — particularly lower-attaining learners.

7. Perceived Elitism and Lack of Inclusion

Languages are often associated with grammar schools, private schools, and higher-income families. Students in comprehensive schools — particularly those from disadvantaged backgrounds — are statistically less likely to continue with MFL, reinforcing the idea that languages are “not for people like me.”

8. Lack of Parental Encouragement and Role Models

In homes where parents don’t speak or value additional languages, learners are less likely to view language learning as important. The absence of visible multilingual role models in the community or media further contributes to low motivation and cultural disconnect.

9. Post-16 Curriculum Narrowing

The narrowing of subject options after age 16 means that students often prioritise core academic subjects or those with perceived higher value. Even when students enjoy MFL, the pressure to choose three A-levels typically leads them to drop languages in favour of subjects that better align with university or career ambitions.

10. Poor Transition from KS2 to KS3

The lack of consistency and continuity between primary and secondary language instruction is a major stumbling block. Students often restart from scratch in Year 7 regardless of prior learning, leading to frustration and the sense that their earlier efforts were pointless.

Summary Table: Why Students Drop Languages in the UK

| Reason | Explanation | Key Research / Source |
| --- | --- | --- |
| 1. Perceived Difficulty | Languages are seen as harder than other subjects. | DfE (2019); Tinsley & Board (2017); Taylor & Marsden (2014) |
| 2. Lower Predicted Grades / Grading Severity | Languages yield lower grades, deterring students. | Ofqual (2016); Routledge & Searle (2019) |
| 3. Lack of Immediate Relevance | Learners struggle to see real-life applications. | Tinsley & Doležal (2018); British Council (2022) |
| 4. Limited Curriculum Time / Support | Reduced lesson time and lack of specialist staff affect learning. | ALL (2021); Ofsted (2016) |
| 5. Peer Influence and Social Stigma | Seen as “uncool,” particularly by boys. | Courtney (2017); British Council (2020) |
| 6. Teaching Approach / Curriculum Design | Rote grammar learning over communicative use. | Graham et al. (2016) |
| 7. Perceived Elitism / Lack of Inclusion | MFL is skewed toward selective schools. | Strand (2015); Hodgen et al. (2018) |
| 8. Lack of Parental Encouragement / Role Models | Fewer multilingual influences at home. | Murphy & Unwin (2019); Tinsley & Doležal (2018) |
| 9. Post-16 Curriculum Narrowing | Fewer subject slots at A-level lead to early dropout. | DfE (2017); Hodgen et al. (2018) |
| 10. Poor Transition from KS2–KS3 | Gaps between primary and secondary learning disrupt continuity. | Graham et al. (2021); Ofsted (2021) |

What Now?

Reversing the trend will require more than just policy mandates. Schools need intelligent curriculum redesign, pedagogical innovation, and greater systemic support — especially in state schools serving disadvantaged communities. Unless these barriers are tackled head-on, the UK risks falling further behind in global language competence, limiting both individual opportunity and national competitiveness.

The decline is not inevitable. But the solution demands more than rhetoric — it demands reform.

What makes certain L2 learners better editors of their writing than others? – Part 1: Cognitive and affective factors identified by research

Why Some MFL Students Are Better at Monitoring Their Writing: A Research-Informed Guide for Teachers

In 2004 I completed my PhD investigation of L2 students’ self-monitoring habits as essay writers. One of the most striking differences I observed during my study was that while some learners instinctively spot and fix their errors as they write, others barely notice a mistake even when it stares them in the face. Why is that? Why do some learners consistently revise, self-correct, and refine, while others either can’t or won’t? In this article, I explore what the research says about the cognitive and affective factors that affect learners’ ability to monitor their own output during writing. In a follow-up post I will explore the implications for teaching self-monitoring skills with specific reference to A2–B1 L2 learners (GCSE level in England).

What Makes Some Learners Better Self-Monitors? Key Research Insights

Working Memory Capacity Learners with greater working memory can retain and manipulate multiple elements at once — such as subject-verb agreement, spelling accuracy, and word order. This supports real-time monitoring. Studies by Miyake & Friedman (1998) and Robinson (2002) show that working memory correlates strongly with writing fluency and grammatical accuracy in L2 learners.

Metalinguistic Awareness Metalinguistic awareness refers to a learner’s ability to reflect on and manipulate the structural features of language. Roehr-Brackin (2018) found that learners with high metalinguistic ability are significantly better at identifying and correcting errors. Ellis (2004) also links it to increased success in rule-based tasks.

L2 Proficiency Level Beginners often struggle with output monitoring simply because their cognitive resources are consumed by word retrieval and syntactic construction. Ortega (2009) and Kormos (2012) argue that fluency allows learners to ‘free up’ mental space for revision.

Motivation and Attitude Learners who are more motivated to improve often engage more deeply in self-regulation, including self-monitoring. Dörnyei (2001) and Ushioda (2011) highlight the strong link between motivation and strategy use, especially in revision. Learners with a growth mindset are more likely to see error correction as a path to mastery rather than failure.

Strategy Use and Training Proficient self-monitors employ specific strategies: rereading, pausing to plan, and rephrasing. Research by Graham & Harris (2005) and Oxford (1990) confirms that explicit strategy instruction significantly improves writing outcomes.

Can Self-Monitoring Be Taught? What the Research Says About Monitoring Strategy Training

Over the past two decades, a growing body of research has highlighted the potential of explicit strategy training to improve learners’ ability to monitor their output in second language writing. While early SLA research focused largely on input, more recent studies have turned their attention to metacognition — particularly the kinds of strategies learners can be taught to deploy during written production.

One of the earliest and most cited contributions comes from Graham and Harris (2005), who developed a Self-Regulated Strategy Development (SRSD) model for writing. Their research, primarily with L1 learners but extended to L2 contexts, showed that when students were taught to plan, monitor, evaluate, and revise their writing using checklists and self-talk routines, their writing quality significantly improved.

In L2-specific contexts, Manchón and Roca de Larios (2007) investigated how learners approach self-monitoring during composition and found that students often do not spontaneously engage in rereading or revision unless explicitly taught to do so. Their study called for more structured integration of strategy instruction in writing curricula.

Schoonen et al. (2003) conducted research on L1 and L2 writers and found that cognitive fluency (speed and ease of lexical and syntactic retrieval) is a predictor of successful monitoring behaviour. Their findings imply that strategy instruction needs to be paired with fluency-building activities.

Sasaki (2000) carried out a longitudinal study of Japanese EFL learners and demonstrated that explicit instruction in writing strategies, including self-monitoring and planning, had a long-term positive impact on both writing fluency and grammatical accuracy.

More recently, Fernandez Dobao (2012) examined peer feedback as a scaffold for self-monitoring and found that learners who engaged in structured peer-review tasks were better able to internalise error detection strategies and apply them independently.

Lastly, Conti (2001) proposed a principled framework for teaching self-monitoring in L2 classrooms which included the following features:

(a) Enhancement of learner error-related metacognition

(b) A long-term process of self-monitoring

(c) Modelling of and extensive practice in the use of effective self-correction / editing strategies

(d) Personalisation of error treatment

(e) Focus on the process rather than the product of writing and learning in general

(f) Synergistic use of various forms of error correction (EC)

Like many other explicit strategy training programmes (O’Malley and Chamot, 1990; Cohen, 1998; Macaro, 2001), his training consisted of the following phases:

(1) Pre-test needs assessment: The learners’ needs are assessed, usually through a combination of different instruments (e.g. questionnaires, interviews, think-aloud protocols and other forms of self-reports) in order to strengthen the validity of the data.

(2) Introductory phase: The rationale for the training is given and the target strategies are presented and modelled.

(3) Scaffolding phase: The learners receive extensive practice in the target strategies with the help of “scaffolding”, i.e., activities and materials which remind and encourage the learners to apply the target strategies.

(4) Autonomous phase: The learners are left to their own devices without any intervention on the part of the teacher.

(5) Evaluative phase: The learners’ use of the target strategies and their impact on their performance are verified. Normally the same diagnostic instruments used at pre-test are re-deployed here.

Conclusion: Monitoring Is Learnable

Self-monitoring isn’t a fixed trait. It emerges from a complex interplay of cognitive factors (like working memory), affective factors (like motivation), and educational experiences (like strategy training and task design). Every learner can get better at it, but only if we teach it. Our job as MFL teachers is not to expect self-monitoring as a by-product of instruction, but to teach it as a skill in its own right. With time, modelling, and structured reflection — much like the strategies outlined in Conti (2001) — we can nurture that quiet, internal editor that turns output into real learning.

References

  • Dörnyei, Z. (2001). Teaching and Researching Motivation. Harlow: Pearson Education.
  • Ellis, R. (2004). The Study of Second Language Acquisition. Oxford: Oxford University Press.
  • Fernandez Dobao, A. (2012). Collaborative writing tasks in the L2 classroom: Comparing group, pair, and individual work. Journal of Second Language Writing, 21(1), 40–58.
  • Graham, S., & Harris, K. R. (2005). Improving the writing performance of young struggling writers: The Self-Regulated Strategy Development model. Teaching Exceptional Children, 38(1), 19–27.
  • Grabe, W., & Zhang, C. (2013). Reading and writing together: A critical component of English for academic purposes teaching and learning. TESOL Journal, 4(1), 9–24.
  • Kormos, J. (2012). The role of individual differences in L2 writing. Journal of Second Language Writing, 21(4), 390–403.
  • Manchón, R. M., & Roca de Larios, J. (2007). Writing-to-learn in instructed language learning contexts. Journal of Second Language Writing, 16(4), 225–250.
  • Miyake, A., & Friedman, N. P. (1998). Individual differences in second language proficiency: Working memory as language aptitude. In A. F. Healy & L. E. Bourne Jr. (Eds.), Foreign Language Learning: Psycholinguistic Studies on Training and Retention (pp. 339–364). Mahwah, NJ: Erlbaum.
  • Ortega, L. (2009). Understanding Second Language Acquisition. London: Hodder Education.
  • Oxford, R. L. (1990). Language Learning Strategies: What Every Teacher Should Know. New York: Newbury House.
  • Sasaki, M. (2000). Toward an empirical model of EFL writing processes: An exploratory study. Journal of Second Language Writing, 9(3), 259–291.
  • Schoonen, R., van Gelderen, A., de Glopper, K., Hulstijn, J., Simis, A., Snellings, P., & Stevenson, M. (2003). First language and second language writing: The role of linguistic knowledge, speed of processing, and metacognitive knowledge. Language Learning, 53(1), 165–202.

10 Research-Backed Principles for Effective Grammar Assessment in MFL Classrooms

Introduction


When it comes to assessing grammar in a Modern Foreign Language (MFL) classroom, especially with beginner to intermediate learners, there is far more to it than ticking boxes. Grammar learning is messy, gradual, and happens best through real use, not through artificial drills. Good grammar assessments need to reflect how languages are actually acquired. After all, in real communication we use grammar to tell stories, explain ideas and argue our points, not just to get full marks on a worksheet. In this article, drawing on some of the most trusted research in second language acquisition (SLA) and cognitive psychology, we’ll walk through 10 key principles that can help make grammar assessment in MFL classrooms more meaningful, motivating, and effective.

1. Focus on Meaningful Use


Grammar should never be an end in itself. It’s a tool to communicate. So, assessments should check if students can use grammar meaningfully. For example, instead of giving students random sentences to correct, why not ask them to write a blog post about their weekend, naturally applying the passé composé (Samedi, je suis allé au cinéma). Assessments should make grammar “task-essential,” meaning students can’t succeed without using it. In other words, the task must naturally require the target structure for successful completion. For instance, a beginner Spanish activity might ask students to describe their family members using simple adjectives (Mi madre es simpática; Mi padre es alto), thereby assessing adjective-noun agreement authentically. When tasks demand real application of grammar for communicative purposes, learners are more likely to process the form deeply and retain it.

2. Prioritise Core Structures and 3. Put Mastery Before Breadth


Let’s be realistic: some grammar structures are far more useful than others. Assessment should zoom in on high-frequency, high-utility structures—the ones students will actually need in everyday life. Think present tense -er verbs, basic negatives (ne…pas), and simple prepositions. Nation (2001) argues that focusing on what students use most pays off much more than chasing rarer forms. Focus on a handful of non-negotiables every year, ensuring that you don’t lose sight of them, and keep assessing them until you believe your students deploy them fluently. Do bear in mind that the acquisition of a grammar structure may take months or even years; hence, assessing the acquisition of fewer foundational structures several times a year is likely to pay higher dividends than assessing a multitude of structures (à la NCELP/LDP). The former approach will ensure that mastery of key grammar structures is attained and retained in the long term, so that you don’t have to keep re-teaching them; the latter will obtain short-term gains that are likely to fade away in the space of a few weeks.

4. Assess Receptive and Productive Knowledge


Grammar competence usually shows up first in receptive skills like listening and reading before becoming visible in speaking and writing. That’s why it’s so important to assess both. For example, you could combine a listening task where students spot tense errors with a writing task where they narrate a past event. Larsen-Freeman (2015) and Hulstijn (2001) highlight that this sequence—recognising first, producing later—is how grammar naturally develops. Learners often notice forms months before they can reliably produce them. By embedding both recognition and production in assessments, we mirror this developmental path and avoid penalising students unfairly for being in the early phases of acquisition. Sadly, in many MFL classrooms this very important point is flouted and grammar assessments tend to be exclusively productive in nature.

5. Reward Emerging Accuracy and Scaffold Risk-Taking


Language learning is a messy, step-by-step process. Full accuracy doesn’t happen overnight! That’s why assessments should reward partial success. If a student writes je parl avec mon ami, we should acknowledge the correct stem, even if the ending is missing. Giving credit where it’s due builds confidence and encourages students to take risks with new structures, which is crucial for growth (Pienemann, 1998; Ellis, 2008; Dörnyei, 2005). Risk-taking is essential because learners often avoid using newly learned forms unless they feel safe experimenting. A reward system that notices approximate success gives learners emotional permission to stretch themselves beyond their current comfort zone.

6. Vary Modes of Assessment


Real language use involves all four skills—listening, speaking, reading, and writing. So, grammar assessment should do the same, but always keep the same structure in focus. For example, a listening task, a reading task, a speaking task and a writing task could all target the passé composé.


This varied approach also keeps students engaged and gives them different ways to show what they know (Dörnyei, 2009; Purpura, 2004). Exposure across skills also strengthens grammatical automaticity and reduces the reliance on slow, conscious monitoring.

7. Provide Immediate, Formative Feedback Where Possible (e.g. when preparing for high-stakes examinations)


Students benefit massively from getting feedback while they’re still working things out. During mock speaking exams, for instance, teachers could gently recast errors: il allé au parc? → il est allé au parc? This kind of immediate feedback supports restructuring and stops mistakes from fossilising (Hattie & Timperley, 2007; Lyster & Ranta, 1997). It’s important that feedback be timely, encouraging, and specific, drawing learners’ attention not only to what went wrong but to how the correct form feels and sounds in authentic usage.

8. Track Progress Over Time


Grammar learning doesn’t follow a straight line. It zigzags, loops, and sometimes even goes backwards before moving forward again. One-off tests can’t tell the full story. Instead, keep “Grammar Growth Trackers” where students log milestones—like the first time they correctly used a tricky structure. This aligns better with the staged, dynamic way grammar develops (Pienemann, 1998; Ellis, 2008). Using trackers also empowers learners to take ownership of their grammatical progress, making growth visible and therefore more motivating.

This links up with what we said above about prioritising a set of non-negotiable structures every year to assess several times at spaced intervals in order to ensure that the students become fluent with them.

9. Ensure Validity of Scoring in Mixed-Skill Tasks


In translation tasks, it’s common for students to make vocabulary and grammar mistakes in the same sentence. To be fair, assessments should separate the two: mark grammar independently of vocabulary. If a student mistranslates a noun but uses the correct verb tense, they should still earn grammar marks. Clear rubrics are key here (Alderson, 2005; Purpura, 2004). Without such clarity, learners risk being penalised twice for a single idea error, leading to skewed assessment outcomes that demoralise rather than diagnose accurately.

10. Avoid Over-Reliance on Guessable Formats


Tasks like grammaticality judgment activities (“Is this sentence correct or incorrect?”) can seem quick and easy, but they come with a big downside: students have a 50/50 chance of guessing the right answer! That’s not a true test of knowledge. If you do use grammaticality judgment tasks as assessment tools, you can enhance their validity by asking the students to justify their answers and correct the mistakes in the target sentences where applicable.

To really assess what students know, it’s always good practice to mix comprehension tasks with production tasks that require active use (Alderson, 2005; Hulstijn, 2001). Production tasks (like rewriting or filling blanks contextually) force deeper retrieval and construction processes, giving a much clearer window into learners’ true grammatical competence.
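To put a number on the guessing problem, here is a quick illustrative calculation (mine, not from the research cited above) of how likely a pure guesser is to score well on a binary correct/incorrect judgment quiz, using the binomial distribution:

```python
from math import comb

def p_score_at_least(n_items: int, k: int, p_guess: float = 0.5) -> float:
    """Probability of getting at least k of n binary items right by guessing alone."""
    return sum(comb(n_items, i) * p_guess**i * (1 - p_guess)**(n_items - i)
               for i in range(k, n_items + 1))

# On a 10-item correct/incorrect quiz, the chance of scoring 6+ by pure guessing:
print(round(p_score_at_least(10, 6), 3))  # 0.377
```

In other words, a student who knows nothing still has roughly a 38% chance of getting six or more out of ten right, which is precisely why requiring justifications and corrections matters.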

Conclusion


Moving towards smarter, research-informed grammar assessment means making a few key shifts: focusing on meaning over isolated accuracy, prioritising frequent structures, rewarding effort and risk-taking, and tracking growth over time. It also means creating tasks that reflect real communication—and avoiding shortcuts that let students succeed through guessing. With these principles in mind, grammar assessment can become a powerful tool not just for measuring learning, but for building it.


Summary Table: Key Principles for Grammar Assessment

| Principle | Key Idea |
| --- | --- |
| 1. Focus on meaningful use | Make the target structure task-essential within communicative tasks. |
| 2–3. Prioritise core structures; mastery before breadth | Assess a few high-frequency, non-negotiable structures repeatedly across the year. |
| 4. Assess receptive and productive knowledge | Recognition typically precedes production; test both. |
| 5. Reward emerging accuracy | Credit partial success to scaffold risk-taking. |
| 6. Vary modes of assessment | Target the same structure across listening, speaking, reading and writing. |
| 7. Provide immediate, formative feedback | Timely recasts support restructuring and help prevent fossilisation. |
| 8. Track progress over time | Log milestones at spaced intervals rather than relying on one-off tests. |
| 9. Ensure validity of scoring | Mark grammar independently of vocabulary in mixed-skill tasks. |
| 10. Avoid guessable formats | Require justification and correction, not just binary judgments. |

References

Alderson, J. C. (2005). Assessing Reading. Cambridge University Press.

Smith, S. & Conti, G. (2020). Memory: What Every Language Teacher Should Know. Independently Published.

Doughty, C., & Williams, J. (1998). Focus on Form in Classroom Second Language Acquisition. Cambridge University Press.

Dörnyei, Z. (2005). The Psychology of the Language Learner: Individual Differences in Second Language Acquisition. Routledge.

Dörnyei, Z. (2009). The L2 Motivational Self System. In Z. Dörnyei & E. Ushioda (Eds.), Motivation, Language Identity and the L2 Self (pp. 9–42). Multilingual Matters.

Ellis, R. (2005). Measuring Implicit and Explicit Knowledge of a Second Language: A Psychometric Study. Studies in Second Language Acquisition, 27(2), 141–172.

Ellis, R. (2008). The Study of Second Language Acquisition (2nd ed.). Oxford University Press.

Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81–112.

Hulstijn, J. H. (2001). Intentional and Incidental Second Language Vocabulary Learning: A Reappraisal of Elaboration, Rehearsal and Automaticity. In P. Robinson (Ed.), Cognition and Second Language Instruction (pp. 258–286). Cambridge University Press.

Larsen-Freeman, D. (2015). Research into Practice: Grammar Learning and Teaching. Language Teaching, 48(2), 263–280.

Lyster, R., & Ranta, L. (1997). Corrective Feedback and Learner Uptake: Negotiation of Form in Communicative Classrooms. Studies in Second Language Acquisition, 19(1), 37–66.

Marsden, E., & Kasprowicz, R. (2017). Foreign Language Practice Activities for Beginner Learners: A Comparison of Three Types of Activity. The Language Learning Journal, 45(1), 20–40.

Nation, I. S. P. (2001). Learning Vocabulary in Another Language. Cambridge University Press.

What Really Matters in Vocabulary Acquisition? A Ranked Analysis of Key Influencing Factors

Introduction

As I often reiterate in my posts, vocabulary acquisition is the lifeblood of successful second language learning. It underpins listening comprehension, reading fluency, expressive speaking, and even grammatical development. Yet, while it is clear that acquiring a large and usable lexicon is essential, it is less obvious which factors most powerfully drive vocabulary growth.

Over decades, research in second language acquisition (SLA), cognitive psychology, and educational linguistics has revealed that some variables matter far more than others. This article ranks fourteen such variables by their likely impact on L2 vocabulary learning, based on accumulated empirical evidence. Along the way, it seeks to offer teachers, curriculum designers, and language learners themselves a roadmap to more efficient, strategic word acquisition.

1. Word Frequency

As discussed extensively in a previous post, word frequency is consistently identified as the most powerful predictor of successful vocabulary acquisition. Nation (2001) and Webb (2007) demonstrated that words occurring frequently in input are learned faster and remembered longer than low-frequency words, even when difficulty is controlled. Learners generally need at least 8–12 meaningful encounters to establish a stable mental representation of a word, and around 20–30 spaced repetitions to embed it into long-term memory.

However, not all exposures are equally useful: the diversity of contexts matters. Seeing se débrouiller (to cope, manage) in various settings like Il s’est bien débrouillé au travail, Je dois me débrouiller seul, and Comment te débrouilles-tu ? helps learners build rich semantic networks. Similarly, a word like chômage (unemployment) is better consolidated when used across structures like le taux de chômage, être en situation de chômage, and lutter contre le chômage.

Empirical studies show that with fewer than six spaced encounters, learners tend to remember fewer than 30% of new words after a week (Webb, 2007), whereas with 10 or more spaced encounters, recall rises above 80%. Frequency in input remains therefore a non-negotiable driver of vocabulary learning.

2. Quality of Vocabulary Instruction

Exposure alone is insufficient. The quality of vocabulary instruction dramatically modulates how efficiently and deeply words are acquired. Schmitt (2010) identified that explicit phonological modelling, semantic explanation, morphological decomposition, and repeated retrieval all significantly boost vocabulary retention compared to simple exposure.

Barcroft (2004) found that learners exposed to multimodal presentation — image, gesture, spelling, pronunciation, and sentence use — recalled 34% more words after two weeks than learners taught using isolated translation.

Meanwhile, Karpicke and Roediger (2008) demonstrated that retrieval practice — the act of recalling a word actively — produced 150% better long-term retention than re-reading or re-exposing learners to the word passively.

In practical terms, presenting épargner (to save money) by showing a photo of someone saving coins, miming the action, writing the word, pronouncing it carefully, and embedding it in J’épargne chaque mois pour mes vacances allows multiple memory routes to be built simultaneously, strengthening the trace exponentially.

Table 1 – Top 5 Effective Practices for Vocabulary Instruction

| Practice | Description | Key Research Support |
| --- | --- | --- |
| Pre-teaching Vocabulary | Introducing and explaining key vocabulary before exposing learners to complex input tasks (e.g., texts, videos) | Nation (2001); Graves (2006) |
| Multimodal Presentation | Engaging visual, oral, physical (gestures) and textual channels to teach new words | Paivio (1991); Mayer (2001) |
| Retrieval Practice | Encouraging effortful recall through quizzes, low-stakes tests, gap-fills, and oral production | Kang (2016); Bahrick (1979) |
| Rich and Varied Contexts | Recycling words in multiple genres, syntactic frames, and communicative tasks | Webb (2007); Schmitt (2010) |
| Spaced Repetition | Revisiting vocabulary over expanding intervals (e.g., 1 day, 3 days, 7 days, 14 days) | Cepeda et al. (2006); Nation (2001) |
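The expanding-interval schedule mentioned above (1, 3, 7, 14 days) can be sketched as a simple review planner. This is a minimal illustration; the interval values are examples from the table, not a prescription, and the function name is mine:

```python
from datetime import date, timedelta

# Illustrative expanding intervals, in days, matching the example schedule above.
INTERVALS = [1, 3, 7, 14]

def review_dates(first_seen: date, intervals=INTERVALS):
    """Return the dates on which a word first met on `first_seen` should be revisited."""
    day = first_seen
    schedule = []
    for gap in intervals:
        day = day + timedelta(days=gap)  # each gap is added to the previous review date
        schedule.append(day)
    return schedule

plan = review_dates(date(2024, 9, 2))
print([d.isoformat() for d in plan])
# ['2024-09-03', '2024-09-06', '2024-09-13', '2024-09-27']
```

Because each interval is added to the previous review date, the gaps expand cumulatively, which is the core idea behind spaced repetition tools such as Anki.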

3. Time Spent on Word Learning in the Classroom

A major body of research links time on vocabulary learning with larger vocabulary gains — but only when the time is cognitively engaged (Nagy & Townsend, 2012).

Spending as little as 10 additional minutes per session on structured, meaningful vocabulary activities can lead to 20–40% greater acquisition rates across a semester (Graves et al., 2011).

However, Pashler et al. (2007) and others stress that distributed retrieval is essential: learners must recall and use words actively at spaced intervals, not merely re-read them.

Whiteboard quizzes, retrieval flashcards, oral sentence building races, and quickfire memory games are among the classroom strategies that most efficiently drive vocabulary retention. Research indicates that when vocabulary practice involves active, varied retrieval and manipulation, retention rates can triple compared to passive exposure (Webb, 2007).
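A quickfire retrieval check of the kind described above could be scored in a few lines. The deck, glosses, and function name below are purely illustrative:

```python
# A toy deck mapping French words (from the examples in this article) to English glosses.
DECK = {"épargner": "to save money",
        "chômage": "unemployment",
        "se débrouiller": "to cope, to manage"}

def score_retrieval(deck: dict, answers: dict) -> float:
    """Compare a learner's recalled glosses against the deck; return the fraction recalled."""
    correct = sum(1 for word, gloss in deck.items()
                  if answers.get(word, "").strip().lower() == gloss.lower())
    return correct / len(deck)

# A learner who recalls two of the three glosses scores about 67%.
result = score_retrieval(DECK, {"épargner": "to save money",
                                "chômage": "unemployment",
                                "se débrouiller": "to argue"})
print(round(result, 2))  # 0.67
```

The point of the active-recall step is that the learner must produce the gloss before seeing it, which is the effortful retrieval that the research above links to stronger retention.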

4. Word-Related Factors

Not all words are equally difficult to learn. Properties like concreteness, part of speech, morphological transparency, and polysemy make substantial differences.

Paivio’s (1991) dual-coding theory found that concrete words like bousculer (to jostle) or gronder (to scold) — easily visualised actions — are remembered more easily than abstract verbs like ressentir (to feel) or soutenir (to sustain/support emotionally).

Similarly, morphologically transparent words (prévoir — to foresee) are acquired faster than opaque forms like se méfier (to distrust), where no immediate morphology clues are available.

Polysemous words — like livre (book/pound) — add complexity: learners must develop multiple semantic traces to master full meaning usage, which research shows may require 20% more exposures on average (Crossley et al., 2009).

Table 2 – Word-related factors affecting learnability

| Factor | Effect on learnability |
| --- | --- |
| Concreteness | Easily visualised words (bousculer, gronder) are remembered better than abstract ones (ressentir, soutenir). |
| Morphological transparency | Transparent forms (prévoir) are acquired faster than opaque ones (se méfier). |
| Polysemy | Words with multiple meanings (livre) may require around 20% more exposures (Crossley et al., 2009). |

5. Time Spent by Learners Outside the Classroom

Outside engagement is critical. Nation (2001) and Webb (2007) found that learners who studied independently for even 10–15 minutes per day using structured activities gained 25–30% more vocabulary per term than their peers relying only on lessons.

Webb (2008) found that reading one graded reader (around 8,000–10,000 running words) resulted in incidental acquisition of 40–50 new words.

Listening to podcasts like InnerFrench with transcripts, maintaining a digital spaced repetition deck (e.g., The Language Gym or Anki), or writing short diary entries using newly learned words powerfully extend classroom learning into deep consolidation.

6. Learners’ Mastery of Word-Learning Strategies

Research consistently shows that learners who use conscious strategies — like creating mental imagery, using semantic grouping, deducing from context, and self-quizzing — outperform peers by margins as large as 50% (Gu & Johnson, 1996; Schmitt, 2000).

Explicit instruction and modelling of strategies (e.g., visualising lutter by imagining a boxing match, or mapping synonyms and antonyms) can dramatically enhance vocabulary depth and flexibility.

Learners must not only be taught words — but also taught how to learn words effectively.

Table 3 – Most researched word-learning strategies

| Strategy | Example |
| --- | --- |
| Mental imagery | Visualising lutter by imagining a boxing match. |
| Semantic grouping | Mapping synonyms and antonyms of a new word. |
| Inferring from context | Deducing the meaning of an unknown word from surrounding text. |
| Self-quizzing | Testing oneself on recently learned words. |

7. Learners’ Current Level of Proficiency

Higher proficiency enables deeper and faster vocabulary acquisition. Laufer (1998) found that intermediate learners could infer meaning from context approximately twice as accurately as beginners.

Advanced learners can make use of suffixes (-ment, -tion), prefixes (re-, dé-) and infer unknown words by analogy with known ones, making them far more efficient in incidental vocabulary learning.

Curriculum scaffolding should therefore carefully match input difficulty to learners’ inferencing abilities.

8. Learners’ L1 Vocabulary and Literacy

Strong L1 literacy correlates positively with L2 vocabulary acquisition.
Bilinguals or literate L1 learners more readily notice morphological patterns (e.g., entraide — mutual help, métissage — cultural mixing) and leverage semantic clues to decode new words.

Cummins’ (2000) Common Underlying Proficiency model suggests that cross-linguistic transfer of cognitive skills explains much of the variance in vocabulary learning success rates.

9. Learners’ Working Memory

Working memory capacity — particularly phonological memory — predicts the ability to rehearse and consolidate novel word forms.

Gathercole and Baddeley (1990) showed that phonological memory accounted for 40–50% of differences in new word learning rates among children and adults.
Training memory using rhythmical drills, chants, and repetition bursts significantly boosts early-stage vocabulary acquisition.

10. Learners’ Attitudes to Word Learning

Dörnyei (2005) emphasised that motivation was one of the strongest mediators of vocabulary acquisition: highly motivated learners devoted up to twice as much time to vocabulary practice outside class and retained words 25–30% better.

Simple motivational practices — setting achievable goals, celebrating milestones (e.g., Word Wizard awards), linking vocabulary to learners’ interests (e.g., sports, music, travel) — foster higher learner engagement and persistence.

11. Learners’ L1(s)

The influence of a learner’s first language (L1) on vocabulary acquisition depends largely on the degree of typological similarity between the L1 and the target language. Ringbom (2007) showed that when learners’ L1 and L2 share cognates or similar morphosyntactic structures, vocabulary transfer is facilitated, sometimes accounting for up to 25% faster acquisition during early learning stages.

However, when no such cognate scaffolding exists, as in the case of learners moving from a non-Indo-European L1 (e.g., Mandarin, Korean) to French, explicit vocabulary learning becomes significantly harder.
Thus, a Spanish learner may infer déjeuner (to have lunch) through analogy with desayuno, whereas a Japanese learner has no such linguistic support and must build word knowledge from scratch.

Nevertheless, studies (Lindqvist, 2009) demonstrate that cross-linguistic training — teaching students to consciously notice similarities and differences — can substantially reduce this gap, boosting the speed of acquisition even for distant-language learners.

12. Curriculum and Assessment Focus

Curriculum design and assessment philosophy exert profound effects on vocabulary learning trajectories. When curricula focus heavily on explicit vocabulary recycling, strategy training, and meaningful usage, learners’ gains in both receptive and productive vocabulary are markedly higher (Carlo et al., 2004).

Conversely, traditional curricula that prioritize memorisation of isolated word lists, followed by discrete-point testing (e.g., fill-the-blank items requiring only recall of definitions), tend to produce shallow and transient vocabulary knowledge (Schmitt, 2010).

Evidence from Stæhr (2009) shows that when assessments reward active, flexible usage of vocabulary (e.g., writing tasks, oral summaries), learners acquire and retain 20–30% more words compared to cohorts evaluated solely through isolated vocabulary tests.

Curriculum designers must therefore weave rich, usage-oriented vocabulary tasks into both teaching sequences and assessments to optimise acquisition.

13. Learners’ Socio-Economic Factors

Socio-economic status (SES) undeniably impacts early vocabulary development. Hart and Risley (1995) found that by age three, children from high-SES households had been exposed to around 30 million more words than children from lower-SES backgrounds, a gap that persists and compounds into adolescence.

However, in the context of second language learning, SES effects can be mitigated substantially.
Structured interventions focusing on explicit vocabulary teaching, extensive listening and reading programs, strategy instruction, and repeated retrieval practice have been shown to narrow vocabulary gaps within as little as two years (Stahl & Fairbanks, 1986).

Thus, while learners’ socio-economic backgrounds shape their starting points, principled teaching can dramatically alter their vocabulary growth trajectories.

14. Learners’ Age

Age affects vocabulary acquisition differently depending on the stage of learning and the nature of exposure.

Adults often demonstrate faster initial acquisition rates due to more developed working memory, greater metalinguistic awareness, and stronger inferencing skills (Singleton, 2001). For instance, older learners can explicitly deduce that entraîner and entraîneur share the root entraîne- and thus infer meanings more easily.

Greater working memory capacity does give older learners (see Table 4 below) a significant advantage in formal, acquisition-poor environments. However, children who are immersed in a rich L2 environment from an early age often surpass adults in achieving native-like depth, automaticity, and nuance over time (Muñoz, 2006). They benefit from neural plasticity and may build implicit, proceduralised word knowledge more effectively if exposed early and intensively.

Thus, while age offers certain cognitive advantages or disadvantages depending on the stage and setting, it is the quality and quantity of input, engagement, and recycling that ultimately determine long-term vocabulary outcomes.

Table 4 – Ease of vocabulary learning as a function of age-related working memory capacity

Age | Typical Working Memory Capacity | Ease of Vocabulary Learning | Research Evidence
8 years old | Limited (2–3 new lexical items reliably at once); phonological loop still developing | Struggles to retain multiple unfamiliar words without very heavy scaffolding and multimodal reinforcement; benefits from visual aids, repetition, and chunking | Gathercole & Baddeley (1990); Alloway et al. (2006)
11 years old | Improved (3–5 new lexical items at once); faster rehearsal speed; better chunking ability | Can retain moderate sets of new words (~4–6 words per exposure) if supported by strong context and phonological support; still vulnerable to overload | Gathercole (1995); Service (1992)
16 years old | Adult-like capacity (5–7 new lexical items at once); strategic rehearsal and retrieval emerge | Can learn larger vocabulary batches (~6–10 words) with limited scaffolding; benefits more from self-testing and morphological awareness strategies | Baddeley (2003); Ellis (2002); Nation (2001)

Conclusion

Vocabulary acquisition is a multidimensional process, shaped by a combination of word-related, learner-related, instructional, and environmental variables.
These factors, however, do not carry equal weight.

Frequency of exposure, high-quality instruction, time-on-task, and deep, meaningful engagement with vocabulary emerge as the most powerful and controllable levers for promoting robust lexical growth.
While learner-specific factors like L1 background, working memory, age, and socio-economic context influence the rate and route of vocabulary learning, they are consistently secondary to the curriculum, pedagogy, and cognitive practices implemented.

For teachers, this ranking provides a clear roadmap: focus energy and time on ensuring rich input, principled recycling, frequent retrieval, strategy training, and motivation-enhancing practices.
By doing so, we dramatically increase our learners’ chances of building a powerful, flexible, and enduring lexicon — the foundation upon which true language mastery rests.

Summary table

The Impact of Listening-Strategy Instruction on Lower-Intermediate Students in Graham and Macaro (2008)

Introduction

As I gear up for my upcoming speaking tour—six cities across Australia over the next two weeks—I’ve been reflecting on the foundational studies that have shaped the way we teach and think about instructed second language acquisition. In doing so, I’ve returned once more, this time with a particular focus on aural strategy instruction, to a study that has been quietly transformative in how we approach listening: Graham and Macaro (2008).

Why this study? Because listening comprehension, though undeniably central to language learning, is often the least explicitly taught of the four core skills. For beginner and lower-intermediate learners, the challenges are considerable: decoding fast speech, parsing unfamiliar vocabulary, coping with reduced forms, and making meaning with limited phonological awareness. These are compounded by low self-confidence and scarce exposure to naturalistic input.

In their 2008 paper, Graham and Macaro proposed a practical and well-structured model of listening strategy instruction tailored to the secondary school classroom. Their research showed that explicitly teaching students how to listen—rather than simply testing whether they had understood—led not only to improved comprehension outcomes but also to increased confidence and greater strategic awareness. The study’s findings have since been echoed in subsequent replications by other researchers, underscoring its potential.

But as any teacher will ask: is this model realistic in the daily grind of an overloaded classroom? Can its benefits be maintained when scaled down or adapted? And what does it actually look like in practice?

In this article, I explore these questions by offering a thorough summary of the study, unpacking its methodology and findings, and evaluating the feasibility of adapting its core principles into mainstream language teaching contexts.

Summary of Graham & Macaro (2008)

Graham and Macaro’s study investigated the effects of explicit strategy instruction on the listening skills of Year 9 French learners in the UK. Based on Vandergrift’s metacognitive framework, the researchers developed a 12-week programme in which learners were taught to plan, monitor, problem-solve, and evaluate their listening. Students were guided through a strategy cycle comprising pre-listening prediction, first listening for gist, post-listening strategy reflection, second listening with focused goals, and an evaluative phase.

Pre-Test Findings

Before the intervention, both the experimental and control groups completed a diagnostic listening comprehension assessment. The findings revealed that learners struggled particularly with understanding spoken French when confronted with longer texts, fast speech, or tasks requiring inference. The majority of students approached listening as a test of word recognition and expressed frustration when they could not understand every word. Metacognitive awareness, such as the ability to reflect on strategies or evaluate performance, was low across both groups, and few students reported using prediction, selective attention, or inference strategies. These pre-test results provided a strong rationale for introducing explicit strategy training.

Methodology

The study employed a quasi-experimental design with two matched groups: one receiving strategy-based instruction (experimental group) and one following traditional listening pedagogy (control group). Both groups consisted of Year 9 learners with comparable attainment levels.

The experimental group received 12 weeks of strategy instruction embedded in their regular listening lessons. Each session followed a structured cycle:

  1. Pre-listening: Students activated prior knowledge and predicted content based on titles and visuals.
  2. First listening: Students listened for gist without writing answers.
  3. Reflection: Learners discussed what they understood and the strategies they used.
  4. Second listening: Focused listening tasks aimed at identifying detail or specific language features.
  5. Post-listening: Learners evaluated their comprehension and reflected on the effectiveness of the strategies used.

The control group received the same listening texts and tasks but without explicit strategy teaching or metacognitive reflection. Assessments included pre- and post-tests of listening comprehension, a metacognitive awareness questionnaire, and follow-up interviews.

Materials Used

The materials for the intervention were carefully designed to be age-appropriate, linguistically accessible, and pedagogically structured. Audio recordings consisted of scripted and semi-authentic dialogues between native French speakers, selected to include a range of topics relevant to the learners’ syllabus, such as school life, leisure activities, and personal descriptions. Each audio track was accompanied by visual prompts (e.g., pictures, headlines, or comprehension grids) to aid prediction and focus attention.

Listening tasks were designed to assess both gist and detail comprehension and included true/false statements, multiple-choice items, and sequencing tasks. Additionally, learners were given printed worksheets that prompted pre-listening predictions and post-listening evaluations. These materials also embedded strategy prompts such as “What do you already know about this topic?” and “What helped you understand the second time?”—thereby reinforcing strategic habits.

Figure 1 – Listening reflection sheet (listening strategy checklist)

Target Strategies Modelled

Throughout the intervention, teachers explicitly modelled and scaffolded the following strategies:

  • Prediction: Using context and visual prompts to anticipate content before listening.
  • Selective Attention: Focusing on key words and ignoring unimportant details.
  • Inference: Making educated guesses about unfamiliar language using known vocabulary and tone.
  • Monitoring: Checking for understanding and recognising when something doesn’t make sense.
  • Problem-solving: Using a combination of strategies to recover lost meaning or continue listening despite gaps.
  • Evaluation: Reflecting on how well a strategy worked and adjusting approach accordingly.

Each strategy was not only introduced and explained but also consistently revisited across lessons. Teachers modelled strategies using think-aloud techniques and then encouraged learners to articulate their own strategic thinking during reflection phases.

Results

The results showed measurable differences between the two groups. Students in the experimental group showed significantly greater improvement on post-listening comprehension tests than those in the control group. These students demonstrated gains in both gist and detail comprehension, as recorded through test scores and self-reports. They were also more likely to report applying strategies such as prediction, inferencing, and monitoring during listening tasks.

The metacognitive awareness questionnaire revealed measurable gains in the experimental group in areas such as planning before listening, evaluating after listening, and remaining focused despite difficulties. In contrast, the control group showed minimal change. Qualitative data from student interviews reinforced these findings: learners in the strategy group reported feeling better equipped to cope with difficult passages and more willing to persevere when comprehension broke down. Some students expressed a sense of increased satisfaction and motivation, though such statements should be interpreted cautiously given the self-reported nature of the data.

Feasibility in Real-Classroom Settings

Despite its strengths, replicating this model on a daily basis in the typical classroom is not straightforward. Unlike the tightly controlled and researcher-supported context of the original study, real classrooms often struggle with behavioural disruptions, student disengagement, time constraints, and teacher workload.

Teachers in mainstream settings typically teach multiple year groups, plan lessons across key stages, and are accountable for high-stakes exam results. Embedding multi-phase listening cycles and structured metacognitive reflection into every lesson would require a level of consistency and time that most timetables do not allow. Furthermore, students may lack the metacognitive maturity to engage meaningfully with strategy talk, particularly in schools where language learning is not strongly valued.

It is also important to consider researcher bias. Graham and Macaro were highly motivated to make their intervention work: having invested considerable time and effort in its design, delivery, and analysis, they needed observable results to justify publication. This level of intensity and motivation is rarely transferable to busy school settings. While the intervention was undoubtedly effective, the conditions of its delivery were unusually favourable.

Effort vs Outcome: A Cost-Benefit Evaluation

When evaluating the cost-benefit balance of this intervention, the picture is mixed. On the one hand, the learning gains were statistically significant and pedagogically meaningful. On the other, the intervention was labour-intensive, and its full implementation is unlikely to be sustainable at scale. A more pragmatic approach is to identify core components of the strategy cycle that can be maintained regularly, while spacing out or simplifying the more demanding elements.

Scalable Listening Strategy Implementation Matrix

Component | Effort Required | Impact on Learning | Feasibility in Busy Classroom | Recommended Frequency
Pre-listening prediction routine | Low | Moderate | High | Every lesson (5 mins)
Multiple listening cycles (gist/detail) | Medium | High | Medium | 1–2x per week
Post-listening metacognitive discussion | High | High | Low | Occasional (after assessments)
Strategy checklist use (see Figure 1 above) | Medium | Moderate | High | Weekly self-check
Teacher modelling of strategy use | Medium | High | Medium | Weekly modelling
Student reflective logs | High | Moderate | Low | End of unit

Conclusion

Graham and Macaro’s 2008 study remains a cornerstone of strategy-based listening instruction, highlighting the value of teaching learners how to think about and manage their listening processes. Their 12-week intervention provided learners with tools that improved both performance and confidence, and encouraged greater listening autonomy. However, its full implementation is pedagogically ambitious and structurally difficult in many secondary schools. A strategic compromise is both necessary and desirable: teachers can adopt a reduced set of high-impact, low-effort routines that nurture listening autonomy without overwhelming their schedule. With careful planning and judicious adaptation, the core principles of metacognitive listening training can still yield meaningful gains—even under typical classroom constraints. The key is not to replicate the study in its entirety, but to extract and embed what works, sustainably.

A very important lesson to be learnt from this study, which confirms what I have observed from my own implementation of a metacognitive strategy-training programme as part of my PhD, is that teaching strategies is a challenging, time-consuming and effortful process which must be sustained over time in order for it to pay substantive dividends. Hence, if we want listening strategies to be taught effectively, we need to design a bespoke programme based on the needs of our students and deliver it systematically over a period of three months or even longer (my hunch is more like six months!).

References

Graham, S., & Macaro, E. (2008). Strategy instruction in listening for lower-intermediate learners of French. Language Learning, 58(4), 747–783.

Vandergrift, L. (1997). The strategies of second language (French) listeners: A descriptive study. Foreign Language Annals, 30(3), 387–409.

Vandergrift, L., Goh, C. C. M., Mareschal, C. J., & Tafaghodtari, M. H. (2006). The metacognitive awareness listening questionnaire: Development and validation. Language Learning, 56(3), 431–462.

Goh, C. C. M. (2000). A cognitive perspective on language learners’ listening comprehension problems. System, 28(1), 55–75.

Cross, J. (2011). Metacognitive instruction and second language listening: Theory, practice and research implications. TESOL Quarterly, 45(2), 269–279.

Using Confidence Rating Scales to Deepen Vocabulary and Grammar Learning in the MFL Classroom

Introduction: What if students could tell us not just what they know, but how sure they are about it?

In modern language classrooms, we work hard to help our students master vocabulary and grammar, but how often do we ask them how confident they are in their knowledge? Confidence rating scales are a simple but powerful way to do just that. By getting students to reflect on how certain they are when recalling a word or grammar rule, we give them the chance to take ownership of their learning. Even better, the research backs us up.

What are confidence rating scales?

Confidence rating scales are tools that ask students to rate how sure they are about an answer, a word, or a grammar structure. You might use a simple three-point scale:

  • 😊 I’m sure I know this
  • 😐 I think I know it, but I’m not sure
  • 😟 I don’t know this yet

Or you might go a little deeper with a 0–3 scale:

Score | What it means
3 | I knew this instantly and confidently
2 | I remembered part of it with hesitation
1 | I needed help or guessed
0 | I didn’t recall it at all

Students use these after a quiz, during flashcard work, in retrieval grids, or even as part of dictation or translation tasks.

Why should we use them in MFL?

1. They promote real learning, not false confidence. Sometimes students get things right by chance or because it “looks familiar.” But they might not be able to use it fluently or accurately later. Confidence scores help them realise the difference between recognising a word and truly knowing it.

2. They improve memory. Cognitive scientists like Roediger and Butler (2011) have shown that the harder your brain has to work to recall something, the stronger the memory becomes. When students notice which words they’re shaky on and review them again later, they’re more likely to remember them long-term.

3. They support spaced retrieval. Instead of cramming, students revisit tricky words over time—especially the ones they’re not confident with. This helps beat the forgetting curve (Ebbinghaus) and means they’re more prepared for later assessments.

4. They build metacognition. Metacognition means thinking about your own thinking. When students reflect on how sure they are, they learn to study more strategically. Research shows this self-awareness makes learners more effective (Dunlosky & Metcalfe, 2009).

How can we use confidence ratings in practice?

Here are a few examples that are easy to implement in your classroom:

  • During vocabulary quizzes: After each answer, students rate their confidence—even if they got it right. This tells them which words to review again.
  • In vocab books or learning journals: Students can keep a running record of how confident they feel about new words, updating scores over time.
  • With retrieval grids or sentence builders: After completing a task, students mark confidence levels for each word or structure.
  • At revision time: Ask students to sort their vocabulary into three columns: “I’m confident,” “I need a little review,” and “I need to revise this thoroughly.”
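For departments that already track quiz scores in a spreadsheet or a simple script, the 0–3 scale above lends itself to light automation. Here is a minimal, hypothetical Python sketch (the function and column names are my own, not drawn from any of the studies cited) that sorts a set of self-ratings into the three revision columns described in the last bullet:

```python
# Minimal sketch: triage vocabulary by self-reported confidence (0-3 scale).
# Thresholds mirror the scale earlier in this article: 3 = secure,
# 2 = needs a little review, 0-1 = needs thorough revision.

def sort_by_confidence(ratings):
    """ratings: dict mapping each word to a 0-3 confidence score."""
    columns = {
        "confident": [],
        "needs a little review": [],
        "revise thoroughly": [],
    }
    for word, score in ratings.items():
        if score == 3:
            columns["confident"].append(word)
        elif score == 2:
            columns["needs a little review"].append(word)
        else:  # 0 or 1: guessed, needed help, or no recall at all
            columns["revise thoroughly"].append(word)
    return columns

# Example: one student's self-ratings after a five-word quiz
quiz = {"vacances": 3, "loisirs": 2, "actuellement": 1,
        "entraide": 0, "météo": 3}
result = sort_by_confidence(quiz)
print(result["revise thoroughly"])  # the words to revisit after a spaced interval
```

The low-confidence column is the one to schedule for revisiting a week later: the same sorting exercise described above, just done automatically.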

What does the research say?

There’s a growing body of research supporting this approach:

  • Roediger & Butler (2011) found that retrieval combined with self-monitoring leads to better long-term retention.
  • Dunlosky & Metcalfe (2009) show that confidence ratings help students target what they really need to work on, reducing wasted time.
  • Pyc & Rawson (2009) demonstrated that recalling difficult or low-confidence items leads to stronger memory than reviewing what’s already known.
  • Nation (2007) and Webb (2007) both emphasise that multiple exposures and reflection are key to deep vocabulary learning.

Final thoughts: A small change with a big impact

Confidence rating scales are easy to use, cost nothing, and take very little time. Yet they can dramatically improve how our students learn and revise. They also give us teachers valuable insight: not just what students got right, but what they’re unsure about—even if they’re not saying it out loud.

In the context of GCSE or KS3 MFL, especially with spaced retrieval cycles for vocabulary (like holidays, media, or school life), confidence ratings add a layer of depth. They help us and our students focus not just on performance, but on progress.

Want to try it? Start small: at the end of your next vocab lesson, ask students to rate how confident they feel about 10 key words. Use smiley faces, numbers, or thumbs up/down. Then revisit those low-confidence words in a week.

It’s a simple change. But in language learning, simple and consistent often wins the race.

References

  • Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences.
  • Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Sage Publications.
  • Pyc, M. A., & Rawson, K. A. (2009). Testing the retrieval effort hypothesis. Journal of Memory and Language.
  • Nation, I. S. P. (2007). The vocabulary learning strand.
  • Webb, S. (2007). The effects of receptive and productive vocabulary learning. Studies in Second Language Acquisition.

Cognates: Their Power, Pitfalls and Untapped Potential, and Implications for the Classroom

Introduction

Let’s face it: in the chaotic symphony of vocabulary lists, verb drills, and assessment pressure, cognates are the low-hanging fruit we often forget to pick. And yet, they’re the closest thing to linguistic gold we’ve got in the MFL classroom—a built-in bridge between English and French (or Spanish, or German) just waiting to be walked across.

Cognates are the words that look like English, sound like English, and—hallelujah—mean the same thing as English. Nation, communication, participer, pollution… You say them, and students get them. Or do they?

Because here’s the twist: while cognates can fast-track recognition, build confidence, and turbo-charge reading and listening skills, they can also trip learners up, mislead them, or lull them into false fluency. Not all that glitters is gold—especially if it turns out to be a false friend in disguise.

This article dives into the power, the pitfalls, and the untapped potential of cognates. We’ll unpack why they work, when they fail, and how to teach them with more than just a passing nod. You’ll get research, strategy grids, and even a few false friends to watch out for.

So whether you’re new to teaching or seasoned in the MFL trenches, read on: it’s time to give cognates the spotlight they deserve.

What Are Cognates, and Why Do We Use Them?

Cognates are words in two different languages that look or sound similar and mean the same thing. For example, information in English and information in French are cognates. Teachers love using cognates because they help learners feel successful early on. Up to 30–40% of words in French GCSE lists have a cognate in English, so it’s tempting to highlight them often.

Table 1 – Percentage of English cognates in French, Spanish, Italian and German

Psycholinguists say that when we see a cognate, both our first language (L1) and second language (L2) “light up” in the brain. This speeds up word recognition. In terms of brain science, this effect is tied to how bilinguals store and retrieve words across both languages. According to the BIA+ model (Dijkstra & Van Heuven, 2002), word recognition happens in a shared network that handles both languages simultaneously, allowing for what’s called non-selective lexical access. In short, the brain doesn’t “switch off” one language when using the other—it activates both. Brain imaging studies (van Hell & Tanner, 2012) confirm this, showing that learners process cognates with less effort, especially in reading tasks, due to overlapping neural representations in areas like the left inferior frontal gyrus.

What Makes Cognates Helpful?

a. They speed up understanding

When we see a familiar-looking word in a new language, our brain recognises it faster. Costa et al. (2000) found that learners were 30% faster at recognising cognates than non-cognates in word recognition tasks. This is because both language systems in the brain activate together, allowing a kind of shortcut through the brain’s semantic network.

b. They support memory

Because cognates activate both L1 and L2 memory at the same time, they are more likely to stick. This is called co-activation, and it helps learners retrieve the word more easily later. Connectionist models of language processing (like those developed by McClelland & Elman, 1986) suggest that overlapping nodes in a neural network strengthen memory links through repeated exposure, making cognates more retrievable than unrelated new words.

c. They create early success

In beginner classrooms, recognising a word like chocolat or télévision can boost students’ confidence. This can increase motivation and encourage learners to keep going. From a psycholinguistic point of view, this early success taps into the brain’s reward system, reinforcing continued effort through a feedback loop of positive emotional response and reduced cognitive load.

Why Cognates Are Not Always Easy: The Hidden Challenges

Despite these advantages, recognising and using cognates isn’t always as easy as it seems. First, students often fall for false friends—words that look the same but don’t mean the same. For instance, actuellement in French means currently, not actually. These “traps” can mislead learners and cause misunderstandings.

Table 2 – 15 tricky false friends in French

English Word | French False Friend | Real Meaning in French
Actually | Actuellement | Currently
Library | Librairie | Bookshop
Coin | Coin | Corner
Sympathetic | Sympathique | Nice, friendly
College | Collège | Secondary school
Attend | Assister | To be present (not to help)
Pretend | Prétendre | To claim (not to act)
Sensible | Sensible | Sensitive (not sensible)
Name | Nom | Name
Fabric | Fabrique | Factory
Delay | Délai | Deadline / time limit
Injury | Injure | Insult
Large | Large | Wide
Resume | Résumer | To summarise
Eventually | Éventuellement | Possibly

Second, research shows that even true cognates aren’t always recognised correctly. A study by Otwinowska (2015) found that Polish learners of English only recognised 45% of potential cognates in a text, even at intermediate level. This shows that simply pointing them out once isn’t enough. Why? Because even though certain cognates might look and sound familiar in French or Spanish, students will only successfully recognise them as “safe matches” if they’ve built up a strong, reinforced mental representation of the word in their first language (L1) and done so in the same modality (reading, listening, etc.). Let’s unpack this with research:

  1. Input modality matters:
    According to Carroll (2017) and Otwinowska (2015), learners are far more likely to correctly recognise a cognate in reading if they’ve encountered the English equivalent in print. Conversely, if they’ve mostly heard a word orally (e.g., through TV or conversation), they’re better at recognising the cognate in listening tasks. The brain’s phonological and orthographic pathways are somewhat separate in early L2 development.
  2. Frequency builds strength:
    In connectionist models of language acquisition (e.g. McClelland & Elman, 1986), word recognition relies on the strength of neural activation pathways, which grow through repetition and context diversity. If a learner has read “information” in ten different articles, the concept and form become tightly linked. When they see information in French, their brain is more likely to light up with the correct match.
  3. Cross-modal mismatch leads to errors:
    Learners who hear a word often but have never seen it written may not recognise a cognate when reading—and vice versa. This is especially problematic in French, where spelling-sound mismatches are common. For example, a student who hears “accident” on the news might not recognise accident in French reading tasks if they’ve never seen the English version written down.
  4. Shallow L1 familiarity = higher error rate:
    Otwinowska (2015) also showed that learners with lower L1 vocabulary depth had up to 40% lower cognate recognition rates, even when the words were formally transparent. In other words, even a perfect formal match doesn’t guarantee transfer—semantic familiarity is key.

Third, hearing cognates in spoken language is even harder. In listening tasks, differences in pronunciation often mask the similarity. Carroll (2017) found that learners recognised only 28% of oral cognates they had previously identified in writing.

This is because phonological input relies heavily on auditory processing, and the brain must match incoming sounds to stored word forms across two languages. This can be especially difficult if the sound patterns differ significantly from those in the learner’s L1. Neural studies (Proverbio et al., 2004) show increased activation in the anterior cingulate cortex—a region linked to conflict monitoring—when learners process false or unfamiliar cognates, especially under time pressure.

Why Cognate Training Needs Time and Structure

Many teachers assume that students will naturally spot cognates, but research into strategy training shows the opposite. Learners need structured, repeated practice in identifying and using cognates across different skills—especially listening.

Research indicates that metacognitive strategy training—such as learning how to guess meaning from context or sound out similarities—must be taught explicitly over several weeks. Without this, students revert to shallow guessing or ignore difficult words.

Gu & Johnson (1996) found that learners who received at least 6 weeks of metacognitive strategy training showed significantly greater vocabulary retention and inference accuracy—including the ability to use cognates appropriately.

Macaro (2001) emphasised that lexical inferencing (the process of guessing meaning, often via cognates) is not an innate skill and requires explicit modelling, repeated practice, and metacognitive reflection across at least 5–6 weeks.

Otwinowska (2015) reported that effective cognate identification required multimodal exposure (reading, listening, speaking) over time, with structured support, particularly for learners with lower L1-L2 language distance.

Table 3 – Recommended duration of cognate-recognition training, based on research

Focus Area                            | Minimum Effective Duration | Research Source
Written cognate recognition (reading) | 4–6 weeks                  | Gu & Johnson (1996); Otwinowska (2015)
Oral cognate recognition (listening)  | 6–8 weeks                  | Carroll (2017); Macaro (2001)
Cross-skill integration               | 6+ weeks                   | Macaro (2001); Nation (2007)

Note: Most MFL classrooms do not offer this kind of long-term structured training, especially at KS3 and KS4. Cognates are often mentioned in passing, but not practised systematically.

Conclusions

Cognates can be a powerful teaching tool—when used with intention. They offer early wins, speed up recognition, and make learners feel more confident. But they can also lead to confusion, mislearning, and overconfidence if used without care.

To make the most of cognates:

  • Teach the difference between true and false cognates
  • Train reading and listening strategies that help students spot cognates both in print and in speech. Listening with transcripts can be particularly effective for building oral recognition of cognates.
  • Use retrieval and metacognition (like confidence scales) to track which ones stick
  • Don’t assume students “just see them”—train them to do it
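One low-tech way to act on the retrieval-and-metacognition bullet is to log each retrieval attempt alongside the learner's self-rated confidence, then flag words that are answered wrongly with high confidence—the classic false-friend trap. A minimal sketch, with invented sample data and thresholds:

```python
from collections import defaultdict

# Each record: (word, answered_correctly, confidence on a 1-5 scale).
# Sample data invented for illustration.
attempts = [
    ("important", True, 5),
    ("librairie", False, 5),   # false friend: means "bookshop", not "library"
    ("librairie", False, 4),
    ("sensible", False, 5),    # false friend: means "sensitive"
    ("table", True, 4),
]

stats = defaultdict(lambda: {"right": 0, "wrong": 0, "conf": []})
for word, correct, conf in attempts:
    stats[word]["right" if correct else "wrong"] += 1
    stats[word]["conf"].append(conf)

# Flag confidently-wrong items: mostly incorrect yet rated 4+ on average.
for word, s in stats.items():
    avg_conf = sum(s["conf"]) / len(s["conf"])
    if s["wrong"] > s["right"] and avg_conf >= 4:
        print(f"review as possible false friend: {word}")
```

The high-confidence-but-wrong pattern is the signature of an unexamined cognate assumption, so these are the words worth re-teaching explicitly.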

Used well, cognates help unlock the language. Used poorly, they can block real understanding.

Teachers cannot expect students to recognise cognates easily, least of all under the cognitive demands of an exam, without substantive and systematic training in cognate-recognition strategies.

Key statistics and research findings every teacher should bear in mind

  • Students recognise cognates around 30% faster than non-cognates (Costa et al., 2000)
  • In practice, learners recognise only about 45% of true cognates in texts (Otwinowska, 2015)
  • In listening, recognition drops to just 28% (Carroll, 2017)
  • A strategy-training programme in cognate recognition needs to last at least 6 weeks
  • Students from lower socio-economic backgrounds are less likely to recognise cognates, because these tend to be Latinate words used in higher-register English

References (Selected)

  • Carroll, G. (2017). Cognate recognition in L2 listening. Journal of French Language Studies.
  • Costa, A., Caramazza, A., & Sebastián-Gallés, N. (2000). The cognate facilitation effect. Journal of Experimental Psychology.
  • Dijkstra, T., & Van Heuven, W. J. B. (2002). The architecture of the bilingual word recognition system: From identification to decision. Bilingualism: Language and Cognition.
  • Gu, Y., & Johnson, R. K. (1996). Vocabulary learning strategies and language learning outcomes. Language Learning.
  • Macaro, E. (2001). Learning strategies in foreign and second language classrooms.
  • McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology.
  • Nation, I. S. P. (2007). The four strands. Innovation in Language Learning and Teaching.
  • Otwinowska, A. (2015). Cognate vocabulary in language acquisition and use. Multilingual Matters.
  • Proverbio, A. M., et al. (2004). Electrophysiological analysis of semantic processing in bilinguals. Psychophysiology.
  • van Hell, J. G., & Tanner, D. (2012). Neural activity during cognate processing in L2 learners. Language Learning.