by Gianfranco Conti, PhD. Co-author of 'The Language Teacher toolkit', 'Breaking the sound barrier: teaching learners how to listen', 'Memory: what every teacher should know' and of the 'Sentence Builders' book series. Winner of the 2015 TES best resource contributor award, founder and CEO of www.language-gym.com, co-founder of www.sentencebuilders.com and creator of the E.P.I. approach.
For the past decade or so, retrieval practice has enjoyed near-mythical status in education circles, often presented, somewhat uncritically, as a universal good: test more, retrieve more, and learning will automatically follow. And while the underlying principle is undoubtedly sound, recent research and, in my experience, classroom reality suggest that the picture is rather more nuanced than the slogan would have us believe.
Let us be clear from the outset: retrieval practice DOES work. A robust body of research shows that retrieving information strengthens memory more effectively than re-exposure alone (Roediger & Karpicke, 2006; Karpicke & Blunt, 2011), and in the context of vocabulary learning this has been repeatedly confirmed, with retrieval supporting long-term retention when used appropriately (Nation, 2013; Kang, 2016). However, what more recent syntheses in cognitive psychology and its application to SLA are beginning to emphasise is that retrieval is highly condition-dependent, and that when those conditions are not met, its benefits may be attenuated, or, in some cases, even reversed.
The following three variables, in particular, appear to be critical:
First, success rate: retrieval needs to be largely successful—typically in the region of 60–80%—for it to strengthen memory traces effectively, because repeated failure leads not to learning but to guesswork or disengagement (Karpicke, 2017; Rowland, 2014).
Second, prior encoding: the material must have been sufficiently processed before retrieval is attempted, since asking learners to retrieve poorly encoded information places excessive demands on working memory and often results in the reinforcement of incorrect hypotheses (Sweller, 2010; Baddeley, 2000). Hence the importance of input processing and effective elaborative rehearsal prior to retrieval, a tenet of the EPI methodology, which occurs in the MAR segment of the MARSEARS sequence.
Third, feedback quality: errors must be corrected promptly and clearly, otherwise retrieval risks consolidating inaccuracies rather than strengthening correct representations (Butler & Roediger, 2008).
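The success-rate condition above can be made concrete with a minimal illustrative sketch. The function names are my own, purely hypothetical; the 60–80% band is the one cited in the text (Karpicke, 2017; Rowland, 2014):

```python
def retrieval_success_rate(attempts):
    """Proportion of successful retrieval attempts (True = correct)."""
    return sum(attempts) / len(attempts)

def in_productive_band(rate, low=0.60, high=0.80):
    """True if the observed success rate sits in the band where retrieval
    is effortful but still largely successful (thresholds from the article)."""
    return low <= rate <= high

# A learner's last ten retrieval attempts on a vocabulary set:
quiz = [True, True, False, True, True, False, True, True, True, False]
rate = retrieval_success_rate(quiz)
print(f"success rate: {rate:.0%}, productive: {in_productive_band(rate)}")
```

A teacher tracking this informally would interpret a rate well below the band as a signal to return to input and processing work, and a rate near 100% as a signal to increase spacing or difficulty.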
And here, if one may pause for a moment, lies the pedagogical crux: in beginner and mixed-ability classrooms, where learners’ lexical and grammatical representations are still fragile and where processing capacity is easily overwhelmed, these conditions are not always present, and in my observation this is precisely when retrieval tasks become less a tool for consolidation and more an exercise in frustration. Is it not reasonable, therefore, to question whether what we are witnessing in such cases is retrieval at all, or merely the appearance of it?
From a cognitive load perspective, the issue becomes even clearer. Working memory is limited (Baddeley, 2000), and when learners are asked to retrieve language that has not yet been automatised, they must simultaneously search for forms, map them to meaning, and assemble them into coherent output. This operation, for novices, is often simply too demanding. Under such conditions, retrieval may not only fail to support learning but may actually impair encoding, because attentional resources are diverted away from processing input and towards struggling with output (Sweller, 2010; DeKeyser, 2007).
So what does this mean for classroom practice? Not, certainly, that retrieval should be abandoned—far from it—but rather that it should be repositioned within a carefully sequenced instructional framework, one in which learners are first provided with rich, comprehensible input, then guided through structured processing activities that stabilise form–meaning connections, and only then asked to retrieve and produce the language in question. In other words:
input → processing → rehearsal → retrieval
not the other way round, as so often happens.
This is not a trivial adjustment. It represents a shift from viewing retrieval as the engine of learning to understanding it as a powerful consolidator of learning, effective precisely because it strengthens representations that are already in place, rather than creating them ex nihilo—an assumption which, though rarely stated explicitly, often underpins premature retrieval tasks.
In conclusion, retrieval practice remains one of the most valuable tools at our disposal, but like all powerful tools it must be used with precision; and if there is one lesson emerging from the most recent research—and, indeed, from careful classroom observation—it is that timing matters as much as technique, because learning is not simply a matter of doing the right thing, but of doing it at the right moment.
References
Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences.
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition.
Carpenter, S. K. (2022). Retrieval practice. In J. Dunlosky & K. A. Rawson (Eds.), The Cambridge handbook of cognition and education (2nd ed., pp. 347–369). Cambridge University Press.
DeKeyser, R. (2007). Practice in a second language: Perspectives from applied linguistics and cognitive psychology.
Kang, S. H. K. (2016). Spaced repetition promotes efficient and effective learning. Policy Insights from the Behavioral and Brain Sciences.
Karpicke, J. D. (2017). Retrieval-based learning: A decade of progress.
Nakatsukasa, K. (2023). Retrieval practice in second language vocabulary learning: A research synthesis. Language Teaching Research, 27(4), 987–1008.
Nation, I. S. P. (2013). Learning vocabulary in another language.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science.
Rowland, C. A. (2014). The effect of testing versus restudy on retention: A meta-analytic review. Psychological Bulletin.
Sweller, J. (2010). Cognitive load theory: Recent theoretical advances.
Sentence builders—when used as carefully engineered, input-rich, reusable frames rather than mere substitution tables—offer a powerful convergence of cognitive and affective benefits; and, in my experience, when they are embedded within a coherent input→processing→production sequence rather than deployed as isolated artefacts, they become not just helpful supports but central drivers of acquisition, even if this runs counter to the somewhat fashionable preference for less structured approaches.
Before proceeding, however, I strongly believe it is important—especially for those familiar with my work—to dispel a common misconception: in EPI, sentence builders are not the method. They are, rather, a tool amongst several used to model language, to encourage noticing of key language features, to scaffold elaborative rehearsal, and to assist initial production; and while they play a crucial role within the broader architecture, they must be understood as one cog in the MARSEARS sequence, not the engine itself.
Indeed, and this is something I have observed repeatedly in classrooms where EPI is implemented with fidelity, the language contained within a sentence builder does not constitute the totality of what learners encounter, because during the receptive phases learners are exposed to aural and written input through narrow listening and narrow reading, which extends, enriches, and consolidates the core language, thereby ensuring that the builder functions as a springboard rather than a boundary—and is it not precisely this interplay between structured input and guided output that drives acquisition forward?
Cognitive benefits
1. Reduces cognitive load, frees up processing capacity
Sentence builders externalise part of the linguistic load (lexis, morphology, word order), which—given the limits of working memory (Baddeley, 2000; Sweller, 2010)—allows learners to focus on meaning-making and message construction rather than retrieval from scratch, and in my observation this is precisely what enables lower-attaining learners to remain cognitively engaged rather than overwhelmed, especially when the linguistic demands would otherwise exceed their processing capacity within seconds.
Example: Instead of generating “I went to the cinema with my friends because…” from nothing, learners select and combine pre-structured chunks → less overload, more successful processing.
Research: Sweller (2010); Baddeley (2000); Paas & van Merriënboer (1994)
2. Strengthens form–meaning connections
By aligning chunks clearly (e.g. je vais + infinitive), sentence builders help learners establish accurate form–meaning connections, reducing reliance on faulty heuristics (VanPatten, 2015), and—this is key, and often underestimated—preventing the kind of mislearning that occurs when learners construct meaning from partial or misleading cues.
Example: je vais regarder consistently mapped to “I am going to watch” across multiple sentences → stable mapping.
Research: VanPatten (2015); Wong (2004)
3. Promotes chunking and automatisation
Frequent exposure to patterned language encourages chunking (Ellis, 2002), and with repeated retrieval, these chunks become automatised (DeKeyser, 2007), which in turn reduces processing time and increases fluency; and one might say, perhaps a little unfashionably, that learners begin to operate through formulaic sequences which, far from being a limitation, constitute the very substrate of fluent language use.
Example: I would like to + infinitive + noun or prepositional phrase becomes a single retrievable unit rather than being assembled word by word.
4. Supports recycling, spacing and retrieval practice
Sentence builders lend themselves naturally to recycling across lessons, supporting spacing effects (Cepeda et al., 2006) and retrieval practice (Roediger & Karpicke, 2006), both of which are strongly linked to durable learning; and in my experience, when the same builder is revisited across modalities—listening, reading, speaking, writing—the gains are not merely additive but multiplicative.
Example:
Monday: comprehension
Wednesday: structured speaking
Friday: written adaptation
Research: Cepeda et al. (2006); Roediger & Karpicke (2006); Kang (2016)
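The recycling pattern above can be sketched as a tiny scheduler. This is illustrative only: the modalities and day gaps are my own choices, echoing the Monday/Wednesday/Friday example, not a prescribed EPI algorithm:

```python
from datetime import date, timedelta

# Modalities in the order a builder is revisited (from the example above).
MODALITIES = ["comprehension", "structured speaking", "written adaptation"]

def recycling_schedule(start, gaps=(0, 2, 4)):
    """Map each revisit of a sentence builder to a date and a modality.

    `gaps` are days after `start`; (0, 2, 4) gives Mon/Wed/Fri
    when `start` is a Monday.
    """
    return [(start + timedelta(days=g), m) for g, m in zip(gaps, MODALITIES)]

for day, task in recycling_schedule(date(2024, 9, 2)):  # 2024-09-02 is a Monday
    print(day.strftime("%A"), "->", task)
```

Widening the gaps on later cycles (e.g. revisiting the same builder one and then three weeks later) would exploit the spacing effect the paragraph cites.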
5. Bridges receptive processing and controlled production
They provide a bridge from receptive processing to controlled production, ensuring that output is not random but principled and success-oriented, which aligns with skill acquisition theory (DeKeyser, 2007), where guided practice precedes freer use; and in my observation, skipping this stage is one of the most common causes of fragile, error-prone production. Sentence builders are also valuable tools in the elaborative rehearsal phase which should always precede retrieval, i.e. the phase in which students carry out deep-processing activities such as sentence puzzles, gap-fills, odd one out and word substitution with the assistance of sentence builders.
Example: Learners produce accurate sentences before being asked to improvise.
Research: DeKeyser (2007); Anderson (1982)
6. Enhances noticing of patterns (lexicogrammar)
By visually organising language, sentence builders make patterns salient, supporting noticing (Schmidt, 1990) and helping learners internalise colligations and collocations—what Halliday would call lexicogrammar—even if, to the learner, this remains largely implicit and only gradually becomes explicit through repeated exposure.
Example: Seeing play + sport (no article) vs play the + instrument repeatedly.
Affective benefits
1. Lowers the affective filter
Sentence builders lower the affective filter (Krashen, 1985) by giving learners a clear pathway to success; and in my experience, even students who would normally remain silent are willing to attempt production when they feel they are not operating in a vacuum but within a supportive structure.
Research: Krashen (1985); Horwitz et al. (1986)
2. Increases willingness to communicate
Because learners are not starting from zero, they are more likely to take risks and speak, which is essential for developing fluency (MacIntyre et al., 1998), even if the language is initially scaffolded rather than spontaneous in the purist sense.
Research: MacIntyre et al. (1998); Dörnyei (2005)
3. Creates a sense of control and clarity
Ambiguity is reduced: learners know what is expected, what is possible, and how to succeed, which—though it may sound somewhat antiquated—restores a degree of didactic transparency often absent in less structured approaches. A recent study by Kate Trafford (here) found that the use of sentence builders was highly instrumental in tripling her GCSE uptake. The main reason the students liked sentence builders was the confidence they gained from the control and clarity these tools provide.
4. Supports differentiation and inclusivity
Stronger learners can extend and manipulate, weaker learners can follow and succeed, and in my observation this dual accessibility is one of the few ways to maintain both challenge and inclusivity without fragmenting the class into separate tracks.
Research: Tomlinson (2014); Vygotsky (1978)
5. Builds momentum and motivation through success
Success breeds motivation (Dörnyei, 2001), and sentence builders provide frequent, visible success moments which gradually reshape learner self-perception—I can’t do languages becomes I can actually say things, and that shift, once achieved, is remarkably powerful.
Research: Dörnyei (2001); Bandura (1997)
The perils of using sentence builders badly
Let us not pretend that sentence builders are a panacea, because in my experience they can be misused in ways that are not merely ineffective but actively counterproductive, particularly when teachers—often with the best of intentions—shortcut the processing phase and move too rapidly into retrieval or production, thereby forcing learners to produce language they have not yet sufficiently internalised.
Common pitfalls include:
Going to retrieval too soon → learners guess rather than retrieve
Overcrowded sentence builders → excessive options increase cognitive load rather than reduce it
Lack of recycling → structures remain inert and unautomatised
Overreliance → learners become dependent if scaffolds are not gradually withdrawn
Poor sequencing → jumping from exposure to free production without intermediate stages
And here one might reasonably ask: if a tool designed to reduce cognitive load ends up increasing it, have we not misunderstood its very purpose?
Addressing the criticisms
Sentence builders are often criticised on several grounds, and it is worth addressing these explicitly rather than dismissing them, because some contain a kernel of truth even if the conclusion drawn is, in my view, misguided.
“They limit creativity”
This criticism assumes that creativity precedes control, whereas the evidence suggests the opposite: learners require a threshold level of automatised language before meaningful creativity is possible (DeKeyser, 2007; Skehan, 1998); in my experience, asking beginners to be creative without sufficient linguistic resources results not in creativity but in silence or error. Moreover, as noted earlier, the language within a sentence builder does not constitute the totality of what learners encounter: during the receptive phases, narrow listening, narrow reading and any other texts the teacher deems useful extend, enrich, and consolidate the core language, ensuring that the builder functions as a springboard rather than a boundary.
“They encourage rote learning”
All learning involves some degree of memorisation, and the question is not whether learners memorise, but what they memorise and how it is processed; sentence builders, when used properly, promote meaningful repetition, not mindless parroting.
Research: Nation (2013); Baddeley (2000)
“They are not communicative”
On the contrary, they enable communication earlier, because they provide the linguistic means to express meaning, even if within constraints; and is it not preferable for learners to communicate accurately within a frame than inaccurately without one?
Research: Long (1996); Ellis (2003)
“Students become dependent on them”
Only if scaffolding is not withdrawn; properly used, sentence builders are gradually faded as learners internalise patterns, much like training wheels on a bicycle—useful initially, unnecessary later.
Research: Vygotsky (1978); Wood, Bruner & Ross (1976)
“They make learning boring”
Only if the teacher doesn’t know how to use them in an engaging way or always uses them the same way.
Conclusions
Sentence builders work because they sit at the intersection of cognitive efficiency and affective support, reducing overload while increasing confidence, stabilising form–meaning mappings while enabling structured production; and in my experience, when they are embedded within a principled instructional sequence—rich input, guided processing, scaffolded output, and gradual release—they do not constrain learners but rather enable them to say more, sooner, and with greater accuracy.
The real question, therefore, is not whether sentence builders should be used, but whether they are being used well, because the difference between effective scaffolding and counterproductive crutching lies not in the tool itself but in its implementation, and that, ultimately, is where professional judgement must prevail. This shines through very evidently in the above-mentioned study by Kate Trafford, which details a flawless and engaging implementation of EPI in which sentence builders are used with great ingenuity and creativity. The result, as she writes in the conclusion of her MA dissertation:
“Pupils expressed with enthusiasm their opinions on the use of SBs, particularly in relation to visual organisation, accessibility and clarity of sentence structure. Their opinions directly reflect literature surrounding grammar being taught in context, enjoying being able to use the language to communicate as posited by CLT and the importance of lowering stress on the working and intermediate memory as per Krashen’s comprehensible input theory.”
References
Anderson, J. R. (1982). Acquisition of cognitive skill.
Baddeley, A. (2000). The episodic buffer.
Bandura, A. (1997). Self-efficacy.
Bjork, R., & Bjork, E. (2011). Making things hard on yourself, but in a good way.
Cepeda, N. et al. (2006). Distributed practice in verbal recall tasks.
DeKeyser, R. (2007). Practice in a second language.
Dörnyei, Z. (2001). Motivational strategies in the language classroom.
Ellis, N. (2002). Frequency effects in language processing.
Ellis, R. (2003). Task-based language learning.
Halliday, M. (1994). An introduction to functional grammar.
Kang, S. (2016). Spaced repetition promotes efficient learning.
Krashen, S. (1985). The input hypothesis.
Long, M. (1996). The role of the linguistic environment.
MacIntyre, P. et al. (1998). Willingness to communicate.
Nation, I. S. P. (2013). Learning vocabulary in another language.
Paas, F., & van Merriënboer, J. (1994). Instructional control of cognitive load.
Roediger, H., & Karpicke, J. (2006). Test-enhanced learning.
Rosenshine, B. (2012). Principles of instruction.
Schmidt, R. (1990). The role of consciousness in SLA.
Skehan, P. (1998). A cognitive approach to language learning.
Sweller, J. (2010). Cognitive load theory.
Tomlinson, C. (2014). The differentiated classroom.
VanPatten, B. (2015). Input processing.
Vygotsky, L. (1978). Mind in society.
Wray, A. (2002). Formulaic language and the lexicon.
If I had a pound for every time I’ve heard “you must stay in the target language 100% of the time”, I’d probably be writing this from a beach somewhere rather than my desk; and yet, in my experience, as appealing as such slogans may appear—succinct, emphatic, and seemingly grounded in common sense—they tend, upon closer inspection, to unravel under the weight of empirical scrutiny.
Let me be clear from the outset: maximising target language (TL) use is pedagogically sound, and in my observation this is borne out both in the literature and in the classroom. A substantial body of research shows that frequent, meaningful exposure to the TL is essential for acquisition (Krashen, 1985; Nation, 2013; Ellis, 2015), since learners require rich input, repeated encounters, and opportunities to process language in context, all of which, when orchestrated judiciously, contribute to the gradual entrenchment of form–meaning mappings. In my own work with the EPI approach, I would argue that input is the engine of acquisition—and therefore the TL must dominate classroom discourse.
However—and this is the crucial point—“maximising TL” is not the same as “TL-only at all costs.”
In beginner, mixed-ability classrooms where all learners share the same L1, rigid TL-only policies can, in my experience, overwhelm working memory, reduce comprehension, increase anxiety, and ultimately disengage a significant proportion of learners, particularly when the linguistic demands exceed their current processing capacity and when no mediating support is available to stabilise meaning; and if that is the case, one might reasonably ask, what exactly are we maximising—exposure or confusion? In such contexts, the L1 is not the enemy of acquisition; it is a cognitive tool—a scaffold, a shortcut, and, dare one say (to use a slightly antiquated turn of phrase), a propedeutic aid.
Let’s unpack this properly, with the research in hand.
1. Comprehensibility: The Non-Negotiable Condition for Learning
One of the most robust findings in SLA is that input must be understood to be useful, a principle so well-established that it borders on the axiomatic. Nation (2013) and Hu & Nation (2000) suggest that learners need to know around 95–98% of the words in a text to process it effectively, which, in my experience, is rarely the case in beginner mixed-ability classrooms unless the input has been very carefully engineered.
In beginner classrooms, TL-only instruction often pushes learners well below this threshold, with the predictable consequence that comprehension falters and processing becomes superficial, fragmented, and at times illusory, as learners cling to isolated lexical items while failing to construct a coherent representation of meaning. The result?
Guessing
Shallow processing
Cognitive overload
Disengagement
A brief L1 clarification—ten seconds, no more—can restore comprehensibility and allow learners to actually process the TL input that follows, thereby ensuring that subsequent exposure is not merely auditory noise but cognitively tractable material.
As Nation (2013) reminds us, time spent not understanding is time not learning—and, if one may be permitted a rhetorical flourish, what conceivable virtue is there in incomprehension?
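The 95–98% threshold above can be illustrated with a minimal coverage check. This is a sketch under simple assumptions (a naive tokeniser, a flat set of known words); the threshold itself is the one reported by Hu & Nation (2000), and the example sentence and word list are mine:

```python
import re

def lexical_coverage(text, known_words):
    """Proportion of running words in `text` the learner already knows."""
    tokens = re.findall(r"[a-zàâçéèêëîïôûùüÿ'-]+", text.lower())
    known = sum(1 for t in tokens if t in known_words)
    return known / len(tokens)

known = {"je", "vais", "au", "cinéma", "avec", "mes", "amis", "le", "week-end"}
text = "Je vais souvent au cinéma avec mes amis le week-end"
print(f"coverage: {lexical_coverage(text, known):.0%}")
```

Here a single unknown word in a ten-word sentence already drops coverage to 90%, below the reported threshold for effective processing: a quick argument for pre-teaching or a brief L1 gloss before exposure.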
2. Cognitive Load
According to Cognitive Load Theory (Sweller, 2010), working memory is severely limited—typically handling around 4–6 elements at a time (Baddeley, 2000)—a constraint which, though often acknowledged in theory, is in my observation routinely underestimated in practice, particularly when teachers attempt to sustain extended TL-only discourse with novice learners.
Beginner learners are already juggling:
New phonology
New vocabulary
New syntax
When we insist on delivering instructions and explanations exclusively in the TL, we often add extraneous cognitive load, that is to say, processing which does not directly contribute to schema construction but instead taxes limited attentional resources, thereby impeding rather than facilitating learning; and is it not somewhat paradoxical that, in striving to immerse learners in the TL, we may in fact be diminishing their capacity to process it?
The irony is stark: in trying to increase exposure, we may actually reduce effective processing.
A well-timed L1 explanation reduces this extraneous load, allowing learners to allocate their limited cognitive resources to what actually matters: processing the TL input—a point which, though seemingly self-evident, bears reiteration in the current climate of methodological absolutism.
3. Preventing Mislearning
VanPatten’s Input Processing theory (2015) shows that learners rely heavily on default strategies, such as prioritising content words and neglecting morphological markers, particularly when processing capacity is strained; and in my experience, this tendency is exacerbated in TL-only environments where meaning is insufficiently anchored.
And here’s the problem: 👉 Incorrectly learned language is harder to unlearn than correctly learned language is to acquire.
When comprehension is weak, learners generate incorrect form–meaning mappings, which, once established, may fossilise and resist subsequent correction, thereby creating a pedagogical impediment that is far more onerous to address than the initial misunderstanding.
Strategic L1 use can:
Clarify meaning precisely
Prevent faulty hypotheses
Stabilise early representations
In other words, it acts as a quality control mechanism for input processing—an intervention not of indulgence but of prudence.
4. Efficiency and Time-on-Task
Macaro (2001) and Turnbull & Dailey-O’Cain (2009) both highlight that judicious L1 use increases efficiency, a finding which aligns closely with my own observation that time squandered on protracted TL explanations often yields diminishing returns, particularly in heterogeneous classrooms.
A concept that takes two minutes to explain in the TL may take ten seconds in the L1, and while some may argue that those two minutes constitute valuable exposure, one must ask whether such exposure, if only partially understood, is in fact pedagogically efficacious.
This is not about laziness—it is about optimising instructional time, or, to employ a somewhat recherché expression, ensuring economy of cognitive effort.
Every minute saved on clarification is a minute gained for:
Input flood
Processing tasks
Retrieval practice
Ironically, less rigid TL use can result in more total TL exposure—a conclusion which, though counterintuitive to some, is well supported by both research and classroom praxis.
5. Affective Factors: Keeping Students in the Game
Krashen’s Affective Filter Hypothesis (1985) reminds us that anxiety, confusion, and low self-efficacy can block acquisition, and in my experience this is particularly salient in mixed-ability classrooms where weaker learners, unable to access meaning, may withdraw cognitively even if they remain physically present.
TL-only approaches often:
Benefit the top 20%
Marginalise the rest
Lower-attaining learners, unable to follow, disengage—not because they are unwilling, but because they are locked out of meaning, and once this disengagement sets in, it can be remarkably difficult to reverse, especially if learners begin to perceive the subject as impenetrable.
Research by Macaro (2005) and Levine (2011) shows that appropriate L1 use can reduce anxiety and increase participation, particularly in lower-proficiency learners, thereby lowering the affective filter and facilitating engagement with the TL input.
6. The Power of Cross-Linguistic Comparison
Ellis (2006) and Hall & Cook (2012) highlight the value of explicit comparison between L1 and L2, a process which, in my observation, not only enhances noticing but also accelerates the consolidation of form–meaning relationships, particularly when learners are guided to attend to subtle differences that might otherwise escape their attention.
The L1 provides a powerful reference point for:
Noticing differences
Avoiding negative transfer
Deepening understanding
Example:
Mi piace il calcio → “to me pleases football”
Without L1 mediation, this structure can remain opaque for a considerable period, whereas a brief contrastive explanation can render it immediately intelligible—mutatis mutandis, the same principle applies across numerous grammatical structures.
Used well, the L1 is not a crutch—it is a cognitive lever.
7. Code-Switching and the Bilingual Brain
From a psycholinguistic perspective, bilinguals do not “turn off” one language to use another; rather, research shows that both languages are co-activated in the brain (Kroll, Bobb & Wodniecka, 2006; Green & Abutalebi, 2013), a phenomenon which has significant implications for classroom practice, even if it is sometimes overlooked in methodological debates.
This means:
The L1 is always present
Learners naturally draw on it
Suppressing it entirely is neither realistic nor cognitively optimal
Strategic code-switching:
Reduces processing effort
Facilitates meaning-making
Supports lexical access
In other words, using the L1 aligns with how the brain actually works, and to ignore this is, arguably, to disregard a fundamental aspect of human cognition.
8. The Mixed-Ability Reality
Let us not indulge in pedagogical idealisation: in real classrooms, ability is not evenly distributed, and any approach that fails to account for this heterogeneity risks privileging a minority at the expense of the majority.
TL-only instruction often results in:
Stronger learners inferring meaning
Weaker learners copying or withdrawing
In my experience, strategic L1 use allows us to maintain inclusivity while preserving cognitive challenge, ensuring that all learners can access meaning without diluting the intellectual demands of the task, which, after all, remains firmly rooted in the TL.
So Where Does This Leave Us?
The research does not support:
❌ TL-only dogma
❌ Excessive reliance on L1
Instead, it supports a principled middle ground, one which, though less rhetorically appealing than absolutist positions, is far more consistent with both theory and practice:
Maximise meaningful TL input
Use L1 strategically and sparingly for:
Instructions
Clarification
Contrastive explanation
Repair
Conclusion
The real question is not:
“Should I use the TL or the L1?”
But rather—if I may phrase it thus—
“At this precise moment, what will maximise comprehension, minimise cognitive overload, and lead to better processing of the TL?”
If the answer is ten seconds of L1, then refusing to use it is not principled—it is ideological; and ideology, as we know, is a very poor substitute for learning. Whenever a staunch advocate of TL-only gives me their spiel as to why it is so key that the teacher speak, and teaching resources be, solely or 99% in the TL, I always ask the very same question: is there a chance that even one student in the class may be left behind? If so, then it is unethical. Period.
The L1 should be used as pedagogical capital, at the right time, in the right way, in order to scaffold TL use. In EPI, for instance, this is achieved by gradually phasing out the L1 after a substantive input-to-output sequence in which English is not spoken but helps keep input processing comprehensible and retrieval feasible.
Let me start with a slightly uncomfortable truth: in most classrooms, error is treated as something to minimise, avoid, or at best tolerate briefly before rushing to the “correct answer”. And yet, as I aim to show below, a growing body of research suggests something far more provocative: students don’t need to get the answer right for retrieval practice to work. In fact, getting it wrong—under the right conditions—can make learning stronger (Kornell, Hays & Bjork, 2009; Grimaldi & Karpicke, 2012; Pan, Hutter & Rickard, 2020).
Whilst this is true, and this is crucial, please note: it’s not the mistake itself that does the learning; it’s what the mistake triggers in the brain.
The Illusion That’s Holding Us Back
Many of us implicitly believe that retrieval practice only works if students successfully recall the answer. It sounds logical, doesn’t it? If nothing is retrieved, nothing is strengthened… right? However, research suggests that this may be wrong. What appears to matter is not success, but effortful retrieval followed by feedback (Roediger & Karpicke, 2006; Metcalfe, 2017).
What’s Actually Going On Under the Hood
When a learner attempts to retrieve something—even unsuccessfully—three powerful processes kick in.
1. The Brain Starts Searching in the Right Place
Even a failed attempt activates related knowledge.
For example:
You ask: “How do you say ‘I used to go’ in Spanish?”
A student says: “Yo fui… no… iba…?”
They haven’t got it right, but they’ve activated:
Past tense forms
First-person verb morphology
The semantic field of habitual action
When you now reveal “solía ir”, it doesn’t land in a vacuum—it attaches to an already activated network (Kornell et al., 2009).
2. Prediction Error Supercharges Learning
The moment the correct answer appears, the brain detects a mismatch:
“What I thought ≠ what is correct”
That gap—what cognitive scientists call prediction error—is incredibly powerful for memory encoding, by virtue of the cognitive comparison it elicits (Metcalfe, 2017; Pan et al., 2020).
Example:
Student guesses: “Je suis allé souvent” for “I used to go”
You reveal: “J’allais”
That discrepancy creates a kind of cognitive jolt:
“Ah! So it’s imperfect, not passé composé.”
According to a growing body of research, that moment seems to stick far more than passive exposure ever could.
3. Attention Is Sharpened (Learned Attention in Action)
After a failed attempt, students pay closer attention to the correct form.
Compare:
Scenario A (no retrieval):
Teacher: “Today we learn ‘j’allais’.”
Student: passively nods
Scenario B (failed retrieval first):
Student struggles → guesses wrong
Teacher reveals answer
Now the student is thinking:
“Why was I wrong?”
“What did I miss?”
That’s deep processing, not surface exposure (Schmidt, 1990; Metcalfe, 2017).
But Here’s Where Most Classrooms Get It Wrong
This doesn’t mean:
“Let students guess wildly and hope for the best.”
Because unsuccessful retrieval only works under specific conditions. What you do with the mismatch between the correct answer and the learner’s wrong one is of the essence if the prediction error is to lead to meaningful learning.
Example of Poor Practice (Too Much Guessing)
Teacher:
“Guess what ‘der Schmetterling’ means.”
Students:
“Dog?”
“Car?”
“Tree?”
This is not retrieval; this is random guessing. Learning is minimal because there is:
No prior knowledge activated
No meaningful network search
No learning advantage (Grimaldi & Karpicke, 2012)
Example of Effective Unsuccessful Retrieval
Students have already:
Heard “der Schmetterling fliegt” several times
Seen it in a sentence builder
Processed it in listening
Now you ask:
“What was ‘butterfly’ again?”
Student:
“Der… something… flug… no…”
Then you reveal:
der Schmetterling
Now the conditions are right:
Prior exposure
Effortful search
Immediate feedback
That’s where the magic happens (Kornell et al., 2009; Pan et al., 2020).
Classroom Examples You Can Use Tomorrow
1. Pre-question before input (the “productive failure” move)
Before a listening:
“How do you think French expresses ‘I used to play’?”
Students attempt:
“Je jouais?” / “J’ai joué?”
Then they hear it multiple times in context.
Result: deeper encoding than if you had just told them first (Kornell et al., 2009).
2. Sentence Builder Recall with Deliberate Struggle
After modelling:
Cover part of the sentence builder
Prompt: “Say: ‘At the weekend I used to go to the cinema’”
Student:
“Le week-end… je… vais… non… j’allais au cinéma?”
Even if imperfect, the attempt activates structure → correction sticks (DeKeyser, 2007).
3. Narrow Listening with Retrieval Pauses
Play: “Quand j’étais petit, j’allais au parc…”
Pause: “What verb did you just hear for ‘I used to go’?”
That loop is where learning consolidates (Roediger & Karpicke, 2006; Metcalfe, 2017).
Conclusion
Retrieval practice does not require success. It requires effort, prior exposure, and feedback. Or, more provocatively: It’s not getting it right that builds memory—it’s trying, failing, and then being corrected that does the real work (Kornell et al., 2009; Pan et al., 2020). If your students are always getting answers right immediately, you may not be strengthening memory at all. You may just be making performance look good in the moment. And as we both know, in language learning:
Performance is not learning (Bjork & Bjork, 2011).
This guest post by Céline Courenq, Head of Languages at Patana International School in Bangkok, explores how one school moved away from “covering” content to achieving genuine sentence control through the Extensive Processing Instruction (EPI) framework. It is an essential read for any language teacher—and particularly those using EPI—because it provides a transparent, “at-scale” blueprint for moving from theory to classroom reality. Why it is so useful:
Evidence of Impact: It documents how a shift from cognitive overload to the MARSEARS cycle transformed student confidence and fluency.
Practical Implementation: Learn how to restructure units around communicative outcomes and “non-negotiables” rather than traditional topic lists.
Resource Revolution: Discover how to turn Knowledge Organisers from passive lists into active retrieval tools for phonics and sentence building.
Curriculum Resilience: It offers strategies for maintaining high standards and consistent progress even amidst high student turnover or staffing changes.
I want to thank Céline for this great contribution to the Language Gym blog and for being a staunch advocate of the EPI approach. Let me also congratulate her on having just obtained the EPI accreditation, graduating with a well-deserved DISTINCTION grade.
Implementing EPI at scale
Setting the scene
For several years, our KS3 curriculum looked ambitious. Students met a great deal of grammar early, we moved quickly through content, and assessments often included vocabulary and structures that students had not yet fully secured.
By the end of Year 9, however, something was clearly not right. Students had encountered a lot of language, but they could not reliably use it. Writing frequently relied on translation from English. Pronunciation was insecure, particularly in French, which then affected listening and speaking, as students struggled to connect written forms to sound. Some languages masked this slightly because pronunciation is more transparent, but the underlying issue was the same across languages: students lacked sentence control. For us, this reinforced the EPI principle that listening and reading should serve as primary modelling mechanisms rather than comprehension checks.
Students were encountering too much language too quickly for it to transfer into long-term memory. What we had perceived as progression was, in reality, cognitive overload. What we had labelled “challenge” was not producing the outcomes we wanted. The decisive shift occurred when teachers stopped asking “Have we covered this?” and started asking “Can they say this without thinking?”
This is where the EPI framework resonated strongly with us. It offered a different way of conceptualising curriculum design: sentence control was no longer a by-product of topic coverage, but the organising principle of what we did.
Making the Learning Architecture Explicit: MARSEARS in Practice
In practice, our curriculum aligns closely with the MARSEARS cycle, even when it is not foregrounded explicitly in lessons. Listening and reading are treated first and foremost as modelling tools, not as assessment checks. Students encounter carefully patterned input that repeatedly exposes them to the target structures in meaningful contexts before any expectation of independent production. This reduces guesswork, limits translation from English, and supports more secure pronunciation.
At unit level, this looks like:
• M (Modelling): sentence builders, phonics instruction and teacher modelling provide clear, controlled input
• A (Awareness): brief, focused attention to a key grammatical or phonological feature (minutes rather than a traditional ‘grammar lesson’)
• R/S (Receptive processing & Structured practice): input flood, processing tasks and systematic recycling across lessons
• E (Emergence): structured, then semi-structured output once patterns are secure
More explicit grammar teaching is used selectively later as consolidation rather than as a starting point. Making this sequence visible helped us ensure that progression was built on automatisation rather than assumption.
What followed was not a change of resources, but a rethink.
We built a KS3 model designed to produce automatic sentence control under real constraints: tightly constrained non-negotiables, explicit phonics, cumulative retrieval, and assessment aligned to what has actually been taught and practised. The aim was to reduce overload, increase security, and make progress consistent across classes and cohorts.
1) What a Year 7 Unit Actually Looks Like
We now teach three KS3 units per year in each language. These units are not topic-driven in the traditional sense; instead, they are built around carefully constrained communicative outcomes.
A typical Year 7 Unit 1 focuses on students being able to:
• introduce themselves
• give personal information (name, age, birthday, where they live)
• talk about family, pets and basic relationships
• describe who they get on with and do not get on with
The aim is not vocabulary coverage. The objective is to achieve reliable mastery of a limited set of commonly used sentence structures.
From the outset, students work with:
• sentence builders (chunked language)
• explicit phonics instruction
• deliberately limited grammar (e.g. être, avoir, a small number of -er verbs, adjective agreement in French)
• high-frequency language recycled across the year
The unit is designed so that, by the end, students can produce accurate personal information without translating from English. A typical lesson sequence combines:
• repeated listening and reading input modelling the unit structures
• short phonics focus linked directly to the core language
• guided sentence manipulation using the sentence builder
• frequent retrieval of previously secured non-negotiables
• brief, low-stakes checks to surface gaps early
• structured speaking or writing using only taught material
• teacher-led feedback focused on accuracy, control and pronunciation
The emphasis is on repetition, clarity and cumulative security rather than lesson-by-lesson novelty.
2) What the Knowledge Organiser Is For
Our knowledge organiser is not a vocabulary list. It is a retrieval and automatisation tool, designed to support both students and staff.
For students, it provides a single, manageable reference point for the unit. Each student receives a printed copy and uses it actively: highlighting phonics, underlining silent letters, marking liaisons, and tracking patterns. The organiser becomes a working document rather than a revision sheet.
For staff, it functions as a shared unit map. Everything sits in one place, supporting consistency across classes and reducing cognitive load for teachers and learners alike. Instead of multiple disconnected documents, the organiser anchors lesson planning, retrieval practice and assessment.
Each organiser contains, in one place:
• a phonics grid (sound–spelling correspondences)
• core verbs clearly laid out
• reminders about gender and agreement
• alphabet, numbers and key question forms
• sentence-builder language organised for manipulation
• a WAGOLL illustrating what success looks like
• clear success criteria
This ensures that what is taught is what is practised, and what is practised is what is assessed.
3) How We Define Non-Negotiables
For each unit and year group, we define explicit non-negotiables: the small set of elements students must control before we move on.
In Year 7 French Unit 1, these include:
• core forms of être and avoir across key pronouns
• a limited set of high-frequency verbs
• adjective agreement in controlled contexts
• a constrained lexicon with high reuse value
• key question forms
These appear consistently in sentence builders, the knowledge organiser, retrieval routines, speaking practice and assessment checklists.
4) How We Decide What Not to Teach
Equally important is what we deliberately avoid:
• early introduction of multiple tenses
• unnecessary full paradigms
• vocabulary added purely for perceived “challenge”
• moving on simply because something has been covered
Challenge is redefined as security and control, not content volume.
5) How Assessment Is Reverse-Engineered from Sentence Control
Assessment is built backwards from the intended communicative outcome. Students are assessed only on language they have practised repeatedly. Strands for task completion, accuracy, range and complexity, pronunciation and fluency are separated so that performance is visible and meaningful.
Reading, listening and grammar tasks recycle the exact unit language. Assessment measures security, not exposure.
6) How Phonics Is Embedded Across Units
Phonics is treated as foundational. Students annotate phonics directly on their knowledge organiser and revisit patterns through reading, speaking and listening tasks. This had a significant impact, particularly in French, where insecure sound–spelling mapping had previously undermined confidence and accuracy.
7) How We Made This Work with Staff
We did not begin by dividing into language-specific teams. Instead, we worked in small mixed groups across languages and started by discussing skills: what did we want students to be able to do by the end of KS3, and what did they need in order to succeed at KS4?
Only after agreeing on these outcomes did we identify the grammar and vocabulary required to support them. This moved us away from topic coverage and towards sentence control.
This was supported through:
• sustained professional dialogue around EPI principles
• shared unit structures
• common markbooks
• internal reading and reflection
• targeted EPI workshops
We also benefited from external professional dialogue at key points. An early FOBISIA event delivered by Dr Gianfranco Conti helped shape our initial thinking. Several years later, he revisited the school, observed lessons and provided feedback on how our implementation had developed into a coherent KS3 system.
8) International Turnover: Making the Curriculum Resilient
The clarity of the unit structure and knowledge organiser makes the curriculum immediately visible for new students. Retrieval routines allow students to catch up through repetition rather than reinvention.
9) Tracking Security, Not Coverage
Regular retrieval and shared tracking focus attention on automaticity. Markbooks track non-negotiables over time so patterns are visible and intervention can be targeted early.
10) Obstacles and What We Learned
This change did not happen in ideal conditions. We worked within timetable constraints, textbook-driven habits, a culture equating challenge with more grammar, high student mobility, and the lingering effects of Covid on middle years.
There was also a period of trial and error as we sought the right balance between being overly prescriptive and insufficiently structured. Early attempts to impose tight pacing led to unintended pressure to rush content. We therefore made a conscious decision that:
• we assess only what has genuinely been taught
• we do not rush to “fit” a scheme of learning
• pacing adapts to the students in front of us
Assessment became a reflection of learning, not a deadline.
Validation from senior leadership was crucial in allowing us to prioritise security over surface complexity. Impact was gradual, as expected. Language learning is exposure-based and non-linear; automatisation only becomes visible after sustained retrieval and recycling.
Final Reflection
When KS3 is treated as structurally rigorous and cognitively realistic, KS4 outcomes improve naturally. EPI gave us a framework for designing a curriculum in which every element works towards secure sentence control, and where confidence is built on something solid.
One of the most efficient ways to teach grammar explicitly, without overwhelming students with full paradigms or overloading working memory, is through a technique often referred to in the literature as contrastive pairs.
The technique is quite simple. First off, you present learners with two near-identical sentences that differ in only one meaningful way. You then ask them what changes, what stays the same, and what that change means. Next, you isolate the contrast. Then you provide a minimal rule. Finally, you practise the difference. That’s it.
The key thing is that only one variable changes whilst the rest remains stable. This reduces noise and sharpens attention.
Note that contrastive pairs are not about teaching an entire tense system or unpacking every exception. They are about clarifying one functional boundary at a time.
Used properly, they are precise, economical and cognitively aligned with how learners refine emerging grammatical control.
Why Contrastive Pairs Work (The Scientific Rationale)
There are several strong theoretical reasons why this technique is effective. First, contrastive pairs reduce cognitive load. By isolating a single difference, they minimise the number of elements students must hold in working memory (Sweller, 1998; Sweller et al., 2011).
Second, they sharpen noticing. Learners must attend to form in order to detect the meaningful difference (Schmidt, 1990); attention is thus not diffused across a whole system but directed solely to one contrast.
Third, they support form–meaning mapping, a key step in effective grammar acquisition, as the latter must encompass Form, Meaning and Use. According to input processing theory (VanPatten, 2015), however, learners prioritise meaning over form. Contrastive pairs help them see how a small formal change affects meaning. In the Expansion phase, this refines partially proceduralised knowledge (DeKeyser, 2007).
Finally, research on explicit instruction suggests that clarification after exposure strengthens accuracy without replacing acquisition processes (Ellis, 2006). In my approach, EPI, clarification after exposure is the key underlying principle when teaching grammar.
In short, contrastive pairs clarify and stabilise learning — they do not initiate it.
The Step-by-Step Contrastive Routine
Here is what a clean, disciplined sequence could look like.
Step 1: Present two near-identical sentences students already know
The sentences must be familiar. You are not introducing new vocabulary (in EPI, by the time grammar is taught, students will already have internalised the target vocabulary of the unit at hand).
Example:
María es aburrida.
María está aburrida.
Ask:
What changes?
What stays the same?
Step 2: Identify the single meaningful difference
Guide learners to articulate what the change signals.
In this case:
es aburrida → describes personality (permanent trait)
está aburrida → describes current state (temporary condition)
Do not move beyond that boundary.
Step 3: Give the minimal rule
Just provide one sentence. No lecture.
Use ser + adjective for permanent characteristics. Use estar + adjective for temporary states.
Stop there.
Step 4: Controlled contrast practice
Now the contrast must matter. For instance:
A/B choice
Mi profesor (es / está) simpático hoy.
Londres (es / está) una ciudad grande.
Sorting task
Column A: permanent characteristics
Column B: temporary states
Transformation
Change a permanent sentence into a temporary one.
Step 5: Semi-structured task using both forms
Now learners must use both contrasts within one communicative task.
Example:
Describe two people:
– their personality
– how they feel today
Checklist:
At least two examples of ser + adjective
At least two examples of estar + adjective
The contrast must be necessary to complete the task successfully.
Where Contrastive Pairs Fit in EPI’s MARS-EARS framework
Contrastive pairs belong in the Expansion phase, after learners have already processed the structure during MARS.
During MARS, students:
encounter the structure repeatedly,
build form–meaning associations,
begin to use it in constrained contexts.
In Expansion, we:
clarify boundaries,
prevent overgeneralisation,
build exam-safe accuracy.
Contrastive pairs are not used during early modelling or awareness-raising, because premature comparison can interrupt natural form–meaning mapping.
In MARS-EARS, contrastive pairs refine and stabilise learning. They do not initiate it.
A Sample Full Sequence: SER + Adjective vs ESTAR + Adjective
Let’s situate this properly within MARS-EARS.
Before Expansion, learners have already encountered both forms repeatedly during MARS:
through sentence builders
through listening input
through guided oral rehearsal
through limited structured output
They recognise the patterns. Now we refine them.
Expansion Lesson Outline
Re-entry (5 minutes)
Quick retrieval of familiar sentences containing both forms.
With Which Structures They Work Well
Word order contrasts (German verb-final; French pronoun placement)
Preposition contrasts (por/para; since/for)
With Which Structures They Work Less Well
Large irregular paradigms with many unpredictable forms
Morphology without a clear meaning difference
Vocabulary-heavy distinctions
Abstract discourse-level grammar
Structures not yet encountered through input
Contrastive pairs require prior exposure. Without it, they become rule-teaching by another name.
Conclusion: Why This Matters
Contrastive pairs are not a fashionable trick, but rather are a disciplined way of making grammar clearer, lighter and more cognitively realistic. When used at the right time — after meaningful exposure — they help learners:
sharpen boundaries between similar forms
prevent overgeneralisation
build accuracy for exams
and move from “I recognise it” to “I can control it”
They work because they respect how learning happens: meaning first, clarification second.
Bottom line for teachers
If you want contrastive pairs to work:
Use them after exposure, not before.
Change one variable at a time.
Give a minimal rule, not a full lecture.
Practise the contrast, not the entire system.
Move quickly back into meaningful use.
Contrastive pairs are by no means the engine of acquisition. Rather, their role is to tighten and stabilise acquisition. In my experience, used sparingly and strategically within MARSEARS, they make grammar clearer — and students more confident — without ever overwhelming them.
I’ve lost count of how many times, over the years, I’ve seen colleagues — often intelligent, well-intentioned, committed teachers — get themselves tangled up in what is, in essence, one of the most persistent (and, if I’m honest, most sterile) debates in language education: lexicogrammar versus traditional grammar instruction.
You know the script, don’t you? One camp is presented as enlightened, modern, acquisition-friendly, almost morally superior; the other as old-fashioned, rule-obsessed, joyless, and probably responsible for everything that has ever gone wrong in language teaching since the dawn of time! And every time I hear this framing, I find myself thinking: ma perché? Why are we still doing this to ourselves?
Because the truth — and it’s a truth we’d all be better off accepting — is that these two approaches are not enemies at all. They are simply two different lenses through which we observe the same linguistic system. Used intelligently, and at the right moment, they do very different jobs, and both jobs matter.
What traditional grammar instruction is really about (and why I haven’t thrown it out of the window)
Traditional grammar instruction, at its core, treats language as a rule-governed system: something that can be described, categorised, labelled, and explained, preferably with neat headings and reassuring terminology. It focuses on things like:
parts of speech
verb tenses and conjugations
agreement rules
sentence structure
accuracy and error correction
In real classrooms — real ones, not the imaginary ones in methodology books — this usually looks like:
explicit explanations (“The imperfect is used for habitual actions…”)
verb tables (lots of them…)
controlled gap-fills
error-spotting tasks
long stretches of metalanguage
translation tasks
Now, let me say this very clearly, because it matters: there is nothing inherently wrong with this. In fact, traditional grammar instruction does some things extremely well. It:
gives learners clarity and labels
supports accuracy, especially in writing
helps with exam performance
appeals to analytical learners
gives teachers the feeling that they have…done their duty!
I’ve seen this countless times. I’ve worked with pupils — and adults, for that matter — who needed that clarity, who felt calmer once something had been named, boxed, explained. So no, grammar is not the villain of the piece.
The problem only starts when grammar becomes the starting point (!), rather than what it should be: a supporting tool.
Because we have all met them, haven’t we? The students who can explain a rule beautifully, recite it almost poetically, apply it flawlessly in a written exercise… and then collapse entirely when asked to process the same structure in listening, or produce it spontaneously in speaking. They understand how the engine works, yes — but they’ve barely driven the car, let alone taken it on the motorway. In other words, they have developed declarative knowledge of the target grammar structure but have not built the procedural knowledge required in the real world, the kind that makes fluent retrieval possible in the streets of Paris, Berlin, Rome or Madrid.
What lexicogrammar is about (and why it feels so much closer to reality)
Lexicogrammar starts from a different assumption — one that, to me at least, feels far more psychologically plausible — namely that grammar and vocabulary are not separable, and that language is processed primarily as patterns and chunks, not as isolated rules waiting to be memorised. From this perspective:
meaning drives form
words come bundled with grammar
frequency matters enormously
grammar emerges from repeated exposure to patterned input
So instead of solemnly announcing that today, class, we are learning the present tense, learners simply meet:
je joue au foot
je vais au cinéma
je fais du sport
Instead of dissecting “because + clause”, they repeatedly encounter:
parce que c’est amusant
parce que c’est trop cher
Instead of a formal lecture on “the imperfect”, they live with:
quand j’étais petit…
il y avait beaucoup de…
And here’s the thing — and anyone who has actually watched learners process language will recognise this — this is how acquisition really happens. Lexicogrammar:
reduces cognitive load
supports fluency
builds automaticity
feeds listening and speaking directly
I’ve seen this play out so many times in my own classrooms and workshops that it would be dishonest to pretend otherwise. Learners use language earlier, more confidently, and with far less visible strain. But — and this is important — left entirely unchecked, lexicogrammar can also lead to problems, as accuracy can plateau, editing skills can remain weak and high-stakes writing and exams can expose gaps that intuition alone doesn’t fix.
The real difference — and the false opposition that refuses to die
So what is the real contrast here? It’s not “grammar versus lexicogrammar”, despite how tempting that slogan might be. In my view, it is:
rule → example versus example → pattern
form-led teaching versus meaning-led teaching
declarative knowledge versus procedural knowledge
Traditional grammar explains correctness whilst lexicogrammar enables use. Both matter — but not at the same moment. And this, I would argue, is where so much of the confusion comes from!
Where grammar and lexicogrammar intersect: pattern awareness
This, for me, is the crucial point, the hinge on which the whole debate turns.
Good lexicogrammar teaching:
floods learners with high-frequency patterns
makes those patterns noticeable
recycles them relentlessly
builds intuitive familiarity
Good grammar instruction:
names patterns learners already recognise
explains contrasts they’ve already experienced
sharpens accuracy once meaning is secure
In other words — and I cannot stress this enough — grammar works best when it explains what learners have already partially acquired.
I’ve seen this so clearly with structures like il y a and il n’y a pas de. If learners have met these dozens of times in listening and reading, a later explanation of existential structures and negation doesn’t feel abstract, theoretical, or painful. It feels clarifying. Almost relieving.
That is the intersection point which I have witnessed time and again throughout my career.
Lexicogrammar vs traditional grammar: a functional comparison
| Dimension | Lexicogrammar | Traditional grammar |
| --- | --- | --- |
| Starting point | Meaningful examples | Abstract rules |
| Teaching sequence | Example → pattern | Rule → example |
| Unit of learning | Chunks and constructions | Individual forms |
| Cognitive load | Lower (pattern recognition) | Higher (rule processing) |
| Primary knowledge type | Procedural | Declarative |
| Best supports | Listening, speaking, early writing | Editing, accuracy, exams |
| Risk if overused | Accuracy plateau | Inert knowledge |
Where EPI fits in: doing both, deliberately (and unapologetically)
This is precisely where Extensive Processing Instruction (EPI) sits — and why it is so often misunderstood, sometimes caricatured, and occasionally dismissed by people who, I suspect, haven’t really looked at it closely.
Let’s be clear: EPI is not anti-grammar. It is anti PREMATURE grammar.
How EPI handles lexicogrammar
EPI starts where acquisition actually starts — not where tradition says it should start:
high-frequency chunks
narrow, carefully controlled input
repeated processing of the same structures
listening and reading as the engine of learning
Learners don’t “learn the imperfect”. They process:
il y avait…
quand j’étais petit…
c’était…
They don’t “learn negation”. They process:
je n’aime pas…
il n’y a pas de…
Grammar is embedded, unavoidable, and meaningful — but never shoved to the front before the system is ready. I’ve watched pupils who had “done the imperfect” three times before suddenly get it — not because of a better explanation, but because the input finally made sense.
How EPI handles grammar instruction
And here’s the bit that often gets conveniently ignored: EPI does not stop there.
Once patterns are:
familiar
automatised
meaning-secure
EPI introduces explicit grammar instruction as:
a noticing tool
a refining tool
a precision tool
At that point — and only at that point — explanations stick. They make sense. They actually improve output. I’ve seen learners nod, not because they’re being polite, but because something has genuinely clicked.
This is grammar from language, not grammar before language.
Why EPI works: research-rooted advantages
| EPI advantage | Rationale | Research source |
| --- | --- | --- |
| Reduced cognitive load | Learners process meaning before form, avoiding working-memory overload | Sweller; VanPatten |
| Faster automatisation | Repeated exposure to the same structures builds procedural memory | |
For a long time now, and certainly across many of the classrooms I have visited in recent years, teachers have been working hard to improve GCSE writing outcomes while quietly sensing that something was not quite lining up. Pupils write more, practise more, produce longer pieces, and yet the grades often fail to match the effort, which understandably leads to frustration and, in some cases, resignation.
The arrival of the new GCSE writing paper has not been dramatic or loudly announced, but in my view it represents a decisive shift in what is being assessed, and my prediction is that many departments will only fully realise this once mock results begin to expose patterns that feel unfamiliar.
For years, GCSE writing has been taught on the assumption that effort, creativity and risk-taking would somehow be rewarded, that “having a go” was a sensible strategy, and that ambitious language could compensate for shaky control. That assumption no longer holds – sadly.
The new GCSE writing paper rewards accuracy, control and task fulfilment, not linguistic risk-taking, and this, far from being a matter of preference or interpretation, is built directly into the mark schemes.
This article, written as I prepare for my upcoming workshop on this topic in a couple of weeks' time, examines what the new writing paper actually assesses, how marks are generated, where the critical thresholds lie, and what this means for classroom practice.
Structure of the new GCSE writing paper
The writing paper now consists of a tightly constrained set of tasks designed to test retrieval and control under pressure rather than expressive freedom. A central feature is the inclusion of English into target language translation as a compulsory writing task.
In the translation mark scheme, marks are awarded for:
accurate transfer of meaning
correct vocabulary selection
grammatical accuracy, including tense, agreement and word order
There are no marks for approximation. Responses either convey the intended meaning or they do not.
Translation removes choice, paraphrase and creativity, and forces precise retrieval under time pressure, making it structurally impossible to succeed by merely “having a go”. This task alone signals that the assessment is testing control, not effort or intention.
Stimulus coverage and task fulfilment
Across both the 90-word and 150-word writing tasks, candidates are required to respond directly to bullet-point prompts.
The paper explicitly states that to achieve the highest marks, candidates must write something about each bullet point. In the mark scheme, failure to address all bullet points caps access to the higher bands, regardless of linguistic ambition elsewhere.
As a result, a response that uses adventurous language but misses part of the stimulus will score lower than a response that uses simpler language but completes the task fully. This is a structural feature of the assessment which I personally disagree with, but which we have to live with.
Use of familiar language
Both exam boards assess writing through the use of:
“a range of familiar vocabulary and structures”.
This phrase is central to understanding the assessment construct.
The mark schemes do not reward originality or experimentation. They reward:
accurate selection of known language
consistent use
successful manipulation
Lower bands are characterised by frequent errors and loss of control, whilst higher bands are characterised by secure handling of familiar language. The construct being assessed becomes procedural control (!), not creative expression as in the previous GCSE.
Accuracy and band progression
Nowhere in the mark schemes does “ambitious but inaccurate” outperform “simple but accurate”. This stifles the risk-taking attitude that you would want language pedagogy to encourage and reward. A huge mistake which is a legacy of the NCELP’s flawed pedagogy.
So, in the new GCSE, ambition that destabilises accuracy typically pushes candidates down a band. Errors are tolerated only insofar as they do not impede communication, and that tolerance is limited. The mark scheme allows a small number of minor errors only if meaning remains clear and control is otherwise secure, but once errors become frequent, patterned, or start to affect clarity, the response is automatically pushed into a lower band. In other words, accuracy is not judged generously or holistically: there is a clear ceiling beyond which additional errors are no longer “overlooked”, even if the ideas are good or the language is ambitious.
This means that the long-standing advice to “take risks” in GCSE writing is no longer assessment-neutral. Under the new mark schemes, risk-taking without control is actively disadvantageous.
Timeframes as a retrieval challenge
To access the higher bands, candidates must refer to more than one timeframe and do so accurately. A candidate who uses two timeframes accurately will usually score higher than a candidate who attempts three timeframes inaccurately.
In other words, timeframes function as a retrieval stress test, not as an opportunity to demonstrate grammatical ambition.
Cognitive pressure built into the assessment
The writing paper combines:
multiple tasks
strict time limits
bullet-point constraints
word limits
no dictionary
accuracy-focused marking
The mark schemes then reward consistency, reliability and control – this is deliberate cognitive-load engineering. The assessment measures what candidates can retrieve and control when working memory is stretched.
What distinguishes Grades 7–9 from Grade 6
This is the threshold that is most often misunderstood by the teachers I have worked with recently.
Characteristics of a secure Grade 6 response
A Grade 6 response typically:
addresses the task adequately
covers most or all bullet points
uses familiar vocabulary correctly much of the time
attempts more than one timeframe, but not always securely
shows some inconsistency in accuracy
may rely on memorised chunks or repetition
In mark-scheme terms, this represents competent communication, but not sustained control.
What changes at Grade 7
The move from Grade 6 to Grade 7 is not about writing more or using more complex grammar.
It is about stability under pressure.
A Grade 7 response:
addresses all bullet points precisely and efficiently
uses a range of familiar structures with consistent accuracy
handles at least two timeframes securely
maintains accuracy as sentence length increases
shows conscious control rather than chance success
Errors are occasional rather than frequent, and accuracy does not deteriorate as the task progresses.
What characterises Grades 8–9
At Grades 8 and 9, nothing fundamentally new appears. What improves is:
consistency
reliability
density of accurate language
Responses at this level sustain accuracy across the entire task, integrate timeframes naturally, and show minimal breakdown in meaning. This reflects automaticity, not flair.
The critical implication
The difference between a Grade 6 and a Grade 7 is not creativity or risk-taking, but the ability to retrieve and control familiar language reliably under exam conditions.
Comparison of AQA and Edexcel writing assessments
Despite differences in layout and wording, both boards assess writing in fundamentally the same way.
| Feature | AQA | Edexcel | Assessment implication |
| --- | --- | --- | --- |
| Core construct | Control of familiar language | Control of familiar language | Identical construct |
| Translation | Integrated into writing paper | Integrated into writing paper | Retrieval over creativity |
| Stimulus coverage | Explicit bullet-point requirement | Task fulfilment weighted | Task control is essential |
| Accuracy weighting | Strong emphasis | Strong emphasis | Ambition without control is penalised |
| Timeframes | Multiple accurate timeframes required | Multiple accurate timeframes expected | Retrieval under pressure |
| Creativity | Rewarded only if accuracy is secure | Rewarded only if accuracy is secure | Creativity is conditional |
Despite presentational differences, both boards reward the same thing: stable, accurate, task-focused writing under pressure.
Who is likely to struggle with the new format
The pupils who are most likely to struggle are not necessarily the weakest linguists, but those whose learning has been built on habits that the new assessment penalises.
These include:
pupils reliant on memorised essays
pupils encouraged to take grammatical risks they cannot control
pupils with weak listening and reading foundations
pupils trained to prioritise ideas over linguistic control
Departments that delay systematic work on accuracy, sentence-level control and retrieval until late in the course are also likely to face difficulties, as habits formed earlier are hard to reverse.
Why “having a go” no longer aligns with the assessment
Taken together, the mark schemes:
penalise inaccuracy
cap incomplete responses
reward familiar language
prioritise task fulfilment
embed translation as a control task
As a result, the assessment rewards control under cognitive pressure, not effort or ambition.
Implications for teaching practice
Writing is no longer assessed as expressive output. It is assessed as accurate retrieval of automatised language under constraint.
Teaching writing primarily through early free production now conflicts with:
the assessment objectives
the band descriptors
the cognitive design of the paper
This is not a matter of preference or ideology, but one of construct validity. In my upcoming workshop in early February on how to ace the GCSE writing paper, I will deal extensively with what teachers can do to prepare their students effectively for this tricky paper.
Conclusion
The new GCSE writing paper has not made writing harsher. It has made misalignment between teaching and assessment visible.
If writing continues to be taught as though effort, creativity and risk can compensate for weak control, outcomes are unlikely to improve. If, however, teaching aligns with what the mark schemes actually reward — familiarity, retrieval and accuracy under pressure — then the path forward becomes clearer.
In my view, pupils become better writers not by writing earlier or more freely, but by writing later, with less choice, and with language they have processed deeply and repeatedly. Once this is understood, the logic of the new GCSE writing paper becomes difficult to argue with.
After twenty-eight years in the classroom, I feel reasonably confident saying that students are far more consistent in what they value in their language teachers than we, as a profession, often like to admit. Perhaps because their priorities do not always sit comfortably alongside inspection frameworks, policy BS, or whatever pedagogical fashion happens to be doing the rounds this term…
In my experience, and I say this having taught across different countries, systems and policy cycles that arrived with great noise and left quietly through the back door, students are not primarily concerned with whether we are methodologically fashionable, digitally fluent, or capable of delivering a textbook-perfect observed lesson at 9am on a wet Monday morning. What they care about, often desperately, is whether learning in our classroom feels emotionally survivable, cognitively manageable, and worth the effort on days when motivation is fragile and confidence even more so. And those days, sadly, come around in my experience more often than we sometimes admit.
Unfortunately, this is not always the order in which we are trained to think about teaching, is it…
2. What students value (ranked, explained, and grounded in research)
1. Empathy and emotional support — ranked as number one
In my opinion, and very much in line with what the research has been telling us for decades now, empathy sits firmly at the top because language learning without emotional safety simply does not function, however elegant our schemes of work may look on paper.
Students value teachers who are patient, non-judgemental, emotionally predictable, and who create classrooms where making mistakes does not feel like a personal failure played out in public, lesson after lesson. Research on affect in language learning, from Arnold’s early work through to Dewaele and Mercer’s studies on teacher empathy, shows that anxiety and fear of negative evaluation significantly reduce willingness to communicate, participation, and risk-taking – in other words the prerequisites for any significant language learning to happen! On the other hand, Rebecca Oxford’s work on learner well-being reinforces the idea that emotional variables are not decorative extras but structural foundations.
In my experience, when this emotional safety is missing, students rarely kick off or complain loudly; instead, they withdraw quietly, comply politely, and disappear cognitively, which is far easier to miss and far harder to reverse.
2. Ability to motivate and inspire — ranked as number two
Motivation comes a very close second because, sadly, language learning is a long game with few immediate rewards, and in my experience without sustained encouragement many learners simply decide that the effort required is not worth the emotional cost involved, especially in exam-driven contexts.
Research on L2 motivation, particularly by Dörnyei, shows that teacher behaviour — not just task design — plays a decisive role in sustaining learner effort over time. Guilloteaux and Dörnyei demonstrated that motivational teaching practices are linked to higher levels of engagement, while Lamb’s more recent work shows how teacher encouragement can keep learners going even when intrinsic interest is fragile and the syllabus feels relentless.
In my opinion, students are rarely demotivated by difficulty itself; they are demotivated by the creeping sense that effort leads nowhere, and teachers who keep belief alive — often quietly, without fireworks — make an enormous difference here, even if nobody applauds them for it!
3. Clarity of explanation and instructional clarity — ranked as number three
Clarity sits in the middle of the ranking because confusion is both cognitively and emotionally exhausting, and students place far more value than we sometimes realise on teachers who help them feel oriented rather than overwhelmed.
Hattie’s synthesis of classroom research consistently identifies teacher clarity as a strong predictor of achievement, while Borg and Macaro remind us that clarity in language teaching is not about dumbing things down but about sequencing, scaffolding, and making form–meaning relationships visible. Students want to know what they are learning, why they are learning it, and what success looks like today, not at some vague point in the future.
Personally, I have always been less impressed by how eloquently I explain something and far more interested in whether anyone can actually tell me what they are supposed to be learning and why, a shift that has saved me from many self-inflicted illusions, and a few awkward lessons too.
4. Strong subject knowledge (language and pedagogy) — ranked as number four
Subject knowledge, while absolutely essential in my opinion, tends to be experienced indirectly by learners, which explains why students value it but rarely place it at the very top of their rankings.
Research on teacher language awareness, particularly by Andrews and Mullock, shows that deep subject knowledge improves modelling, explanations, and responsiveness to learner questions, while Borg’s work on teacher cognition reinforces the idea that what teachers know shapes what they notice and how they respond in real time.
In my experience, students assume competence as a baseline; they notice subject knowledge most when it is missing: when explanations wobble, examples feel shaky, or questions are dodged a little too quickly. When expertise is strong, it quietly underpins clarity, confidence, and trust without demanding centre stage.
5. Adaptability and responsiveness to learner needs — ranked as number five
Adaptability appears last not because it is unimportant — quite the opposite! — but because students often experience it implicitly rather than as a named quality.
Research on differentiation and learner-centred instruction, from Tomlinson through to more recent work by Papi and Hiver, shows that responsiveness to learner needs supports motivation, autonomy, and sustained engagement. From a learner’s point of view, however, adaptability often blends into perceptions of fairness and care rather than being recognised as a technical skill.
In my experience, students simply say that a teacher “understands” or “explains again if we don’t get it”, without any awareness of the pedagogical decision-making involved, and that very invisibility is precisely why adaptability, though crucial, tends to sit lower in student rankings.
3. Summary table: what students value most (research-informed)
| Quality valued by students | What this looks like in practice (student perspective) | Key research sources |
| --- | --- | --- |
| Empathy & emotional support | Feeling safe to speak, make mistakes, and participate without fear of ridicule or judgement. | Dewaele & Mercer (2018); Arnold (1999); Oxford (2016) |
| Ability to motivate and inspire | Belief in improvement, encouragement to persist, and teacher enthusiasm sustaining effort over time. | Dörnyei (2001); Guilloteaux & Dörnyei (2008); Lamb & Arisandy (2020) |
| Clarity of explanation | Clear explanations, structured lessons, transparent goals, and reduced confusion. | Hattie (2009); Borg (2006); Macaro (2008) |
| Strong subject knowledge | Accurate modelling, confident explanations, and meaningful responses to questions. | Andrews (2007); Borg (2015); Mullock (2006) |
| Adaptability to learner needs | Adjusting pace, difficulty, feedback, and support in response to learners. | Tomlinson (2014); Papi & Hiver (2020) |
4. Why the ranking looks the way it does
I strongly believe that the ranking above reflects the sequence of emotional and cognitive thresholds learners must cross in order to remain engaged in language learning. If students do not feel emotionally safe, they disengage. If they do not feel encouraged, they stop trying. If they do not feel oriented, they become frustrated. Wherever I have taught, be it in challenging inner-city schools or posh private schools in affluent neighbourhoods, only once these conditions were met did subject expertise and adaptability become fully visible and impactful from the learner’s perspective… simple, really, though not always easy.
So this ranking is not a claim about what matters most in teaching in some abstract, absolute sense, but about what students experience first and depend on most in order to keep going, which is a rather different question altogether.
5. Conclusion
After nearly three decades of teaching, observing, mentoring, and occasionally getting it wrong in public, I am increasingly convinced that what students value most in us aligns remarkably well with what research tells us facilitates language acquisition, even if this alignment is not always reflected in accountability systems or professional discourse.
Students are not asking us to be entertainers, influencers, or walking grammar encyclopaedias. In my experience, they are asking us — often quietly, often indirectly — to create conditions in which effort feels worthwhile, failure feels survivable, and progress feels possible….
Key references
Andrews, S. (2007). Teacher language awareness. Cambridge University Press.
Arnold, J. (Ed.). (1999). Affect in language learning. Cambridge University Press.
Borg, S. (2015). Teacher cognition and language education: Research and practice. Bloomsbury.
Csizér, K., & Nagy, B. (2020). The dynamic interaction of motivation, self-regulation, and autonomous learning in second language acquisition. System, 92, 102249.
Dewaele, J.-M., & Mercer, S. (2018). Do ESL/EFL teachers’ emotional intelligence, empathy and self-efficacy predict learner-centredness? System, 79, 31–43.
Dörnyei, Z. (2001). Motivational strategies in the language classroom. Cambridge University Press.
Guilloteaux, M. J., & Dörnyei, Z. (2008). Motivating language learners: A classroom-oriented investigation of the effects of motivational strategies on student motivation. TESOL Quarterly, 42(1), 55–77.
Lamb, M., & Arisandy, F. E. (2020). The motivational dimension of language teaching. Language Teaching Research, 24(3), 1–24.
Macaro, E. (2008). Language learner strategies: Thirty years of research and practice. Oxford University Press.
Mullock, B. (2006). The pedagogical knowledge base of four TESOL teachers. The Modern Language Journal, 90(1), 48–66.
Oxford, R. L. (2016). Toward a psychology of well-being in second language acquisition. Studies in Second Language Learning and Teaching, 6(1), 3–29.
Papi, M., & Hiver, P. (2020). Language learning motivation as a complex dynamic system. The Modern Language Journal, 104(1), 209–229.
Tomlinson, C. A. (2014). The differentiated classroom: Responding to the needs of all learners (2nd ed.). ASCD.
Every few months — sometimes more often than that — social media fills up with posts telling us that research proves a particular language-teaching method works best. You know the sort of thing. One week it is storytelling. The next week it is explicit grammar. The week after it is EPI (yes – guilty as charged!). Then it is “input only”, or “no output”, or whatever the label happens to be this time. The pattern never really changes.
And let me say this clearly before someone misreads what follows: the problem is rarely the research itself. The real problem is what happens after the research leaves the journal and enters the hands of people who are actively trying to justify selling their method, their training, their books, their conference slots, or their latest online course.
So, before we swallow yet another “research-backed” claim whole, it might be worth slowing down and asking a few boring but necessary questions… even if they spoil the excitement.
First of all: who were the learners in the study? This sounds so obvious that it almost feels insulting to mention it, and yet it is ignored with remarkable consistency. Language learning looks very different depending on whether we are talking about preschool children, primary beginners, secondary pupils working towards exams, or university students who already have years of schooling behind them. Findings from one group do not automatically apply to another, no matter how confidently someone says they do. When studies involving very young learners, or learners with extremely limited language, are used to justify methods for GCSE or A-level classrooms, surely the correct response is not enthusiasm but caution… or at least a pause.
Second — and this is absolutely central — what level did the learners start from? Baseline language ability is not a small technical detail tucked away in the methods section for researchers to worry about. In practice, it often explains more than anything else. When learners begin with almost no vocabulary, very little exposure to structured language, and limited experience of listening to or using language in organised ways, teaching does something very specific: it lays foundations. It does not suddenly accelerate learning in magical ways. So when an intervention produces gains in the exact words that were taught, but little beyond that, this is not surprising at all. It is exactly what we would expect. Using such findings to make claims about learners who already have years of exposure, larger vocabularies and basic narrative competence is not careful interpretation. It is misuse. Ignoring starting level is one of the easiest ways to turn sensible research into bad pedagogy.
Third: was the study actually about learning a second or foreign language? This is where things often get very slippery. Research from first-language literacy or early childhood development is regularly brought into language-teaching debates as if the jump were obvious and unproblematic. Yes, storytelling helps children develop language in their first language. No, that does not automatically mean it drives grammatical development in a second language. The same mistake appears when research on first-language grammar is used to justify grammar-heavy second-language teaching. Different learning problems, different conditions, different outcomes… surely that matters?
Fourth: what did the researchers actually measure? Did learners understand a story while it was being told? Did they recognise forms they had just seen? Did they do well on a test straight after instruction? Or did they show that they could use the language later, on their own, in new situations, without help? These are not minor distinctions. Much research reports short-term performance under controlled conditions, and that is fine. What is not fine is pretending that this is the same thing as long-term acquisition.
Fifth: was there a proper comparison group? Without a meaningful comparison, improvement does not tell us very much. Learners usually improve over time anyway. Teachers get better. Familiarity with tasks increases. Comparing an intervention to “no instruction” might look impressive, but it tells us almost nothing about whether the approach is better than other plausible alternatives.
Sixth: did the gains last? Were learners tested again weeks or months later, or was everything measured immediately, while the material was still fresh? Without delayed testing, claims about learning should be treated with extreme caution. Short-term gains are easy to produce. Long-term retention is much harder. We have known this for well over a century, and yet it is routinely ignored… why?
Seventh: was grammar actually tested in free use? Doing well in gap-fills, multiple-choice tasks, or highly scaffolded activities is not the same as using language accurately when speaking or writing under pressure. Too often, grammar learning is assumed rather than demonstrated. We should be asking whether learners can actually use what they supposedly “acquired”, not whether they can recognise it when prompted.
Eighth: is one element being sold as the whole solution? Input matters. So does practice. So does feedback. So does retrieval. So does time. When one of these is isolated and presented as the answer, something has gone wrong. No serious account of learning supports the idea that a single ingredient, however valuable, can replace everything else.
Ninth: what do the authors themselves say about limits? Most researchers are careful. They talk about context. They talk about constraints. They warn against overgeneralisation. When these warnings quietly disappear in conference talks or social-media posts, that should worry us. If the caveats are gone, the research has already been bent out of shape.
Finally: who stands to gain from this interpretation? This is uncomfortable, but unavoidable. When claims are tied to training packages, branded methods, books, or speaking circuits, there is an incentive to oversell. That does not automatically make the research wrong. But it should make us sceptical… very sceptical.
Consider one concrete example: a study of a structured story-based intervention delivered to very young children from low-income backgrounds who started with extremely weak language. The children improved on the vocabulary that was explicitly taught to them. That is a sensible and useful finding. What it does not show is that storytelling leads to broad second-language acquisition, that it replaces explicit instruction, or that it applies to older learners in secondary schools. The learners were preschoolers. Their starting level was extremely low. The effects were narrow and closely tied to what was taught. The context was highly specific. And the authors themselves warned against overgeneralisation.
Turning that into an argument for TPRS as a general solution is not research-informed practice. It is spin. After all, TPRS has been around for ages, yet it is a very niche approach in secondary education. Even if we ignore the research misuse, there are practical reasons why TPRS has never really taken hold in mainstream state secondary education. It depends heavily on unusually skilled and confident teachers. It makes syllabus coverage difficult to guarantee. It works far better with beginners than with learners who need increasing accuracy. It does not align well with exam demands. And it takes time — time that most state schools simply do not have.
None of this means storytelling is useless. It means it is limited. And whilst it is one of many instructional strategies that can be used once or twice a term to add variety and spice up the curriculum, it cannot be the one and only method used with students who will be sitting a high-stakes national exam which does not involve storytelling. This is not because I dislike or reject the method, but rather because, as cognitive psychology teaches us, what one learns through storytelling doesn’t transfer to the tasks used to assess students in national examinations around the world!
Another example: The Finnish miracle… and the context everyone forgets
The same pattern shows up well beyond language teaching, and the way Finland has been held up for years – based on credible research studies – as proof that there is a single “better” way to teach is probably the clearest example of this problem. Finnish education has been praised, copied, packaged and exported as if it were a set of techniques rather than the product of a very particular country, culture and history. Although the research backing the effectiveness of Finnish education is absolutely credible, what tends to be ignored is that Finland is small, socially cohesive, linguistically homogeneous, and built on unusually high levels of trust in teachers, low child poverty, strong welfare support, and minimal inequality between schools. Teachers are highly selected, highly trained, and enjoy professional autonomy that would be politically impossible in many systems. Classrooms operate in a context where behaviour, attendance, parental support and social safety nets are not constant battles. To point at Finnish outcomes and say “do this and you will get the same results” is to ignore all of that… and yet this is exactly what happens. The method is praised, the context quietly disappears, and what was once careful comparative research turns into a convenient story that sounds great in conferences, policy documents and consultancy brochures, but tells us very little about what will actually work in crowded, exam-driven, high-stakes state school systems elsewhere.
Conclusions
Storytelling has value. Explicit grammar has value. Input has value. Practice has value. EPI has value. But when any of these is turned into the answer, backed by selective readings of research and pushed hard by snake-oil salesmen who have something to sell, teachers should stop, breathe, and ask the very dull questions above.
Research does not give us silver bullets. It gives us boundaries. When those boundaries disappear, what we are left with is not innovation or evidence-based teaching, but marketing dressed up as science. And frankly, we should be tired of that by now…