The top 10 foreign language learning research areas MFL teachers are most interested in


When I started writing my blog I did not expect I would get many readers. I started it much in the same spirit as one starts writing a journal: as a way to reflect on my practice and on my beliefs about language learning. When, however, people from all over the globe started to contact me asking to know more about specific areas of foreign language acquisition and pedagogy, and “Six useless things foreign language teachers do” got more than 9,000 readers in one night, it finally hit me: there are keen and reflective classroom practitioners out there who want to know more about L2 acquisition theory, neuroscience and L2 research than what they were taught on their teacher training courses or during a few hours of CPD delivered by educational consultants.

In actual fact, the main reason why I decided to embark on a Master’s course in Applied Linguistics after two years of UK comprehensive-school teaching was my deep dissatisfaction with the way I was taught how to teach on my teacher training course (PGCE). My PGCE tutors were great, do not get me wrong. However, on my teacher training I was basically shown a few tricks of the trade in quick succession – mostly through videos or demo lessons with the ‘perfect’ classes – equipped with a few ‘lesson templates’ and sent off to my teaching practice schools.

No need to tell you what happened there; you have all been through that nerve-racking baptism of fire; you have all come across the really motivated and inspirational teachers who tried their best to support you, and the demotivated and disgruntled ones who tried to discourage you. The outcome: I learnt more tricks of the trade, but not a shred of understanding of how the human brain acquires languages; not a hint of what research says sound MFL pedagogy looks like. I left my last teaching practice placement with glimpses and intuitions of what might work and what might not, totally unsupported by any theory of learning or research evidence.

Things did not improve much in my first school as an NQT. More trial and error; more tricks of the trade; more useless training sessions; still no sound pedagogic framework, no theory rooted in neuroscience that I could refer to. I had to use quite a lot of my savings to finally get that framework and that research-based knowledge from some of the greatest names in EFL pedagogy and research at the Reading University Centre for Applied Language Studies (CALS). Costly, but worth every penny!

Yet many of the colleagues I have worked with in 25 years of MFL teaching firmly believed that a good MFL teacher does not need to know about theory or research. I remember how one of them used to refer to the applied linguistics handbooks I read during my frees as ‘silly books’. An understandable attitude, considering how inaccessible many researchers have made their often very useful findings to classroom teachers, through convoluted jargon and obscure statistics.

A fairly recent survey carried out by my former PhD supervisor, Professor Macaro of Oxford University, however, found that although only 3% of the teachers interviewed found research accessible, 80% of them were actually interested in what research has to say about language acquisition and pedagogy. The following are the top ten areas of research Professor Macaro’s informants identified as most useful. Please note that the sample was not huge – only about 100 respondents – and that the people who filled in the questionnaire were Heads of Department, i.e. very experienced teachers. The ranking is based on the mean score each topic area received (1 being ‘very useful’ and 4 being ‘not at all useful’):

  • Vocabulary acquisition – 74% of the teachers found this topic very useful
  • How the grammar rules of the language are best learnt – 73% of the teachers found this topic very useful
  • Motivation – 68% of the teachers found this topic very useful
  • How learners make progress with language learning – 58% of the teachers found this topic very useful
  • Differences amongst learners (e.g. age, gender) – 53% of the teachers found this topic very useful
  • Speaking – 51% of the teachers found this topic very useful
  • How the brain stores and retrieves language – 58% of the teachers found this topic very useful
  • KS4 (lower intermediate) research – 37% of the teachers found this topic very useful
  • Writing – 39% of the teachers found this topic very useful
  • KS3 (beginner) research – 29% of the teachers found this topic very useful

What is interesting is the absence from the above list of two topics that I have written about and that seem to have been very popular amongst my readers, i.e. listening and reading research. In fact, only 25% of Professor Macaro’s informants were interested in listening and 27% in reading.

What areas of research are you most interested in? I would love to have your input!


Think-aloud techniques – How to understand MFL learners’ thinking processes


A fair amount of MFL teachers’ daily frustration relates to their students’ underachievement or apparent lack of progress. On a daily basis you hear your colleagues or yourself complain about student X or Y ‘not getting it’, making the same mistakes over and over again, writing unintelligible essays or speaking with shockingly bad pronunciation. The most hard-working and caring teachers often act on these issues by encouraging the students to revise and work harder; providing them with extra practice and scaffolding it; devising a remedial learning programme involving some degree of learner training; and engaging the students’ entourage to enlist some support. However, something crucial often goes amiss: how does one know what the problem REALLY is?

Yes, observing the students’ behaviour and analyzing their output more frequently and closely than usual does help, but it is not enough to get the full picture, and the teacher usually ends up focusing on the usual culprits, i.e. ‘laziness’, indolence, lack of motivation, low aptitude for language learning, etc. But could the root cause(s) of the observed problem be something else, something less visible that occurs deep inside their brains and that we fail to notice and understand? Could those factors actually determine the alleged ‘laziness’ or ‘lack of motivation’? It is difficult to say by simply asking students questions in a survey or interview, or by observing their behaviour in lessons. Hence the importance of ‘getting into our students’ heads’ to probe their minds in search of clues as to what is hindering their performance and progress. But how do we do that?

There are indeed techniques that were developed by social scientists to tackle the limitations of traditional enquiry tools such as observations, questionnaires and interviews. They include a set of research techniques referred to in the literature as concurrent and retrospective think-aloud protocols. These techniques truly allow us to tap into our students’ thinking processes and reconstruct the way their brains go about executing the tasks we set them in lessons.

Every time I have used these techniques, whether in the context of ‘proper’ research studies (e.g. one funded by OUDES – Oxford University Department of Education; see Macaro, 2001) or in my role as a classroom teacher, I have been amazed at how many of the presumptions I had made about my students’ ways of processing the language were wrong, and at how right my mentor in the field of Learner Strategy Research (OUDES’ Professor Macaro) is when he states that most of our students’ issues stem not from low IQ or language aptitude but from poor learning strategy use.

What are think-aloud techniques?

Think-aloud techniques require informants (your students, if you are a teacher) to verbalize what is happening in their brain (working memory) as they execute a task. In this case we call them concurrent think-alouds. If we ask them to reflect on their thought processes retrospectively – after the task has been executed – we refer to them as retrospective think-alouds. Obviously, since concurrent think-alouds cannot be used with speaking tasks, retrospective think-alouds can be very helpful in investigating our students’ issues in oral language production.

I have questioned the objectivity and validity of these tools for formal ‘scientific’ research in previous blogs; but the use I am advocating here is not aimed at gathering data from which to extract universal truths to inform educational policies or changes in pedagogy. Rather, I recommend them as enquiry tools for obtaining qualitative data that help us understand our students’ learning problems. Used in this way, these techniques can be very useful indeed.

Many useful models of language processing were obtained thanks to think-aloud techniques. The most famous of them is surely the Hayes and Flower model of writing that I discussed in a previous blog, which has been, since the 1980s, the most widely used framework for mapping L2 student writers’ cognitive processes (see my previous blog on writing processes). Much current writing pedagogy and research has been based on this model. This is a great example of how think-aloud protocols have affected the way we teach.

Pure and hybrid models of application

‘Pure’ concurrent and retrospective think-alouds are carried out with very little intervention on the part of the teacher/researcher. The student is asked to execute a task and, whilst he/she verbalizes his/her thoughts (in a stream-of-consciousness fashion), the teacher sits somewhere behind him/her in order not to be seen (so as to minimize any possible researcher effect). I used this technique mostly for writing, in an attempt to understand what caused my students’ errors and compensation strategies (avoidance, coinage, etc.), to gain an insight into their use of resources (e.g. dictionaries) and to find out how I could improve that. The reader should note that the presence of the teacher/researcher is important because he/she may want to note down key moments in the think-aloud about which he/she may need to ask more questions retrospectively. For this purpose it may be useful to film the student and ‘show’ him/her – on video – the point(s) in the think-aloud you want to ask about, as a memory retrieval cue.

In the ‘hybrid’ think-aloud model, the teacher steps in, probing with questions such as ‘Why are you doing this?’, ‘Can you tell me more about this?’ or ‘Why are you making this assumption?’. I find this very useful for tapping into not just the processes currently being verbalized but any other process happening concurrently which is not being verbalized. Obviously, one cannot guarantee that the information the student provides about cognitive processes that are not in his/her focal awareness will be objective and reliable, but the process will yield a lot of useful data. In one of my studies, for instance, which focused on writing skills and error-making, had I not interrupted the students’ think-aloud / ‘stream of consciousness’, I would not have found out why they did not notice the mistakes they were making, even though they had the declarative knowledge necessary to correct them.

Two or more of the above techniques can be used synergistically to support each other, the second set (the ‘hybrid’ model) usually following the first. This synergy usually yields richer and more reliable data.

Of all of the above think-aloud techniques, retrospective think-alouds are the least reliable, as the students are likely to have ‘lost’ (forgotten) most of the information in their subsidiary awareness as well as part of the information in their focal awareness. However, as mentioned above, they are the only way to explore our students’ thinking processes when investigating learner speaking. To maximize their power, one should implement the tactic briefly touched upon earlier: using a video or audio recording of the speaking session as a retrieval cue for the student’s recall of his/her own processes.

A very useful tip: before implementing any of the above techniques, one should model the chosen think-aloud technique to the students and give them a chance to practise it on a warm-up task similar to the one they are going to be engaged in.

Other benefits of think-aloud techniques

I have touched upon the benefits of using think-alouds in terms of enhancing our understanding of our learners – which will inform our teaching of the target student or group of students. However, there are other benefits which have the potential to impact our students’ learning more directly: the metacognition-enhancing effect of involving them in reflecting on their own learning. Through think-alouds it is not just us ‘getting into’ their heads; it is also, and above all, the students exploring their own cognition. In this respect, think-aloud techniques involving introspection can be very valuable indeed, especially when the questions asked by the teacher/researcher drive the students as deep as possible into their own cognitive processing.

In conclusion, think-alouds can be very powerful tools for understanding how students’ minds process language tasks and learn. A good teacher is ultimately also a researcher, and think-alouds can support him/her very effectively in the effort to obtain as much formative data as possible. Think-aloud techniques do not require a lot of training, are not too time-consuming and can be applied to every single aspect of teaching and learning. More importantly, in my experience, they yield data which one cannot obtain by any other means of enquiry. In this lies their value to any self-reflective teacher. Their use in my own practice has definitely made me a better teacher and, more importantly, has made every single one of the students whom I have involved in think-alouds a more self-reflective and generally metacognizant learner.

Ten reasons for taking foreign language teaching research with a pinch of salt


Every so often, whenever a government, lobby, or educational or business establishment wants to persuade us of the value of changing our current instructional approach or taking on a new initiative, we are presented with ground-breaking findings from some new study which seems to support the case for the envisaged innovation. In what follows, I will concisely list and discuss the main shortcomings common to a lot of educational studies, which seriously undermine their validity and should give us some reasons to be skeptical about their claims.

One small caveat, before we proceed: I am a strong believer in the importance of staying open to learning and innovation, but I also believe that many teachers are insufficiently conversant with research methodology and procedures, which makes them more ‘vulnerable’ to sensationalist research claims. This article attempts to address such knowledge gaps.

  1. Use of verbal reports (e.g. questionnaires, interviews, concurrent/retrospective think-aloud reports)

Scores of books and academic-journal articles have been published which argue against the use of verbal report data to draw any objective conclusions about the phenomenon, hypothesis or educational methodology being investigated. Why? Because such reports do not yield direct, objective data, but merely teachers’ or learners’ subjective interpretations/reconstructions of events, together with their perceptions and opinions about themselves and others – none of which is accessible through objective means. Imagine, for instance, what a group of disgruntled teachers – the negative ‘clique’ in the staffroom – would write in an anonymous survey about their senior leadership team. Objective statements?

Retrospective verbal reports, whereby learners reconstruct a posteriori their thought processes during the execution of a task they have just carried out, are also unreliable, as they will not capture the automatic mental operations (which bypass consciousness) that occurred during performance and will be incomplete (due to loss of information from working memory).

The practice of using three or more different forms of verbal report (triangulation) does strengthen the validity of the data; however, because of the huge logistical effort that using so many data elicitation procedures involves, very few studies, if any, implement it.

What is surprising is that, despite their widely-acknowledged subjectivity and unreliability, verbal reports, especially questionnaires, are still widely used in educational research and their findings make it to the headlines of reputable newspapers and social media and end up seriously affecting our professional practice!

  2. Use of observational data

Observational data are also ‘tricky’, even when the phenomena observed are recorded on video. This is because when researchers analyze a video of a lesson, for instance, they need to code each observed student/teacher behaviour under a category, a label (e.g. ‘recast’, ‘request for help’, ‘explicit correction’, ‘critical attitude’, etc.). These categories can be quite subjective and are vulnerable to bias or manipulation on the part of the researcher. Once a coding scheme has been created, the subjectivity issues are compounded by the fact that – if more lessons were recorded – that scheme must be applied in the analysis of every single video. How can one be sure that the coding scheme is used objectively and correctly?

The issues arising from the subjectivity of ‘coding’ can be more or less effectively addressed by having several independent coders work on the same video (inter-coder reliability procedures). However, this being time-consuming, few studies do it, and when they do, they tend to use only two coders, because it is easier to resolve any disagreement.
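To make the inter-coder reliability point concrete, here is a minimal Python sketch (not from any study cited here; the lesson-event labels and data are invented) computing Cohen’s kappa, a standard chance-corrected agreement statistic, for two coders who have labelled the same ten lesson events:

```python
# Minimal sketch: Cohen's kappa for two coders labelling the same lesson events.
# The event labels below are invented for illustration.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Agreement expected by chance, given each coder's label frequencies
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

coder_1 = ["recast", "recast", "explicit correction", "request for help",
           "recast", "critical attitude", "recast", "explicit correction",
           "request for help", "recast"]
coder_2 = ["recast", "explicit correction", "explicit correction",
           "request for help", "recast", "recast", "recast",
           "explicit correction", "recast", "recast"]

print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.52: moderate agreement only
```

A kappa close to 1 would indicate near-perfect agreement; middling values like the one above suggest the coding scheme is being applied inconsistently, which is precisely the problem raised here.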

  3. Opportunistic sampling

To test the superiority of one methodology over another, you need to compare two groups which are homogeneous. The group receiving the treatment will be your experimental group and the other your control group. The sample should be randomized and like should be compared with like. However, in educational research it is difficult to randomize and to find two schools or groups of individuals that are 100% equivalent.

Comparing the effect of an independent variable (e.g. a new instructional approach) across ten schools in the same Local Education Authority is not a valid procedure, because it presumes that they are identical at some level. No two schools are the same – even if they are located in the same neighbourhood – and the initiative being tested (the independent variable) will be affected by many contextual and individual differences.
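By way of contrast, here is what genuine random assignment looks like in a minimal sketch (the cohort of 60 students is entirely hypothetical); opportunistic sampling skips precisely this step:

```python
# Minimal sketch: random assignment of a hypothetical cohort of 60 students
# to experimental and control groups, the step opportunistic sampling omits.
import random

students = [f"student_{i:02d}" for i in range(1, 61)]
random.seed(42)               # fixed seed only so the example is reproducible
random.shuffle(students)
experimental = students[:30]  # receives the new instructional approach
control = students[30:]       # carries on with the existing approach
```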

  4. The human factor

This is possibly the most important threat to the validity of educational research. Every teaching strategy, tool or methodology is going to be affected by the teacher who deploys it, by the learners on the receiving end and by the interaction between the two (the ‘chemistry’). Not to mention the fact that, as I have experienced first-hand, not all the administrators/teachers involved can be relied on to do exactly as instructed by the researchers. Consequently, it is difficult to dissociate the effect of the specific ‘treatment’ the experiment involved from the human factor. This problem is related to, and compounded by, another issue: the ‘researcher effect’.

  5. The researcher effect

When you implement a new initiative or instructional approach, you inevitably sensitize everyone involved to it. You may generate enthusiasm, indifference, anxiety or even resentment in the teacher/student population. The negative or positive emotional arousal in the informants will create an important source of bias. Knowing that you are part of an experiment (and it is very difficult to hide this in educational research) will inevitably affect your behaviour.

  6. The use of multi-trait evaluative scales and other proficiency measures

Studies investigating the impact of a methodology on L2 learner proficiency use multi-trait assessment scales to evaluate students’ performance in speaking and/or writing. Forty years of use of such scales in L2 research have shown that the vast majority of them, when used with students of fairly similar levels of proficiency and applied by more than two independent assessors – and often even with two – do not yield statistically significant inter-rater reliability scores (i.e. the raters do not come up with assessment scores which are close enough to each other to be statistically valid). This has been documented by several studies. Hence, most studies use only two raters or, often, no independent rating at all, thereby undermining the validity of their findings.

In order to be valid (and fair), an assessment scale must yield relatively high levels of inter-rater reliability (as obtained by using at least three independent assessors). We have all experienced disappointment and disbelief on finding out that our predicted-A* A-level students have been awarded ‘B’ or even ‘C’ grades by an examination board. But when one looks at the assessment scales the boards use – vague and ‘sketchy’ as they are – and considers the lack of serious inter-rater reliability procedures, it is not surprising at all.

There are even more problems with other measures used to assess written performance (e.g. T-units, error counts, etc.), which I will not even go into. Suffice it to say that they are very commonly used and that their reliability is highly questionable.
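As an illustration of the inter-rater reliability point above, the following sketch (with invented scores out of 20 for ten essays) correlates two raters’ marks; the correlation coefficient and its p-value are a crude first check of whether the raters are ranking candidates consistently:

```python
# Minimal sketch: a crude inter-rater reliability check for two raters'
# essay scores (out of 20). The scores are invented for illustration.
from scipy import stats

rater_1 = [14, 11, 17, 9, 15, 12, 18, 10, 13, 16]
rater_2 = [12, 13, 16, 11, 12, 14, 17, 9, 15, 14]

r, p = stats.pearsonr(rater_1, rater_2)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A high, significant r is necessary but not sufficient: two raters can
# correlate well while one marks systematically more harshly than the other,
# which is why serious studies also check absolute agreement.
```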

  7. The research design

A typical research design adopted in educational research is the pre-test/post-test design. For instance, imagine a study in which a school tries a new instructional approach with 70 of its 150 year-seven students. Both groups are given a test before the ‘treatment’ and another (similar) test at the end of it, to see if there are any improvements. The researchers find that there are indeed significant improvements. Problem: the second test is only a snapshot of student performance. How do we know that the improvement was not down to that particular test (its type or content) or to other surrounding variables? The truth is that this kind of design is cheaper, logistically more manageable and less time-consuming. A better design would be a repeated-measures design, with several tests throughout the year (which would also control for learner maturation, that is, the extent to which the observed improvements are due to developmental factors rather than to the ‘treatment’).
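A quick simulation (entirely hypothetical numbers, not data from any real study) shows how a pre-test/post-test comparison can ‘detect’ an improvement that is pure maturation: in the sketch below the treatment adds nothing, yet the paired test comes out highly significant:

```python
# Minimal sketch: 70 hypothetical year-seven students improve over the year
# through maturation alone; a naive pre/post comparison still looks 'significant'.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(50, 10, 70)        # pre-test scores
maturation = rng.normal(5, 2, 70)   # developmental gain everyone gets anyway
post = pre + maturation             # the 'treatment' itself contributes nothing

t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.2g}")  # highly significant gain, zero treatment effect
```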

  8. Significance tests

Example: when comparing the essay or oral performance scores obtained by the two groups under study (the experimental and the control group), one usually performs a test (such as a t-test) which compares the means of the scores obtained by the two groups. The test yields a statistic on which the researcher performs a significance test, to verify that the difference between the two sets of scores is unlikely enough to have arisen by chance to count as statistically significant. That will finally tell you whether your treatment has been successful or not, your hypothesis proven or disproven.

However, what is interesting is that not all the significance tests normally used in research will give you the same result; one test may give you a positive verdict whereas four or five others will not. Guess which significance test results researchers normally publish?
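The sketch below (again with invented scores) runs two of the tests commonly applied to the same group comparison; with borderline data the two p-values can land on opposite sides of the conventional 0.05 threshold, which is exactly the cherry-picking opportunity described above:

```python
# Minimal sketch: the same two hypothetical groups compared with two
# different significance tests; the verdicts need not agree.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
experimental = rng.normal(62, 12, 35)  # essay scores after the new approach
control = rng.normal(57, 12, 35)       # essay scores under the old approach

t, p_t = stats.ttest_ind(experimental, control)  # parametric: compares means
u, p_u = stats.mannwhitneyu(experimental, control,
                            alternative="two-sided")  # non-parametric: compares ranks
print(f"t-test p = {p_t:.4f} | Mann-Whitney p = {p_u:.4f}")
# With borderline data one test may dip below 0.05 while the other does not;
# reporting only the 'passing' test inflates the apparent effect.
```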

  9. Lack of transparency

Many studies are not 100% transparent about all of the above, especially when it comes to inter-rater reliability procedures and scores. More often than not, they will not tell you which significance tests they failed; they will only tell you about the one they ‘passed’.

  10. The generalizability of the findings and replicability of the study

This is the most crucial issue of all because, after all, what governments or international agencies do when they quote a piece of research is state that an initiative/intervention has worked in twenty or thirty schools around the country and should thus be implemented in all schools. And if the government happens to be the American or British one, it may spread to other countries, too. The question is: are the schools where the experiment took place – their teachers, administrators, students and other stakeholders – truly representative of the whole country, the whole continent, the whole world? In my experience, more often than not, research findings have low generalizability.

These are only 10 of the 25 reasons I brainstormed prior to writing this article as to why one should be skeptical about much educational research and about any imposed theory of, or instructional approach to, foreign language teaching based on it. The above does not rule out the existence of sound and credible educational research within and outside the realm of foreign language learning and acquisition; there are in fact several examples of it. My main point, ultimately, is that educational research may yield rich and highly informative data; however, such data are often not as reliable and generalizable as they are made out to be by the governments or establishments who use them to support their political or economic agendas.

This article does not intend to incite the reader against change or innovation. Not at all. Its aim is to raise teachers’ awareness of some of the flaws in research design and procedure common to many studies carried out to date. Such awareness may prompt them to look at research in a more ‘savvy’ and discerning way in the future, and to be more selective as to what they take on board and incorporate into their professional practice. Openness to change is a marker of a growth mindset, but the ‘blind’ embracing of any initiative claimed to be supported by unverified ‘research’ is unethical in a profession like teaching, where the cognitive development and welfare of our children are at stake.

In the last thirty years we have witnessed the implementation of great educational initiatives and innovations which have benefited teachers and students (the Key Stage 3 Strategy, for instance, in the 90s, and Assessment for Learning). I have seen others, however (e.g. learning styles and multiple intelligences), which were not only rooted in ‘phony’ theory and research but also, in my view, wasted a lot of teacher and student time.