Think-aloud techniques – How to understand MFL learners’ thinking processes


A fair amount of MFL teachers’ daily frustration relates to their students’ underachievement or apparent lack of progress. On a daily basis you hear your colleagues or yourself complain about student X or Y ‘not getting it’, making the same mistakes over and over again, writing unintelligible essays or speaking with shockingly bad pronunciation. The most dedicated and caring teachers often act on these issues by encouraging the students to revise and work harder; providing them with extra practice and scaffolding it; devising a remedial learning program involving some degree of learner training; and engaging their entourage to get some support. However, something crucial often goes amiss: how does one know what the problem REALLY is?

Yes, observing the students’ behaviour and analyzing their output more frequently and closely than usual does help, but it is not enough to get the full picture, and the teacher usually ends up blaming the usual culprits: ‘laziness’, indolence, lack of motivation, low aptitude for language learning, and so on. But could something else, something less visible that occurs deep inside their brains and that we fail to notice and understand, be the root cause of the observed problem? Could those hidden factors actually determine the alleged ‘laziness’ or ‘lack of motivation’? It is difficult to say by simply asking students questions in a survey or interview or by observing their behaviour in lessons. Hence the importance of ‘getting into our students’ heads’ to probe their minds in search of clues as to what is hindering their performance and progress. But how do we do that?

There are indeed techniques that were developed by social scientists to tackle the limitations of traditional enquiry tools such as observations, questionnaires and interviews. They include a set of research techniques referred to in the literature as concurrent and retrospective think-aloud protocols. These techniques truly allow us to get into our students’ thinking processes and reconstruct the way their brains go about executing the tasks we engage them in during lessons.

Every time I have used these techniques, whether in the context of ‘proper’ research studies (e.g. one funded by OUDES – Oxford University Department of Education, in Macaro, 2001) or in my role as a classroom teacher, I have been amazed at how many of the presumptions I had made about my students’ ways of processing the language were wrong, and at how right my mentor in the field of Learner Strategy Research (OUDES’ Professor Macaro) is when he states that most of our students’ issues stem not from low IQ or language aptitude but from poor learning strategy use.

What are think-aloud techniques?

Think-aloud techniques require informants (your students, if you are a teacher) to verbalize what is happening in their brain (working memory) as they execute a task. In this case we call them concurrent think-alouds. If we ask them to reflect on their thought processes retrospectively – after the task has been executed – we refer to them as retrospective think-alouds. Obviously, concurrent think-alouds cannot be used with speaking tasks, so retrospective think-alouds can be very helpful in investigating our students’ issues in oral language production.

I have questioned the objectivity and validity of these tools for formal ‘scientific’ research in previous blogs; but the use I am advocating here is not aimed at extracting universal truths from data to inform educational policies or changes in pedagogy. Rather, I recommend them as enquiry tools for obtaining qualitative data that help us understand our students’ learning problems. Used in this way, these techniques can be very useful indeed.

Many of the most useful models of language processing were obtained thanks to think-aloud techniques. The most famous of them is surely the Hayes and Flower model of writing, which I discussed in a previous blog on writing processes and which has been, since the 1980s, the most widely used framework for mapping L2-student writers’ cognitive processes. Much current writing pedagogy and research is based on this model. This is a great example of how think-aloud protocols have affected the way we teach.

Pure and hybrid models of application

‘Pure’ concurrent and retrospective think-alouds are carried out with very little intervention on the part of the teacher/researcher. The student is asked to execute a task and, whilst s/he verbalizes his/her thoughts (in a stream-of-consciousness fashion), the teacher sits somewhere behind him/her in order not to be seen (so as to minimize any possible researcher effect). I used this technique mostly for writing, in an attempt to understand what caused my students’ errors and compensation strategies (avoidance, coinage, etc.), to gain an insight into their use of resources (e.g. dictionaries) and to find out how I could improve that. The reader should note that the presence of the teacher/researcher is important because he/she may want to note down key moments in the think-aloud about which he/she needs to know more, in order to ask further questions retrospectively. For this purpose it may be useful to film the student and ‘show’ him/her – on video – the point(s) in the think-aloud you want to ask about, as a memory retrieval cue.

In the ‘hybrid’ think-aloud model, the teacher steps in and probes with questions such as ‘Why are you doing this?’, ‘Can you tell me more about this?’, ‘Why are you making this assumption?’. I find this very useful in order to tap into not just the current processes being verbalized but any other process happening concurrently which is not being verbalized. Obviously, one cannot guarantee that the information the student provides about cognitive processes that are not in his/her focal awareness will be objective and reliable, but the process will yield a lot of useful data. In one of my studies, for instance, which focused on writing skills and error-making, had I not interrupted the students’ think-aloud / ‘stream of consciousness’ I would not have found out why they did not notice the mistakes they were making, even though they had the declarative knowledge necessary to correct them.

Two or more of the above techniques can be used synergistically to support each other, the second set (the ‘hybrid’ model) usually following the first. This synergy usually yields richer and more reliable data.

Of all of the above think-aloud techniques, retrospective think-alouds are the least reliable, as the students are likely to have ‘lost’ (forgotten) most of the information in their subsidiary awareness as well as part of the information in their focal awareness. However, as mentioned above, they are the only way to explore our students’ thinking processes when investigating learner speaking. To maximize their power, one should implement the tactic briefly touched upon earlier: using a video or audio recording of the speaking session as a retrieval cue for the student’s recall of his/her own processes.

A very useful tip: before implementing any of the above techniques, one should model the to-be-used think-aloud technique to the students and give them a chance to practise it using a warm-up task similar to the one they are going to be engaged in.

Other benefits of think-aloud techniques

I have touched upon the benefits of using think-alouds in terms of enhancing our understanding of our learners, which will inform our teaching of the target student or group of students. However, there are other benefits which have the potential to impact our students’ learning more directly: the metacognition-enhancing effect of involving them in reflecting on their own learning. Through think-alouds it is not just us who ‘get into’ their heads; it is also, and above all, them exploring their own cognition. In this respect, think-aloud techniques involving introspection can be very valuable indeed, especially when the questions asked by the teacher/researcher drive them as deep as possible into their own cognitive processing.

In conclusion, think-alouds can be very powerful tools for understanding how students’ minds process language tasks and learn. A good teacher is ultimately also a researcher, and the formative data he/she can get from think-alouds can support his/her teaching very effectively. Think-aloud techniques do not require a lot of training, are not too time-consuming and can be applied to every single aspect of teaching and learning. More importantly, in my experience, they yield data which one cannot obtain by any other means of enquiry. In this lies their value to any self-reflective teacher. Their use in my own practice has definitely made me a better teacher and, more importantly, has made every single one of the students I have involved in think-alouds a more self-reflective and generally metacognizant learner.

How to exploit the full learning potential of an L2 song in the language classroom



As Robert Lafayette wrote in his 1973 article ‘Creativity in the foreign language classroom’, ‘songs are often sung the day before a vacation, or on a Friday afternoon or when we have a few extra minutes’.

This resonates with my experience, and there is nothing majorly wrong with it – why not have a bit of fun for fun’s sake every now and then, especially when your students are tired or in festive mode? It can help create a nice buzz in the classroom and a sense of conviviality, and breathe a bit of L2 culture into our lessons. And, who knows, some incidental learning might actually happen, with minimal preparation.

On the other hand, research does show that simply singing along to a song mindlessly, whilst enjoyed by most students, doesn’t really do much in terms of learning enhancement. For instance, Carlsson (2015) found that, although the vast majority of her informants enjoyed singing songs, far from benefiting from this activity in terms of pronunciation, some of them had actually got worse in some problematic areas (e.g. ‘th’ in English) by the end of her experiment, whilst the majority made no progress at all.

In my 30 years’ experience I have observed many lessons in which classic or contemporary songs were used. However, I have rarely come out of those lessons feeling that the full learning potential of the song had been exploited. In fact, I often felt that very little was learnt at all in terms of the lyrics’ key vocabulary or structures.

This article attempts to provide a principled approach to the ‘linguistic’ exploitation of a song. It should not be taken, of course, as the only or best possible blueprint for exploiting a song as a learning enhancement tool. I am sure there are many other ways that I have not explored yet.

A step-by-step framework for the exploitation of a song

Step 1: select the ‘right’ song

These are the most important principles one should heed when selecting a song for optimal learning enhancement:

(1) Comprehensible input – choose a song which you believe is linguistically accessible – with some support – to the target students.

(2) Flooded input – the song will ideally be ‘flooded’ with the target linguistic features, be they sounds, lexical items, and/or grammatical/syntactic structures. This is key.

(3) Linguistic relevance – select a song which is relevant to the linguistic goals of the curriculum, i.e. one that contains lexis and grammar related to the learning outcomes of the lesson and/or unit in hand. Ideally the song should introduce, model, recycle or reinforce linguistic or cultural features you have been teaching or are planning to teach. It shouldn’t be a ‘pedagogic island’, as often happens, exposing students to language or other information that is not going to be revisited later on.

(4) Socio-cultural relevance and sensitivity – by ‘cultural’ here I do not mean the culture of the country, but rather relevance to the sub-culture the students ‘belong’ to. For instance, if the group you are teaching is mainly composed of teenage rugby players ‘with an attitude’, you would not choose a romantic song stigmatized in their sub-culture as a ‘girly’ song. By the same token, one must be careful not to choose a song whose lyrics and/or official YouTube video contain culturally insensitive material.

This is crucial when working in an international school or other multi-ethnic environments. It may be useful, before using a song in class, to play it to two or three students of the same age as and similar ability to the ones you are going to work on that song with. Their feedback might be a lifesaver!

(5) Surrender value – the song should contain vocabulary which is worth learning, i.e. that has high surrender value. This will include mainly high frequency vocabulary and phrases;

(6) Availability of relevant multimedia resources – it is practical to choose a song whose lyrics, L1 translation and video are available online for free. The lyrics available on the internet should always be checked thoroughly, as they more often than not contain spelling errors or small omissions.

(7) Memorability – The following are factors that usually affect the memorability of a song:

  • the lyrics are repetitive and patterned;
  • the music is ‘catchy‘;
  • it’s packed with sound devices such as alliterations, rhymes and pararhymes;
  • it is distinctive, i.e. there are specific features of the song (and/or in the video that accompanies the song) which make it stand out;
  • its content is socio-culturally and/or affectively relevant to your students. In this regard, much consideration must be given to gender differences;
  • the speed and enunciation must allow the students to clearly hear the words;
  • the song tells a ‘story’ which is fairly linear and predictable;
  • the linguistic content is high frequency, which means that the chances of the students having encountered those words previously, and of encountering them again in the future, are higher.

Step 2 – Pre-listening activities for schemata activation

In order to activate the learners’ prior knowledge and the language related to the themes and semantic areas the song taps into, the learners should be engaged in a series of tasks which, whilst recycling vocabulary they have already processed in previous lessons, engage them in some kind of reflection on the song’s themes. For instance, in a lesson centred on Kenza Farah’s song ‘Sans jamais se plaindre’, which deals with the theme of parents’ daily sacrifices for their children, in the first activity I staged (see my worksheet here) I asked the students to:

  1. Brainstorm and write down in French, working in groups of two, five sacrifices parents usually make for their children;
  2. Think about three people in their own families and list the sacrifices they have made in recent years to help them;
  3. List the qualities of the ideal father, mother and sibling.

Just as in this lesson, this kind of activity should, in its execution, elicit language which is very relevant or even equivalent to that found in the song.

In this phase you may also want to develop the all-important desire to listen. You could do this by:

(1) displaying a slideshow featuring photos of the singer and captivating images you will have found on the web which refer to the content of the song;

(2) showcasing lines of the song which are shocking, funny, witty or ‘cool’;

(3) playing on the classroom screen the most enticing parts of the song’s official videoclip on silent;

(4) (if they don’t know the singer) relating interesting facts about them that may arouse their curiosity;

etc.

Step 3 – Pre-listening activities to facilitate bottom-up processes during the in-listening phase

At this point, the teacher may want to focus on facilitating the students’ understanding of the text through activities which involve working on the key lexis included in the song’s lyrics. These activities will involve semantic analysis of that lexis through split-sentence activities, gapped sentences, odd-one-outs, matching exercises, etc. In the example given above, for instance, I took key sentences from the lyrics and recycled them (see the second page in the hand-out) through five vocabulary-building, reading-skills and semantic/syntactic analysis activities which focused on lexis, morphology and syntax.

Step 4 – Listening to the song for pleasure

You should let the students listen to the song for pure enjoyment the first time around; only then should you ask them to do any in-listening tasks.

Step 5 – Recognizing and noticing

Get the students to listen to the song again. This time ask them to note down any words they recognize and any words they don’t know but noticed (maybe because they kept recurring) – spelling doesn’t matter.

After the students have jotted down the words, get them to pair up with one or more peers to compare notes.

You could then ask the students to throw the words at you and list them on the board, explaining their meaning in the L2 or translating them into the L1.

Finally, ask them what they think the song is about (this can be done in the L1 with less proficient groups).

Step 6 – Promoting selective attention and further noticing

At this stage the learners are given a gapped version of the lyrics of the song, with the missing words provided alongside. In order not to overload the students, I usually place a gap every two or three lines.

You will gap the words or chunks you want the students to pay particular attention to, because of their linguistic, semantic or cultural value.

If I want to emphasize a specific sound pattern, I usually draw the students’ attention to it by removing words that rhyme, chime or alliterate with one containing that sound. After they have listened to the song three or four times, show them the complete version of the lyrics on the screen and ask them to check their answers and fill in any remaining gaps.
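For teachers who are comfortable with a little scripting, a gapped handout of this kind can also be generated automatically. The snippet below is only a rough sketch: the lyrics, the gap interval and the choice of which word to blank out are placeholder assumptions rather than a prescribed tool.

```python
import random

def gap_lyrics(lyrics, every_n_lines=3, min_length=4, seed=0):
    """Blank out one longer word every n lines; return the gapped text and a word bank."""
    random.seed(seed)  # fixed seed so the same worksheet is produced every time
    gapped, bank = [], []
    for i, line in enumerate(lyrics.splitlines()):
        candidates = [w for w in line.split() if len(w.strip(",.!?;:")) >= min_length]
        if i % every_n_lines == 0 and candidates:
            target = random.choice(candidates)
            bank.append(target.strip(",.!?;:"))
            line = line.replace(target, "_______", 1)
        gapped.append(line)
    return "\n".join(gapped), bank

# Placeholder lyrics – paste in the real ones
song = "Sans jamais se plaindre elle avance\nElle se bat pour ses enfants\nChaque jour elle recommence"
worksheet, word_bank = gap_lyrics(song, every_n_lines=2)
print(worksheet)
print("Word bank:", ", ".join(sorted(word_bank)))
```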

Step 7 – Working on specific phonemes

Explicit learning

After the students have filled in all the gaps, produce or play a recording of a specific sound that you know they struggle with (e.g. [œ]), then play the song again and ask them to highlight/circle the words which contain that sound. Do the same with other key phonemes, making sure that they use a different highlighting/coding system for each sound. Then play the song again, asking them to focus on the specific letters they highlighted.

Inductive learning

Write on the whiteboard two or three combinations of letters (e.g. diphthongs) or syllables which recur a few times in the target song. Then ask your students, working in pairs, to underline all of the occurrences of the target item in the lyrics of the song. Finally, ask them to listen to the song and work out how those letter combinations/syllables are pronounced in the target language.
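If you want to prepare this quickly, a couple of lines of code can list every word in the lyrics containing the target letter combination. Again, this is just an illustrative sketch; the example line and the grapheme ‘eu’ are my own placeholders.

```python
def words_containing(lyrics, grapheme):
    """List every distinct word in the lyrics containing the target letter combination."""
    return sorted({word.strip(",.!?;:").lower()
                   for word in lyrics.split()
                   if grapheme in word.lower()})

# Placeholder line of lyrics and target grapheme
print(words_containing("Elle pleure seule le soir, loin de son coeur", "eu"))
# -> ['coeur', 'pleure', 'seule']
```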

Step 8 – Working on segmentation skills

Segmentation, i.e. the ability to identify word boundaries, is a key micro-listening skill.

1. Break the flow – Give your students a version of a portion of the song’s lyrics (e.g. the first two stanzas) from which you have eliminated the spaces in between the words (a quick way of preparing this is sketched after this list). Their task is to listen to the song and mark with a line the breaks you deleted;

2. Spot the intruder – insert as many small function words (e.g. articles and prepositions) as you can in between the words in the lyrics and ask the students to delete the ones they don’t hear when they listen to the song;

3. Complete the beginning / endings – delete the beginnings and/or the endings of every single word in a stanza / section of the target song. The students will have to complete the ‘mutilated’ words.
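As promised above, here is a minimal sketch of how the ‘Break the flow’ handout could be produced automatically; the stanza below is a placeholder and any lyrics pasted in would work the same way.

```python
def break_the_flow(stanza):
    """Remove the spaces inside each line so students must mark the word boundaries while listening."""
    return "\n".join("".join(line.split()) for line in stanza.splitlines())

# Placeholder stanza – replace with the first two stanzas of the target song
print(break_the_flow("Sans jamais se plaindre\nElle se bat pour ses enfants"))
# Sansjamaisseplaindre
# Ellesebatpoursesenfants
```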

Step 9 – Working on general GPC (grapheme-phoneme correspondence)

GPC refers to the print-to-sound correspondence in a language. You could do any of the following activities, depending on your focus:

(1) eliminate all consonants or vowels from a few words or even lines of the song (a quick way of preparing this is sketched after this list);

(2) eliminate specific syllables;

(3) jumble up the letters in specific words;

(4) split words in half (one or two per line max);

(5) (in French) underline the endings of specific words and ask your students to identify which letters are silent;

(6) write a few words on the whiteboard and ask your students to listen to the song and spot as many words as they can in the song that rhyme with them;

etc.
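By way of illustration, activity (1) above (eliminating all the vowels) can be prepared in seconds with a short script. This is a sketch only, and the vowel set is an assumption geared to French; it would need adjusting for other target languages.

```python
import re

def strip_vowels(text):
    """Replace every vowel (including common French accented ones) with a dot."""
    return re.sub(r"[aeiouyàâéèêëîïôùûœAEIOUY]", ".", text)

# Placeholder line – replace with a few lines from the song
print(strip_vowels("Elle se bat pour ses enfants"))
# -> ".ll. s. b.t p..r s.s .nf.nts"
```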

Step 10 – Reading comprehension: lexical level

At this point, get them to work on reading comprehension through deep processing activities such as the following classics:

(1) Word hunt – the students are provided with a list of lexical items/chunks in the L1 and are tasked with finding their L2 equivalents in the lyrics;

(2) Categories – identify the key semantic fields the key words in the song refer to, e.g. relationships, weather, time, and ask the students to spot and note down as many words as possible in the lyrics under those headings.

(3) Near synonyms/antonyms – give the students a set of phrases/sentences which are near-synonyms of phrases/sentences found in the song and ask them to match them up;

(4) Chronological ordering – provide a list of main points from the song in random order and ask the students to arrange them in the same order as they occur in the song.

Step 11 – Grammar level

When the learners have been acquainted with most of the vocabulary and the intended meaning of the song, it will be easier for them to process the grammar. Thus, at this stage one can get the students to engage with this level of the text by asking them to:

  1. identify specific linguistic features. For instance, give them a grid with metalinguistic labels as headings, e.g. Adjectives, Verbs, Nouns, Prepositions, Connectives, and ask them to find in the lyrics as many words as they can that belong to those categories;
  2. work on grammatical dichotomies within a specific category: regular vs irregular adjectives, masculine vs feminine nouns, imperfect vs perfect tense. The students must note down items from the song that fall under either category;
  3. ask metalinguistic questions (e.g., in French or Spanish: why is an imperfect used here rather than a perfect tense? which form of the verb is ‘comieron’?);
  4. rewrite a set of sentences lifted from the lyrics incorrectly, deliberately making a grammar mistake your students usually make, and ask them to compare each sentence to the original version in the song and correct it.

Step 12 – Syntactic level

  1. Write the literal L1 translation of a few sentences in the song where the L1 sentence structure is different from the L2’s. The task: for the students to notice the differences between the two languages and extrapolate the rule.
  2. Write a sentence structure using shorthand/symbols your students are used to, e.g. SVOCA (subject + verb + object + complement + adverbial) or ‘Time marker + personal pronoun + verb + preposition + article + noun’; your students are tasked with identifying sentences that reflect that structure.
  3. Write a list of subordinate-clause types your students are familiar with on the whiteboard, e.g. time clauses, final clauses, modal clauses, etc. Then ask your students to identify as many clauses as they can in the song which belong to those types.

Step 13 – Meaning building and discourse reconstruction tasks

After all the work on lexis, grammar and syntax, the students should be able to approach the meaning level – arguably the most important – with much confidence. Once you have removed the lyrics and the other worksheets you have used so far, you could stage any of the following classics:

1. Jigsaw reading/listening – give the students a jigsaw version of the song lyrics and ask them, working in groups of 2 or 3, to rearrange it in the correct order. Then the students listen to the song and confirm or rearrange;

2. ‘True or false’ tasks (as reading or listening comprehension)

3. ‘Comprehension questions’ tasks (as reading or listening comprehension)

4.  Summarising content in the L1 or L2

5. Bad translation (pair-work) – provide a translation of the song lyrics which contains a number of fairly obvious mistakes. The students are tasked, under timed conditions, with spotting and correcting the mistakes.

Step 14 – Enjoy the song

Now that you are confident the students understand the meaning of the song and most of the words in it, get them to sing along.

Step 15 – Recycling and consolidating

This step is crucial, as you do want to secure a strong retention of the linguistic material your students have processed. Here are some tasks you could use:

1. Spot the differences – doctor the lyrics by making a few grammatical or lexical changes to the song and ask the students to identify them. The students will have no access to the original text; they will have to do this from memory.

2. Gapped lyrics – the students are tasked with filling the gaps from memory.

3. Disappearing text – The teacher writes a stanza on the blackboard. Usually the text should contain about 50 or 60 words, but this depends on the ability of the class. She asks a learner or two to read it. Then she rubs out some of the words – it is usually best to rub out function words like a, the, in, of, I, he, etc. at the beginning. Then she asks another learner to read it aloud. The learner must supply the missing words as they read. Then some more words are rubbed out, another learner reads, and this continues until there is nothing at all on the blackboard, and the learners are saying the text from their memory. It is best not to rub out too many words each time so that many learners have a chance to read the text.

4. Mad dictation

Mad dictation is a dictation in which you alternate slow, moderate, fast and very fast pace. This is how it unfolds:

1 – Tell the students to listen to the text as you read it at near-native speed and to note down key words

2 – Tell them to pair up with another student and to compare the key words they noted down. Tell them they are going to work with that person for the remainder of the task.

3 – Read the text a second time. This time read some bits slowly, some fast and some at a moderate pace. The purpose of these changes in speed is to get the students to miss some of the words out as they transcribe.

4 – The students work again with their partner in an attempt to reconstruct the text

5 – Read the text a final time, still varying the speed of delivery.

6 – The students are given another chance to work with their partner.

7 – They are now given 30 seconds to go around the tables and steal information from other pairs.

5. Dictogloss – the students listen to the song twice, each time noting down as many words as they can. Then they pair up with another student and reconstruct the lyrics together.
6. Guided summary – Give them a list of words/chunks taken from the song lyrics and ask them to write a summary of the song including those items.
7. Substitution task – Underline key items in the song lyrics and ask the students to rewrite the song replacing those items creatively but in a way which is grammatically correct and semantically plausible.
8. Deep processing tasks – stage classic vocabulary-building tasks eliciting deep processing, such as odd-one-out, categories, find the synonym/antonym, split sentences, ordering, etc.
9. Quizzes to ascertain how much has been retained in terms of vocabulary, grammar, meaning, etc.

10. Thinking-about-learning tasks – these may include any of the following:

  • Reflecting on the value of using songs for learning – Ask them to reflect on how songs, based on what you have just done with them, can be valuable for language learning, and ask for suggestions on how they could benefit from listening to them independently. You can follow this up by providing them with lists of singers/songs they might enjoy or by giving them the task of finding a French band/solo artist they like (to share with the rest of the class in the next lesson);
  • Noting down what was challenging about the song and the tasks performed;
  • Making a list of the new items learnt and ranking them in order of usefulness for real-life communication or reading comprehension purposes.

Why the reliability of UK Examination Boards’ assessment of A Level writing papers is questionable


Often, our Year 12 or Year 13 students who have consistently scored high in mock exams or other assessments of the writing component of the A Level exam paper do significantly less well in the actual exam. And when the teachers and/or students, in disbelief, apply for a remark, they often see the controversial original grade reconfirmed or, as actually happened to two of my students in the past, even lowered. In the last two years, colleagues of mine around the world have seen this phenomenon worsen: are UK examination boards becoming harsher or stricter in their grading? Or is it that the essay papers are becoming more complicated? Or could it be that the current students are generally less able than previous cohorts?

Although I do not discount any of the above hypotheses, I personally believe that the phenomenon is also partly due to serious issues undermining the reliability of the assessment procedures involved in the grading of A Level papers by external assessors. The main issues relate to:

(1) the assessment rubrics used to grade the essays, which lend themselves, as I intend to show, to subjective interpretation by the assessors;

(2) the way raters form their judgement about the quality of an essay;

(3) the absence of an inter-rater reliability process – which compounds the previous issues.

1. Subjective interpretation of rubrics

1.1 The fuzziness of the terms ‘Fluency’ and ‘Communication’

The interpretation of what ‘fluency’ means is one of the most controversial issues in Applied Linguistics research. If one asks ten teachers what the word refers to, every single one of them will tell you they know. However, when it comes down to articulating what ‘good’ or ‘very good’ fluency involves and to giving criteria to measure it, each and every one of them will provide different criteria and measures. How do we know this? Several studies have been carried out which show that this is the case (Chambers et al., 1995). Moreover, it will suffice to look at how greatly the measures of fluency vary across the plethora of analytical/holistic scales used in applied linguistics research to realize how ambiguous a concept ‘fluency’ is. As Bruton and Kirby (1987) put it:

“The word fluency crops up often in discussions of written composition and holds an ambiguous position in theory and in practice…Written fluency is not easily explained, apparently, even when researchers rely on simple, traditional measures such as composing rate. Yet, when any of these researchers referred to the term fluency, they did so as though the term were already widely understood and not in need of any further explication.” (p. 89)

Yet, rubrics used in the assessment of the writing component of the Edexcel AS level exams use ‘fluency’ as one of the criteria to attain the highest level of Quality of Language. This is taken from the writing assessment rubric on page 32 of the Edexcel specification:

“Excellent communication; high level of accuracy; language almost always fluent, varied and appropriate.”

If scholars and researchers around the world cannot agree on what this concept means, how can teachers be expected to do so? The solution? Edexcel could either eliminate the reference to fluency or provide a clear explanation of what it refers to, so that teachers understand what is expected of their students at the highest level of the Quality of Language scale.

Another issue is the use of the concept of ‘Communication’ in the above example from Edexcel. Communication is a very fuzzy concept, too, as it subsumes quite a wide range of skills as well as linguistic and sociolinguistic features. What does ‘Excellent communication’ actually mean? Here, too, it would be useful for teachers to know what the examiners mean by it and what features would constitute a step up from ‘good communication’ (the criterion which characterizes the previous level).

1.2 What is a ‘simple’ and what is a ‘complex’ structure?  

Researchers have also found that teachers often do not agree on what constitutes a complex structure (Chambers et al., 1995). What appears very complex to one teacher may seem moderately complex or relatively easy to another, depending on their own personal bias and experience. In the light of this ambiguity, one can see how the two top levels of the AQA ‘Complexity of language’ scale lend themselves to subjective interpretation:

  1. Very wide range of complex structures;
  2. A wide range of structures, including complex constructions;

How does one decide with absolute certainty when a structure is ‘simple’ or ‘complex’ or somewhere in the middle? Interestingly, the Edexcel specification lists in the Appendix (pages 72-74) all the structures an A Level candidate is expected to have learnt by the end of the course. Each time I go through them – despite an MA and a PhD in Applied Linguistics and 25 years of foreign language teaching – I find it difficult to draw the line between less and more complex structures. For instance, does Agreement count as a complex grammar structure? Some of my colleagues think it does not, whilst they think the subjunctive does; but I can think of contexts in which agreement rules are applied which the students find more challenging than the deployment of certain subjunctive rules. Moreover, say one produces set phrases (formulaic language) such as ‘Quoiqu’il en soit’ (= whatever the case), which I teach as a ready-made ‘chunk’ to all of my students before I even teach the subjunctive. Does it count as a complex structure, even though the students use it without knowing the grammar ‘behind’ it? And saying ‘One must use one’s professional judgment’ is not good enough, because that is when we legitimize subjectivity!

1.3 When is language ‘rich’?

The Edexcel ‘Range and Application of language’ trait, a subset of the holistic scale used to assess the Research essay, contains the following criteria at the top two levels:

7–8 A wide range of appropriate lexis and structures; successful manipulation of language.

9–10 Rich and complex language; very successful manipulation of language.

We have already dealt with the issue of ‘complex language’. Now let us focus on the adjective ‘rich’. What constitutes ‘rich’ language? How does it differ from ‘a wide range of appropriate lexis and structures’? Although one can sense the difference between the two levels, it is not clear what A2 candidates need to write in their essays for the language to be considered ‘rich’. Again, it would be helpful to obtain from Edexcel clear guidelines as to what constitutes ‘rich language’ (e.g. how many and what kind of idioms one should use) so that one does not go about assigning a grade impressionistically.

1.4 Use of intensifiers in the rubrics to indicate progression

Let us go back to the example above from AQA:

5 (marks) – Very wide range of complex structures;

4 (marks) –  A wide range of structures, including complex constructions;

Considering that the word limit set by AQA is 250 words, how many ‘complex’ structures can an A Level student ‘pack in’ to such a limited space so as to be considered as using a ‘very wide’ range of complex structures? Does the use of 10 different complex structures qualify as a ‘wide’ or a ‘very wide’ range? Does this encourage a student to artificially use as many ‘complex’ structures as possible in order to score a ‘5’, potentially to the detriment of the message he/she is trying to convey?

This issue stems from the overuse in rubrics, very fashionable in this day and age, of incrementally stronger intensifiers to define progression from one level to the next; this looks intuitively right, and maybe it is in theory, but in practice it creates a lot of ambiguity along the way. Here is another example, this time from Edexcel (A Level specification, page 46) and from the essay-organization rubric; the two statements below define the top levels of the taxonomy:

10–12 Organisation and development logical and clear.

13–15 Extremely clear and effective organization and development of ideas

I personally find it difficult to differentiate between ‘clear’ and ‘extremely clear’; when does ‘clear’ become ‘extremely clear’? Moreover, doesn’t ‘effective organization’ mean ‘clear’ organization to most people, ‘clear’ meaning that the text is both cohesive and coherent? I believe that, in order to be fair to teachers and students, Edexcel should provide several ‘extremely clear’ examples of what constitutes ‘clear’ and ‘extremely clear’ organization.

2. The exam format

This is an issue which relates to the Edexcel Research essay component. Let us look at the top descriptors for the scale ‘Understanding, reading and research’:

19–24 Good to very good understanding; clear evidence of in-depth reading and research.

25–30 Very good to excellent understanding; clear evidence of extensive and in-depth reading and research.

Considering that the set word limit is only 240 to 270 words, how can a student provide clear evidence of extensive reading and research? And what is meant by ‘extensive’, anyway?

3. General issues undermining reliability of essay-rater assessment

Research shows that there are issues which compound the problems already highlighted above. These issues have to do with the way essay raters go about grading essays, which has the potential to affect the objectivity of the assessment process.

3.1 The pre-scoring stage

It appears that during the initial reading of an essay, which raters usually carry out at the pre-scoring stage, they are already forming a judgment which does not necessarily map onto the categories in the assessment rubrics. Although they do usually attempt to make their judgment fit the categories when awarding the marks, it is difficult to dispel the positive or negative effect that this initial bias brings to bear on the objectivity of the grading.

3.2 Idiosyncratic focus on specific categories

Some raters seem to focus on specific categories more than others, and the type of category they focus on can vary greatly from rater to rater. One rater, for instance, will focus on grammar, whilst another will concentrate on lexical choice or spelling. This is important, as a rater who focuses his/her attention on grammar may be biased negatively towards an essay with quite a few grammatical mistakes but great content, and be led to be ‘harsher’, even when s/he tries to apply the assessment rubrics objectively.

3.3 Level of engagement with the rubrics during the assessment

Research also shows that some raters engage meticulously with the rubrics so as to apply them as accurately as possible, whereas others give them a much more superficial read and rely on their own impressions. In the light of the first section of this article, an assessor ought to be as meticulous as possible in ensuring they apply the descriptors correctly, even when s/he is experienced in the use of the rubrics. How can one be sure that the rater assessing our students’ essays belongs to the conscientious and meticulous sort rather than the more superficial kind?

Implications for A Level examination boards

The above issues highlight the importance of implementing measures to control for the threats to the reliability of the essay assessment process. Three measures can be undertaken:

(1) As already suggested above, the wording of the rubrics could be changed in order to disambiguate certain statements/criteria. As McNamara (1996, p.118) points out, “the refinement of descriptors and training workshops are vital to rating consistency”.

(2) Examination boards should train their examiners more frequently than they currently do in the use of assessment rubrics;

(3) Most importantly, examination boards should implement multiple-marking procedures whereby each essay is graded by two or more raters, who will, in the event of serious grading discrepancies – and of low inter-rater reliability – engage in a discussion to address the issues which cause them to disagree. As Wu Siew Mei states, in a brilliant study whose date I could not locate but which can be found at http://www.nus.edu.sg/celc/research/books/relt/vol9/no2/069to104_wu.pdf,

“it is […] a good strategy to do multiple ratings where each script is rated by more than one rater and where there is a clear procedure for reconciliation of varied scores. However, such strategies are again limited by manpower availability and time constraints.”

Currently, (3) does not happen. This is a serious flaw in the current assessment procedures of UK examination boards in view of the subjectivity of the assessment scale descriptors they use and of other issues pointed out in the course of this article. Placing the onus of essay rating onto one marker only is unfair and unreliable. UK examination boards should act as soon as possible on this shortcoming, regardless of the extra costs, training and other issues which changing the current system would entail. After all, the grades our students obtain at A level can have huge repercussions on their university applications and their future in general. This consideration should come first.

Why asking our students to self-correct the errors in their essays is a waste of time…

Gianfranco Conti, PhD (Applied Linguistics), MA (TEFL), MA (English Lit.), PGCE (Modern Languages and P.E.) – The Language Gym


In this very concise article I will argue that involving our learners in Indirect Error Correction on its own is an absolute waste of valuable teacher and learner time. By Indirect Error Correction (henceforth IEC), I mean highlighting or underlining the errors in our students’ written pieces (with or without error coding), passing the essay back to the students, who make the corrections, and having them pass it back to us for any necessary amendments. In addition, some of us will ask the students to rewrite the whole essay incorporating the corrections.

It sounds like a very time-consuming activity!

The pedagogic rationale behind this approach seems pretty clear: the students get cognitively involved in the correction process. They are not just the passive recipients of the teacher’s correction but they are actually doing something about it. Moreover, by working on their mistakes they will become more aware of…


Mapping out the foreign language writing process


In this article I take on the very difficult task of illustrating the cognitive processes that take place in the brain of a second language student writer as s/he produces an essay. Why? Because often, as teachers and target language experts, we forget how challenging it is for our students to write an essay in a foreign language. Gaining a better grasp of the thinking processes that essay writing in a second language involves may help teachers become more cognitively empathetic towards their students; moreover, they may reconsider the way they teach writing and treat student errors.

A caveat before we proceed: this article is quite a challenging read which may require some background in applied linguistics and/or cognitive psychology.

 

A Cognitive account of the writing processes: the Hayes and Flower (1980) model

Hayes and Flower’s (1980) model of essay writing in a first language is regarded as one of the most effective accounts of writing available to date (Eysenck and Keane, 2010). As Figure 1 below shows, it posits three major components:

  1. Task-environment,
  2. Writer’s Long-Term Memory,
  3. Writing process.

Figure 1: The Hayes and Flower model (adapted from Hayes and Flower, 1980)


These components are as follows: (1) the Task-environment, which includes the Writing Assignment (the topic, the target audience, and motivational factors) and the text produced so far; (2) the Writer’s Long-term Memory, which provides factual knowledge and skill/genre-specific procedures; (3) the Writing Process, which consists of the three sub-processes of Planning, Translating and Reviewing.

The Planning process sets goals based on information drawn from the Task-environment and Long-Term Memory (LTM). Once these have been established, a writing plan is developed to achieve those goals. More specifically, the Generating sub-process retrieves information from LTM through an associative chain in which each item of information or concept retrieved functions as a cue to retrieve the next item of information, and so forth. The Organising sub-process selects the most relevant items of information retrieved and organizes them into a coherent writing plan. Finally, the Goal-setting sub-process sets rules (e.g. ‘keep it simple’) that will be applied in the Editing process. The second process, Translating, transforms the information retrieved from LTM into language. This is necessary since concepts are stored in LTM in the form of Propositions (‘concepts’/‘imagery’), not words. Flower and Hayes (1980) provide the following examples of what propositions involve:

[(Concept A) (Relation B) (Concept C)]

or

[(Concept D) (Attribute E)], etc.
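For readers who find a concrete representation helpful, here is a minimal, purely illustrative sketch of a proposition as a data structure; the class name and the example triple are my own assumptions, not Hayes and Flower’s notation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Proposition:
    """A pre-verbal idea: concepts plus the relation or attribute linking them – no words yet."""
    elements: Tuple[str, ...]

# e.g. [(Concept A) (Relation B) (Concept C)]
idea = Proposition(("PARENTS", "SACRIFICE-FOR", "CHILDREN"))

# Translating is the process that turns this content into actual language,
# e.g. "Parents sacrifice themselves for their children."
print(idea.elements)
```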

Finally, the Reviewing processes of Reading and Editing have the function of enhancing the quality of the output. The Editing process checks that grammar rules and discourse conventions are not being flouted, looks for semantic inaccuracies and evaluates the text in the light of the writing goals. Editing has the form of a Production system with two IF-THEN conditions:

The first part specifies the kind of language to which the editing production applies, e.g. formal sentences, notes, etc. The second is a fault detector for such problems as grammatical errors, incorrect words, and missing context. (Hayes and Flower, 1980: 17)

In other words, when the conditions of a Production are met, e.g. a wrong word ending is detected, an action is triggered for fixing the problem. For example:

CONDITION 1: (formal sentence)

CONDITION 2: first letter of sentence lower case

ACTION: change first letter to upper case

(Adapted from Hayes and Flower, 1980: 17)

Two important features of the Editing process are: (1) it is triggered automatically whenever the conditions of an Editing Production are met; (2) it may interrupt any other ongoing process. Editing is regulated by an attentional system called the Monitor. Hayes and Flower do not provide a detailed account of how it operates. Unlike Krashen’s (1977) Monitor, a control system used solely for editing, Hayes and Flower’s (1980) device operates at all levels of production, orchestrating the activation of the various sub-processes. This allows Hayes and Flower to account for two phenomena they observed. Firstly, the Editing and the Generating processes can cut across other processes. Secondly, the existence of the Monitor enables the system to be flexible in the application of goal-setting rules, in that through the Monitor any other process can be triggered. This flexibility allows for the recursiveness of the writing process.
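For readers who find code clearer than prose, the sketch below shows what an editing Production of the IF-THEN kind might look like. The two rules are invented examples in the spirit of the Hayes and Flower quotation above, not their actual rule set, and the sketch deliberately ignores the Monitor and the recursiveness just described.

```python
# Each 'production' pairs a CONDITION (a fault detector) with an ACTION (a fix).
# Both rules below are invented illustrations, not Hayes and Flower's actual productions.
PRODUCTIONS = [
    (lambda s: s[:1].islower(), lambda s: s[:1].upper() + s[1:]),            # capitalise the sentence start
    (lambda s: not s.rstrip().endswith("."), lambda s: s.rstrip() + "."),    # add a missing full stop
]

def edit(sentence):
    """Fire every production whose condition is met; in the model, editing can interrupt any other process."""
    for condition, action in PRODUCTIONS:
        if condition(sentence):
            sentence = action(sentence)
    return sentence

print(edit("parents sacrifice themselves for their children"))
# -> "Parents sacrifice themselves for their children."
```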

Hayes and Flower’s model is useful in providing teachers with a framework for understanding the many demands that essay writing places on students. In particular, it helps teachers understand how the recursiveness of the writing process may cause those demands to interfere with each other, causing cognitive overload and error. Furthermore, by conceptualising editing as a process that can interrupt writing at any moment, the model has a very important implication for a theory of error: self-correctable errors occurring at any level of written production are not always the result of a retrieval failure; they may also be caused by a detection failure (a failure to ‘spot’ a mistake). However, one limitation of the model for a theory of error is that its description of the Translating and Editing sub-processes is too general. I shall therefore supplement it with Cooper and Matsuhashi’s (1983) list of writing plans and decisions, along with findings from other L1-writing cognitive research, which will provide the reader with a more detailed account. I shall also briefly discuss some findings from proofreading research which may help explain some of the problems encountered by L2-student writers during the Editing process.

The translating sub-processes

Cooper and Matsuhashi (1983) posit four stages, which correspond to Hayes and Flower’s (1980) Translating: Wording, Presenting, Storing and Transcribing. In the first stage, the brain transforms the propositional content into lexis. Although at this stage the pre-lexical decisions the writer made earlier and the preceding discourse limit lexical choice, Wording the proposition is still a complex task: ‘the choice seems infinite, especially when we begin considering all the possibilities for modifying or qualifying the main verb and the agentive and affected nouns’ (Cooper and Matsuhashi, 1983: 32). Once s/he has selected the lexical items, the writer has to tackle the task of Presenting the proposition in standard written language. This involves making a series of decisions in the areas of genre, grammar and syntax. In the area of grammar, Agreement, Word order and Tense will be the main issues for L1-English learners of languages like French, German, Italian or Spanish.

The proposition, as planned so far, is then temporarily stored in Working Short-Term Memory (henceforth WSTM) while Transcribing takes place. Propositions longer than just a few words will have to be rehearsed and re-rehearsed in WSTM for parts of them not to be lost before the transcription is complete. The limitations of WSTM create serious disadvantages for unpractised writers. Until they gain some confidence and fluency with spelling, their WSTM may have to be loaded up with letter sequences of single words or with only 2 or 3 words (Hotopf, 1980). This not only slows down the writing process, but it also means that all other planning must be suspended during the transcription of short letter or word sequences.

The physical act of transcribing the fully formed proposition begins once the graphic image of the output has been stored in WSTM. In L1-writing, transcription occupies subsidiary awareness, enabling the writer to use focal awareness for other plans and decisions. In practised writers, transcription of certain words and sentences can be so automatic as to permit planning the next proposition while one is still transcribing the previous one. An interesting finding with regard to these final stages of written production comes from Bereiter, Fine and Gartshore (1979), who investigated L1-writers aged 10-12. They identified several discrepancies between learners’ forecasts in think-alouds and their actual writing. 78% of such discrepancies involved stylistic variations. Notably, in 17% of the forecasts, significant words were uttered which did not appear in the writing. In about half of these cases the result was a syntactic flaw (e.g. the forecasted phrase ‘on the way to school’ was written ‘on the to school’). Bereiter and Scardamalia (1987) believe that lapses of this kind indicate that language is lost somewhere between storage in WSTM and grapho-motor execution. These lapses, they also assert, cannot be described as ‘forgetting what one was going to say’, since almost every omission was reported on recall: in the case of ‘on the to school’, for example, the author not only intended to write ‘on the way’ but claimed later to have written it. In their view, this is caused by interference from the attentional demands of the mechanics of writing (spelling, capitalization, etc.), the underlying psychological premise being that a writer has a limited amount of attention to allocate and that whatever is taken up by the lower-level demands of written language must be taken from something else.

In sum, Cooper and Matsuhashi (1983) posit two stages in the conversion of the preverbal message into a speech plan: (1) the selection of the right lexical units and (2) the application of grammatical rules. The unit of language is then deposited in WSTM awaiting translation into grapho-motor execution. This temporary storage raises the possibility that lower-level demands affect production as follows: (1) causing the writer to omit material during grapho-motor execution; (2) leading the writer to forget higher-level decisions already made. Interference resulting in WSTM loss can also be caused by a lack of monitoring of the written output due to devoting conscious attention entirely to planning ahead, while leaving the process of transcription to run ‘on automatic’.

How about editing? Some insights from proofreading research

Proofreading theories and research provide us with the following important insights into the mechanisms that regulate essay editing. Firstly, proofreading involves different processes from reading: when one proofreads a passage, one is generally looking for misspellings, words that might have been omitted or repeated, typographical mistakes, etc., and as a result, comprehension is not the goal. When one is reading a text, on the other hand, one’s primary goal is comprehension. Thus, reading involves the construction of meaning, while proofreading involves visual search. For this reason, in reading, short function words, not being semantically salient, are not fixated (Paap, Newsome, McDonald and Schvaneveldt, 1982). Consequently, errors on such words are less likely to be spotted when one is editing a text concentrating mostly on its meaning than when one is focusing one’s attention on the text as part of a proofreading task (Haber and Schindler, 1981). Missed errors decrease even further when the proofreader is forced to fixate on every single function word in isolation (Haber and Schindler, 1981).

It should also be noted that some proofreading errors appear to be due to acoustic coding. This refers to the phenomenon whereby the way a proofreader pronounces a word/diphthong/letter influences his/her detection of an error. For example, if an English learner of L2-Italian pronounces the ‘e’ in the singular noun ‘stazione’ (= train station) as [i] instead of [e], s/he will find it difficult to differentiate it from the plural ‘stazioni’ (= train stations). This may impinge on his/her ability to spot errors with that word involving the use of the singular for the plural and vice versa.

The implications for language learning are that learners may have to be trained to go through their essays at least once focusing exclusively on form. Secondly, they should be asked to pay particular attention to those words (e.g. function words) and parts of words (e.g. verb endings) that they may not perceive as semantically salient.

Bilingual written production: adapting the first language model

Writing, although slower than speaking, is still processed at enormous speed in mature native speakers’ WSTM. The processing time required by a writer will be greater in the L2 than in the L1 and will increase at lower levels of proficiency: at the Wording stage, more time will be needed to match non-proceduralized lexical material to propositions; at the Presenting stage, more time will be needed to select and retrieve the right grammatical forms. Furthermore, more attentional effort will be required in rehearsing the sentence plans in WSTM; in fact, just like Hotopf’s (1980) young L1-writers, non-proficient L2-learners may be able to store in WSTM only two or three words at a time. This has implications for Agreement in Italian, French or Spanish, in view of the fact that words which are three or four words apart from one another may still have to agree in gender and number. Finally, in the Transcribing phase, the retrieval of spelling and other aspects of the writing mechanics will take up more of WSTM’s focal awareness.

Monitoring too will require more conscious effort, increasing the chances of Short-term Memory loss. This is more likely to happen with less expert learners: the attentional system having to monitor levels of language that in the mature L1-speaker are normally automatized, it will not have enough channel capacity available, at the point of utterance, to cope with lexical/grammatical items that have not yet been proceduralised. This also implies that Editing is likely to be more recursive than in L1-writing, interrupting other writing processes more often, with consequences for the higher meta-components. In view of the attentional demands posed by L2-writing, the interference caused by planning ahead will also be more likely to occur, giving rise to processing failure. Processing failure/WSTM loss may also be caused by the L2-writer pausing to consult dictionaries or other resources to fill gaps in their L2-knowledge while rehearsing the incomplete sentence plan in WSTM. In fact, research indicates that although, in general terms, composing patterns (sequences of writing behaviours) are similar in L1s and L2s there are some important differences.

In his seminal review of the L1/L2-writing literature, Silva (1993) identified a number of discrepancies between L1- and L2-composing. Firstly, L2-composing was clearly more difficult. More specifically, the Transcribing phase was more laborious, less fluent and less productive. Also, L2-writers spent more time referring back to an outline or prompt and consulting dictionaries, and they experienced more problems in selecting the appropriate vocabulary. Furthermore, L2-writers paused more frequently and for longer, which resulted in L2-writing occurring at a slower rate. As far as Reviewing is concerned, Silva (1993) found evidence in the literature that in L2-writing there is usually less re-reading of and reflecting on written texts. He also reported evidence suggesting that L2-writers revise more, before and while drafting, and in between drafts; however, this revision was more problematic and more of a preoccupation. There also appears to be less auditory monitoring in the L2, and L2-revision seems to focus more on grammar and less on mechanics, particularly spelling. Finally, the text features of L2-written texts provide strong evidence suggesting that L2-writing is a less fluent process, involving more errors and producing – at least in terms of the judgements of native English speakers – less effective texts.

Implications for teachers

Essay writing is a very complex process which places a huge cognitive load on the foreign language learner’s brain (on its Working Memory, to be precise). This cognitive load is determined by the fact that the L2 student writer has to plan the essay whilst focusing on the act of translating ideas (propositions) into the foreign language. Translating, as I have tried to illustrate, is hugely complex per se for a non-native speaker, let alone when the writer also has to hold in Working Memory the ideas s/he intends to convey. Working Memory being limited in capacity, it is easy to ‘lose’ one or the other in the process and equally easy to make mistakes, as the monitor (i.e. the error-detecting system in our brain) receives less activation due to cognitive overload.

Hence, before plunging our students into essay writing, teachers need to ensure that they provide lots of practice in the execution of the different sets of skills that writing involves (e.g. idea generation, planning, organization, self-monitoring) separately. For instance, a writing lesson may involve sections where the students are focused on discrete sets of higher order skills (e.g. practising idea generation; evaluating the relevance of the ideas generated to a given topic/essay title) and sections where lower order skills are drilled (e.g. application of grammar and syntax rules, lexical recall, spelling). Only when the students have reached a reasonable level of maturity across most of the key skills embedded in the models discussed above should they be asked to engage in extensive writing.

Consequently, an effective essay-writing instruction curriculum must identify the main skills involved in the writing process (as per the above model); allocate sufficient time for their extensive practice, contextualized within the themes and text genres relevant to the course under study; and build into the higher order skill practice plenty of opportunities to embed practice in the lower order skills identified above (the mechanics of the language), whilst being mindful of potential cognitive overload issues.

In terms of editing, the above discussion has enormous implications as it suggests that teachers should train learners to become more effective editors through regular editing practice (e.g. ‘Error hunting’ activities). Such training may result in more rapid and effective application of editing skills in real operating conditions as the execution of Self-Monitoring will require less cognitive space in Working Memory. Training learners in editing should be a regular occurrence in lessons if we want it to actually work; also, it should be contextualized in a relevant linguistic environment as much as possible (e.g. if we are training the students to become better essay editors we ought to provide them with essay-editing practice, not just with random and uncontextualized sentences).

In conclusion, I firmly believe that the above model should be used by every language teacher and curriculum designer as a starting point for the planning of any writing instruction program. Not long ago I took part in a conference where a colleague was recommending that the attending teachers give their Year 12 students exam-like discursive essays to write, week in week out, from the very first week of the course. I am not ashamed to admit that I used to do the same in my first years of teaching A levels. The above discussion, however, would suggest that such an approach may be counterproductive; it may lead to errors and to the fossilization of those errors, and inhibit proficiency development whilst stifling the higher metacomponents of the writing process: idea generation, essay organization and self-monitoring.

12 metacognition-modelling strategies for the foreign language classroom


Metacognitive skills are arguably the most important set of skills we need for our journey through life, as they orchestrate every cognitive skill involved in problem-solving, decision-making and self-monitoring (both cognitive and socio-affective). We start acquiring them at a very early age at home, in school, in the playground and in any other social context in which we interact with other human beings. But what are metacognitive skills?

What is metacognition?

I often refer to metacognition as ‘the voice inside your head’ which helps you solve problems in life by asking you questions like:

  • What is the problem here?
  • Based on what I know already about this task, how can I solve this problem?
  • Is this correct?
  • How is this coming along?
  • If I carry on like this, where am I going to end up?
  • What resources should I use to carry out this task?
  • What should come first? What should come after?
  • How should I pace myself? What should I do by when?
  • Based on the criteria I am going to be evaluated against, how am I doing?

The challenge is not only to develop our students’ ability to ask themselves these questions, but also, and more importantly, to enable them to do this at the right time, in the right context and to respond to those questions promptly, confidently and effectively by applying adequate cognitive and social strategies.

How does one become highly ‘metacognizant’?

Let us look at two subjects from an old study of mine, student A and student B, in the examples below. The reader should note that the data below were elicited through a technique called concurrent think-aloud protocol (i.e. the two students were reflecting on the errors in their essays whilst verbalizing their thoughts).

Self-questioning by student A:

Question: What is the problem here?

  • Too many spelling mistakes
  • I must check my essay more carefully with the help of the dictionary
  • I also need to go through it more times than I currently do, I think

Self-questioning by student B:

Question 1: What is the problem in my essay?

  • There are too many spelling mistakes
  • I need to check my essay more thoroughly
  • I rarely use the dictionary; I usually trust my instinct
  • I also need to go through it three or four times

Question 2: What are my most common spelling mistakes?

  • Cognates, I get confused
  • Longer words, I struggle with those, too
  • I usually make most of my mistakes toward the end of the essay
  • I also make mistakes in longer sentences

Question 3: But why in longer sentences?

  • Maybe because I tend to focus on verbs and agreement more than I do on spelling

Both students identify the same problems with the accuracy of their essays. They both start with the very same question, but Student B investigates it further through more self-questioning. In my study, which investigated metacognitive strategies, most of my informants tended to be more like student A; very few went spontaneously, without any prompt from me, as far as student B in terms of metacognitive self-exploration.

How did student B become so highly metacognizant? Research indicates that, apart from genetic factors (which must not be discounted), the reason why some people become more highly metacognizant than others is that that behaviour is modelled to them; in other words, caregivers, siblings and people in their entourage have regularly asked those questions in their presence and have used those questions many a time to guide them in problem solving or self-reflection. I cannot forget how my father kept doing that to me, day in day out, from a very early age: ‘why do you think it is like this?’, ‘how could we fix this?’, ‘why do you think this statement is superficial?’, ‘how can you write this introduction better?’ – he would ask. I used to hate that, frankly, as I would have preferred to just get on with reading my favourite comics or watching TV; but it paid off. The intellectual curiosity, the habit of looking at different angles of the same phenomenon, the constant quest for self-improvement that I eventually acquired were ultimately modelled by those questions.

This is what a good teacher should do: spark off that process by constantly modelling those questions, day in day out, in every single lesson, so as to get students to become more and more aware of themselves as language learners: what works for them and what doesn’t; what their strengths and weaknesses are and what they can do to best address them; how they can effectively tackle specific tasks; what cognitive or affective obstacles stand in the way of their learning; how they can motivate themselves; how they can best use the environment, the people around them, internet resources, etc. in a way that best suits them.

Twelve easy steps to effective modelling of metacognition-enhancing questioning

But how do we start, model and sustain that process? There are several approaches that one can undertake in isolation or synergistically. The most effective is Explicit Strategy Instruction, whereby the teacher presents to the students one or more strategies (e.g. using a mental checklist of one’s most common mistakes when editing one’s essay); tells the students why it/they can be useful in improving their performance (e.g. to reduce grammatical, lexical and spelling errors); scaffolds it for weeks or months (e.g. asks them to create a written list of their most common mistakes to use every time they check an essay produced during the scaffolding period); then phases out the scaffolding and leaves the students to their own devices for a while; finally, at the end of the training cycle, the teacher checks, through various means, whether the target strategy has been learnt or not.

The problem is, with two hours’ teacher contact time a week, doing the above properly is a very tall order, and the learning gains in terms of language proficiency may not justify the hassle. I implemented a Strategy Instruction program as part of my PhD study; it was as effective as it was time-consuming, and I could afford it because I was a lecturer on a 14-hour timetable. Would I recommend it to a full-time teacher in a busy UK secondary school? Not sure… So what can we do to promote metacognitive skills in the classroom?

There are small but useful steps we can take on a daily basis which can help, without massively adding to our already heavy workload. They involve more or less explicit ways of modelling metacognitive or metacognition-enhancing self-questioning. Here are twelve of the 41 strategies I brainstormed before writing this article.

  1. At the beginning of each lesson, after stating the learning intentions, ask the students how what they are going to learn may be useful/relevant to them (e.g. ‘Why are we learning this?’, ‘How is this going to help you be better speakers of French?’)
  2. Before starting a new activity ask the students how they believe it is related to the learning intentions; what and how they are going to learn from that activity (e.g. ‘Why are we doing this?’);
  3. On introducing a task, give an example of how you would carry out that activity yourself (whilst displaying it on the interactive whiteboard/screen) and take the students through your thought processes. This is called ‘think-aloud’ in that you are verbalizing your thought processes, including the key questions that trigger them (e.g.: I want to guess the meaning of the word ‘chère’ in the sentence “C’est une voiture chère”. I ask myself: is it a noun, an adjective,…? It is an adjective because it comes after the word ‘voiture’, which is a noun. Is it positive or negative? It must be positive because I cannot see ‘pas’ here. Does it look like any English word I know? No, it doesn’t… but I have seen this word at the beginning of a letter, as in ‘Chère Marie’… so it can mean ‘dear’… How can a car be ‘dear’? Oh, I get it: it means expensive. It is an expensive car!)
  4. At the end of a task, ask students to self-evaluate with the help of another student (functioning as a moderator rather than a peer assessor) using a checklist of questions, the use of which you would have modelled through think-aloud beforehand. For the evaluation of a GCSE-like conversation this could include: Were the answers always pertinent? Was there a lot of hesitation? Was there a good balance of nouns, adjectives and verbs? Were there enough opinions? Were there many mistakes with verbs? Etc.
  5. Encourage student-generated metacognitive questioning by engaging students in group-work problem-solving activities. The rationale for working in a group on this kind of activity is that at least one or two of the students in the group (if not all of them) will ask metacognition-promoting questions and, by so doing, will model them to the rest of the group. If this type of activity becomes daily practice (in all lessons, not just MFL ones), the questions it generates might, in the long term, become incorporated into the students’ repertoire of thinking skills. Such activities may include: (1) inductive grammar tasks, where students are given examples of a challenging grammar structure and have to figure out the rules governing that structure (see my activity on French negatives: https://www.tes.co.uk/teaching-resource/inductive-task-on-negatives-6316942); (2) inferring the meaning of unfamiliar words in context; (3) real-life problem-solving tasks: planning a holiday and having to reserve tickets online, find a hotel that suits a pre-defined budget, etc.
  6. Get students, after completing a challenging task, to ask themselves questions like: “What did I find difficult about it?”, “Why?”, “What did I not know?”, “What will I need to know next time?”.
  7. On giving students back their corrected essays, scaffold self-monitoring skills by getting them to ask themselves: “Which of the mistakes I made in this essay do I make all the time?”, “Why?”, “What can I do to avoid them in the future?”
  8. Every now and then (do not overdo this), at key moments in the term, get the students to ask themselves questions about the way they learn. For example, after telling them concisely, with the help of a fancy diagram (e.g. Ebbinghaus’s forgetting curve), how and when forgetting occurs, ask them to reflect on what distracts them in class or at home and what they can do to eliminate those distracting factors;
  9. At the beginning of each school year, to get them into a reflective mood and to gain a valuable insight into their learning habits and issues, ask them to keep a concise reflective journal, to be written at the end of each week, with a few retrospective questions about their learning that week. Avoid questions like “What have I learnt this week?”; focus instead on questions aimed at eliciting problems with their learning and what they or you can do to address them.
  10. Ask them, whilst writing an essay, to review the final draft and ask themselves the question “What is it that I am not sure about?”, highlighting every single item in the essay evoked by that question.
  11. Ask them, at the end of a lesson, to fill in a Google Form or simply write on a piece of paper to hand in to you their answers to the questions: “What activity benefitted me the most today? Why?”
  12. Ask your students to think about the ways they reduce their anxiety in times of stress (e.g. in the run-up to the French end-of-year exams); do they always work? Are there any other techniques they can think of to keep stress at bay? Are there any other techniques ‘out there’ (e.g. on the Internet) that might work better? I have done this with a Year 8 class of mine and I was truly amazed at the amount of effort they put into researching (at home, of course) self-relaxation techniques and at the quality of their findings (which they shared with their classmates).

It goes without saying that there are classes with which one will be able to do all of the above and others where one will be lucky to use one or two of these strategies. It is also important to keep in mind that by over-intellectualizing language learning in the classroom you may lose some of the students; hence one should use those strategies regularly but judiciously and, most importantly, to serve language learning – not to hijack the focus of the lesson away from it. The most important thing is that the students are exposed to them on a daily basis until they are learnt ‘by osmosis’, so to speak.

Metacognitive literacy and explicit instruction

Ideally, the modelling and fostering of metacognitive self-questioning will be but the beginning of a more explicit and conscious process on the part of the teacher who, once s/he believes the students have reached the necessary maturity, will deliver a metacognitive literacy program. By this I mean that, just as in literacy instruction we assign a name to each part of speech or word class (e.g. adjective, noun, etc.), we should also acquaint students with what each metacognitive strategy is called, what purpose it serves and which of the questions modelled to them over the months or years it relates to. Sharing a common language is crucial in any kind of learning, especially when dealing with higher order thinking skills. After all, as Wittgenstein said: “The limits of my language are the limits of my world”.

Once that common language is well established in the classroom, the implicit metacognitive modelling that the teacher has embedded regularly in his/her lessons can be made explicit, and strategy training can be implemented using the framework that I have already outlined above and that I will discuss at greater length in a future post:

1. Strategies are named and presented

2. Strategies are modelled

3. Strategies are practised with scaffolding

4. Strategies are used without scaffolding

5. Strategy uptake is verified by tests and/or verbal reports

Why reading comprehension tasks can be detrimental to L2-reading skills development



The enhancement of reading skills proficiency in foreign language learners has never been as crucial to their linguistic and holistic development as in the 21st century classroom, due to the prominent role that digital technology and the Internet play in their lives. The Internet allows foreign language students easier and cheaper access to masses of information without having to purchase or borrow a book, and allows for a vast variety of choice of topics and text-types.

The goals of reading in the 21st century classroom

With this in mind, in this day and age, more than ever, in their daily practice, curriculum planners, L2-instructional material writers and teachers need to have reading proficiency development in their focal rather than subsidiary awareness, striving, as much as possible, to enable learners to become competent autonomous readers. This means ensuring that they:

  1. WANT to read independently – this implies experiencing…

View original post 1,597 more words

Parallel texts – How they can enhance learning and effectively scaffold reading proficiency development



A few days ago, one of my colleagues approached me in the MFL Department corridor to share a resource he referred to as ‘Parallel texts’, from Steve Smith’s www.frenchteacher.net. ‘This is excellent!’ he said, showing me a worksheet (here: http://frenchteachernet.blogspot.co.uk/2014/05/parallel-reading-texts-for-near.html). This contained a text in French on daily routine on the left-hand side and its translation on the right; some comprehension activities were included, too. ‘My students find them very useful!’ he added.

I hardly needed any convincing as I had used Parallel Texts (French / Italian) myself in the past when working as a translator for the European Union in order to refine my English, day in day out for a few months – and it paid off; my English vocabulary, syntax and awareness of text-specific discourse features grew exponentially as a result. In this article I will show what the potential benefits of using…

View original post 1,207 more words

Five compelling reasons to ‘over-emphasize’ pronunciation at Primary school (or in the early stages of acquisition)

1. To facilitate and ‘speed up’ the development of speaking proficiency – as Levelt’s model of first language speaking production posits (see Picture 1, below), spoken output requires the orchestration of many complex processes, some more complex than others, but all placing serious demands on our brain in terms of processing efficiency. The speech production process starts in the conceptualizer, which generates ideas and ‘sends’ them to the formulator, which translates them into meaningful and grammatically correct sentences; then the monitoring system steps in, checking the accuracy of the output before any sentence is uttered; finally, the articulator orchestrates the use of the larynx, pharynx and mouth organs, whilst the monitoring system oversees speech production every step of the way. This process becomes even harder when the ideas generated by the brain (in the conceptualizer) need to be translated into speech in real time in a foreign language; the whole process slows down considerably – hence the hesitations and pauses in our language learners, even the more advanced ones, when speaking in the target language, and their errors, due more to cognitive overload than to carelessness (unless by ‘carelessness’ we mean lack of effective monitoring).

Picture 1 – Levelt’s model of language production (adapted from: http://homepage.ntlworld.com)


The complexity of the language production process and the challenge of performing it in a foreign language mean that our Working Memory has to juggle simultaneously all of the tasks that speaking involves; this may lead less expert L2 speakers, as already mentioned, to slow down production and make mistakes due to limitations in processing efficiency. Hence, foreign language learners need to master the lower order skills, i.e. effective control over larynx, pharynx and articulators, as early as possible in order to ‘free up space’ in Working Memory for the kind of cognitive processing that happens in the conceptualizer and formulator; this will allow Working Memory to work more efficiently and focus only on the higher order skills, i.e. the negotiation and creation of meaning, the transformation of meaning into target language vocabulary, the application of grammar rules and self-monitoring.

Consequently, if we do not foster the automatization of pronunciation early on, through regular practice, we will delay our students’ development as fluent speakers. I have experienced this often in the past on taking on Year 7 classes which had been given very little pronunciation and/or speaking practice during two years of French/Spanish in Primary. Is this an argument against the Comprehensible Input hypothesis or the Silent Way? Maybe.

2. Fossilization – On the other hand, if we plunge L2 learners into highly demanding oral tasks too soon, without focusing long and hard enough on pronunciation through easy and controlled tasks, their Working Memory will have less monitoring space for sound articulation, as they will focus on the generation of meaning (i.e. what happens in Picture 1’s conceptualizer), with potentially ‘disastrous’ consequences for their pronunciation, in that they will resort to their first language phonological encoding to produce the target language sounds (language transfer). If pronunciation errors due to this issue keep slipping into performance lesson after lesson, oral practice after oral practice, the mistakes will become fossilized and carried over to later stages of proficiency, as fossilized errors tend to be impervious to correction.

3. Phonological encoding affects recall – the more versed learners become in the articulation of the target language sounds, the faster and more effective their retention of target language words in Long-term Memory (encoding) will be. Why? Because of the limitations of the ‘phonological encoding device’ in our brain’s Working Memory, i.e. the articulatory loop (see Picture 2 below). The articulatory loop has limited space (or channel capacity, as psycholinguists call it); hence, if a word is not pronounced ‘fast enough’, the brain may simply not be able to encode it. The faster the articulatory loop ‘pronounces’ the target language, the easier it will be (a) to memorize new words, especially longer and more challenging ones (from a phonological point of view) and (b) for Working Memory to process longer units of language (phrases/sentences) – as the less space words take up in the articulatory loop, the greater the chances that longer units of language can be held in Working Memory at any one time. This speeds up rehearsal in Working Memory and, consequently, uptake as well as retrieval and production.

Picture 2 – adapted from: http://homepage.ntlworld.com/vivian.c/SLA/STM.htm


4. Stigmatizing output may lead to simpler L2-input from L2-native speakers – I am not a fan of Stephen Krashen, but I do admit that he has come up with some great ideas, such as Narrow Listening and the one that relates to the point I am about to make: the fact, that is, that if a beginner/intermediate learner has bad pronunciation, any L2 expert who interacts with him/her may actually send easier L2 linguistic input his/her way, presuming that his/her level of proficiency and/or language aptitude – as signalled by his/her pronunciation – is low. This has negative implications for learning: if you are exposed to simplified input in the early stages of language learning, you may not learn much from it – not enough to bring you to the next level, so to speak.

I tend to agree with Krashen on this one, as I have seen it happen several times. And I will add that, often, in naturalistic environments, something even worse may occur: L2 native speakers may avoid engaging in conversations with L2 non-native speakers with poor pronunciation, for fear of not being understood or not understanding, and of having to go through the awkward process of asking the interlocutor to repeat.

5. The critical age hypothesis – This applies only to the first years of Primary school, when children are 5 to 8 years old (or younger); the age, that is, at which the sensory-motor skills which control the movement of the larynx, pharynx and the articulators are still ‘plastic’, i.e. amenable to modification. After that age, it seems that the child’s receptiveness to pronunciation modelling/instruction diminishes drastically. If this is true, as compelling recent research evidence suggests, it is at this age that learners should be focused on pronunciation and taught ‘phonics’, pretty much as happens in their first language lessons, through fun and engaging speaking activities, lots of singing, listening and computer-assisted phonetic learning.

Five important flaws of GCSE oral tests


Research has highlighted a number of issues with oral testing which examination boards and teachers need to heed, as they can have important implications not just for the way GCSE syllabi are designed, but also for the conduct and assessment of oral GCSE exams as well as for our teaching. The issues which I will highlight are, I suspect, generalizable to other types of oral tests conducted in other educational systems, especially when it comes to the reliability of the assessment procedures and the authenticity of the tasks adopted. They are important issues, as they bring into question the fairness and objectivity of the tests as well as whether we are truly preparing children for real, L2-native-like communication.

Issue n.1 – Instant or delayed assessment?

A study by Hurman (1996), cited in Macaro (2007), investigated how the timing of examiners’ marking affected their assessment of the content and accuracy of candidates’ responses to GCSE role-plays. Hurman took 60 experienced examiners and divided them into two groups: one spent some time reflecting before awarding the mark and the other awarded it instantaneously. Hurman’s findings indicate that waiting a few seconds before awarding the mark seems to result in more objective grading. This, in my view, is caused by the divided attention that listening and focusing on assessment at the same time entails – I have experienced this first-hand many times!

This has important implications for teachers working with certain examination boards. The Cambridge International Examinations board, for instance, prescribes that, at IGCSE, the teacher/examiner award the mark instantaneously, and explicitly forbids the practice of grading candidates retrospectively or after listening to the recording. If Hurman’s findings were to hold true for the vast majority of examiners, examination boards like CIE might have to change their regulations and allow marking to be done retrospectively, when the examiner’s attention is not divided between listening to a candidate’s response to a new question and still marking the previous one – an onerous task!

Issue n.2 – What does complexity of structures/language mean?

This is another crucial issue, which I encountered year in year out when moderating GCSE/IGCSE candidates’ oral performances during my teaching career. Teachers listening to the same recording usually tend to agree when it comes to complexity of vocabulary, but not necessarily when it comes to complexity of grammar/syntactic structures. Chambers and Richards’ (1992) findings indicate that this is not simply my experience; their evidence suggests that there was a high level of disagreement amongst the teachers involved in their study as to what constituted ‘complexity of structures’. They also found that the teachers disagreed in terms of what was meant by ‘fluency’ and ‘use of idiom’ – another issue that I have experienced myself when moderating.

To further complicate the picture, there is, in my view, another issue which research should probe into, and which I invite colleagues who work with teachers of different nationalities to investigate: the fact, that is, that L1-target-language-speaker raters tend to be stricter than L2-target-language-speaker ones. This issue is particularly serious in light of Issue n.5 below.

Issue n. 3 – Are the typical GCSE oral tasks ‘authentic’?

I often play a prank on my French colleague Ronan Jezequel by starting a conversation about the weekend just gone by, asking questions in a GCSE-like style and sequence until, after a few seconds, he realizes that there is something wrong and looks at me funny… Are we testing our students on (and preparing them for) tasks that do not reflect authentic L2 native speaker speech? This is what another study by Chambers (1995) set out to investigate. They examined 28 tapes of French GCSE candidates and compared them to conversations on the same themes by 25 French native speakers in the same age bracket. They found not only that, predictably, the native speakers used more words (437 vs 118) and more clauses (56.9 vs 23.9), but also that:

  1. The French speakers found the topic ‘house/flat description’ socially unacceptable;
  2. The native speakers found the topic ‘Daily routine’ inauthentic and – interestingly – produced very few reflexive verbs
  3. The native speakers used the near future whilst the non-natives used the simple future
  4. The native speakers used the imperfect tense much more than the non-natives
  5. The non-native speakers used relative clauses much less than the French

Are these tests, as the researchers concluded, testing students’ ability to converse with native speakers or their acquisition of grammar?

Issue n.4 – The grammar accuracy bias

A number of studies (e.g. Alderson and Banerjee, 2002) have found time and again that assessors’ perception of grammar accuracy seems to bias examiners, regardless of how much weight the assessment specification gives to effective communication. This issue will be exacerbated or mitigated depending on the examiners’ view of what linguistic proficiency means and on their degree of tolerance of errors: whereas one teacher might find a learner’s communicatively effective use of compensation strategies (e.g. approximation or coinage) a positive thing even though it leads to grammatically flawed utterances, another might find it unacceptable.

Here again, background differences are at play. Mistakes that to a native speaker rater might appear stigmatizing or very serious might seem mild to a non-native rater, or might not even be considered mistakes at all…

Issue n.5 – Inter-rater reliability

This is the biggest problem of all, and it is related to Issue n.2 above: how reliable are the assessment procedures? Many years of research have shown that for any multi-trait assessment scale to be effective it needs to be extensively piloted. Moreover, whenever it is used for assessment, two or more co-raters must agree on the scores and, where there is disagreement, they must discuss the discrepancies until agreement is reached. However, when the internal moderator and the external one – in cases where the recording is sent to the examination board for assessment – do not agree… what happens to the discussion that is supposed to take place to reach a common agreement?
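To make the notion of inter-rater agreement a little more concrete, here is a minimal, purely illustrative sketch (in Python) of Cohen’s kappa, a widely used statistic for quantifying how far two raters agree beyond what chance alone would predict. The band scores below are invented, not taken from any real moderation exercise; the point is simply what the calculation looks like.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: raw agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement, estimated from each rater's own distribution of scores
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# invented band scores (1-5) awarded to ten candidates by two moderators
internal_moderator = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
external_moderator = [4, 4, 3, 3, 2, 5, 4, 2, 4, 4]

print(round(cohen_kappa(internal_moderator, external_moderator), 2))  # ~0.44: only moderate agreement
```

A kappa close to 1 would indicate that the two moderators are applying the scale in essentially the same way; a value like the one above would normally trigger exactly the kind of discussion of discrepancies that, as noted, rarely happens between internal and external examiners.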

Another important issue relates to the multi-trait assessment scales used. First of all, they are too vague. This is convenient, because the vaguer they are, the easier it is to ‘fiddle’ with the numbers. However, the vagueness of a scale makes it difficult to discriminate between performances when the variation in ability is not that great, as happens in a top-set class, for example, with A and A* students. In these cases, in order to discriminate effectively between an 87% and a 90% – which could mean getting an A* or not – research clearly shows that the assessment scale used should contain more than the two or three traits (categories) usually found in GCSE scales (or even A Level ones, for that matter) and, more importantly, should be more fine-grained (i.e. each category should have more criterion-referenced grades). This would hold examination boards much more accountable, but would require more financial investment and work, I guess, on their part.
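As a purely hypothetical illustration of the kind of finer-grained, multi-trait scale argued for above, the sketch below defines five weighted traits, each marked on a six-point criterion-referenced band. The trait names, weightings and band counts are all invented for the sake of the example (they are not taken from any examination board’s specification); the sketch simply shows how such a scale can separate two strong candidates that a two- or three-trait scale would probably lump together in the same top band.

```python
# A hypothetical fine-grained oral mark scheme: five weighted traits, six bands each.
RUBRIC = {
    "communication":       {"weight": 0.25, "max_band": 6},
    "range_of_structures": {"weight": 0.25, "max_band": 6},
    "accuracy":            {"weight": 0.20, "max_band": 6},
    "fluency":             {"weight": 0.15, "max_band": 6},
    "pronunciation":       {"weight": 0.15, "max_band": 6},
}

def percentage_score(bands: dict) -> float:
    """Convert per-trait band scores into a weighted percentage."""
    total = sum(spec["weight"] * (bands[trait] / spec["max_band"])
                for trait, spec in RUBRIC.items())
    return round(total * 100, 1)

# Two strong candidates that a coarse two-trait scale might both place in the top band
candidate_1 = {"communication": 6, "range_of_structures": 6, "accuracy": 5,
               "fluency": 5, "pronunciation": 6}
candidate_2 = {"communication": 6, "range_of_structures": 5, "accuracy": 5,
               "fluency": 6, "pronunciation": 5}

print(percentage_score(candidate_1), percentage_score(candidate_2))  # 94.2 vs 90.0
```

The particular numbers matter far less than the principle: more traits and more bands per trait give the rater more criterion-referenced decisions to make, and therefore more opportunities to discriminate between an 87% and a 90% performance.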