On hinge questions

As part of my professional development cycle for this year, I opted to explore the use of hinge questions in English lessons. My initial interest in the technique arose from reading Dylan Wiliam’s Embedded Formative Assessment. On page 100 of his book, he explains:

‘The hinge is a point at which the teacher checks whether the class is ready to move on through the use of a diagnostic question. How the lesson proceeds depends on the level of understanding shown by the students, so the direction of the lesson hinges at this point.’

For Wiliam, these questions take the form of multiple-choice questions, with very carefully worded distractors among the response choices. An excellent discussion of how hinge questions can be used to correct misconceptions is available on Joey Bagstock’s blog. He makes a sound argument for the utility of multiple-choice questions in improving comprehension and understanding.

One question I had, however, was how effective hinge questions could be in guiding students from sound knowledge of a text to a considered personal appreciation of it. More specifically, I was interested in exploring how an extended response to a hinge question might form a precursor to analytical writing. Certainly, the idea of a ‘hinge’ point in a lesson dependent upon sequential knowledge mastery is a sound one; if you want students to correct a misconception, then a hinge question is an effective way to do so. Multiple-choice questions become useful, therefore, in ironing out comprehension errors, or in identifying when students limit their interpretations of quotes to surface details. But how useful would they be if the aim was not to clarify a misconception, but to force students to defend and justify a position?

This is where my interpretation of a hinge question widens slightly from that put forward by Wiliam and others. For me, hinge questions can serve a dual function: to address potential misconceptions (via a multiple choice question) and to steer a lesson into extended personal interpretations of texts.

An example of how I have used this comes from my current Year 10 English class. At present, we are studying comparative analysis – Montana 1948 and the documentary The Tall Man. In The Tall Man, at the point where the defendant in a trial (a police officer accused of the manslaughter of an Aboriginal man held in police custody) is found not guilty, the film overlays the sound of a pedestrian crossing ticking. It is an interesting stylistic choice which hints at the endemic corruption within the Queensland justice system and the inevitability that it will close ranks and protect its own. The hinge question I set was quite simply:

What might the significance be of the pedestrian crossing sound?

I ran a standard think-pair-share-square routine and asked students to flesh out their response notes at each step of the discussion. Below is an example of the notes made by one of my students, Maddie (who has given me permission to publish her work):

Maddie Qu notes

The challenge then became one of how this discussion (and these exploratory notes) would transfer into her writing. Maddie is ranked near the middle of our Year 10 cohort and in previous years has scored close to the national average for NAPLAN reading and writing. During the next lesson, my class had to write an analytical paragraph under timed conditions, which was to be formatively assessed. They were given fifteen minutes to draw up a brief plan (with paired discussion allowed) and then thirty minutes to write their paragraph. Here is her work:

Maddie V1

Given that this was an initial draft, the quality was rather promising. Near the end of the paragraph you can see the emergence of the hinge question notes, with a good attempt at close technical analysis. Not perfect, but for an ‘average’ student this is encouraging. The following lesson, I ran a whole-class feedback session with integrated DIRT (Directed Improvement and Reflection Time). Here is her improved version, written in 20 minutes:

Maddie rewrite pg1

Maddie rewrite pg2

Now she is approaching the depth I am looking for. While there are mechanical accuracy and expression issues (which she is working hard to overcome), the quality of insight is sharper and, crucially, offers a measure of independent thinking. This alternative use of a hinge question in conjunction with scaffolded discussion has begun to pay dividends for her. It is an approach that I will continue to refine over the remainder of this year.

On using marking codes for VCE English essays

‘That’s him pushing the stone up the hill, the jerk. I call it a stone – it’s nearer the size of a kirk. When he first started out, it just used to irk, but now it incenses me, and him, the absolute berk.’

Carol Ann Duffy, ‘Mrs Sisyphus’

 

Marking has always been for me, I confess, something of an onerous task. Time-consuming, draining, laborious and – to be brutally honest with myself – quite ineffective at advancing learning.

Over the last few years my use of verbal feedback and continual checking for understanding has developed enormously. I frequently use hinge questions, multiple-choice quizzes, thinking routines, exit tickets, and bursts of silent writing where I circulate around the room and add instant comments while pupils write. Through this, I have become reasonably effective at preventing major misconceptions from creeping into essays before they are submitted. That said, marking full VCE essays remains my metaphorical boulder. I have always been guilty of over-marking essays with the usual litany of methods: highlighting errors, comments in the margins, SMART targets at the end of the piece with a summative comment, and so on. In part, this is because it was the default expectation of teachers when I first entered the profession, reinforced by the then expectations of Ofsted (the UK’s national inspectorate).

It took me a long time to fully face up to the one question that every English teacher should ask themselves when they are about to pick up their red pen:

How exactly is that mark on the page going to help the student improve?

Much recent discussion of marking, such as from Jo Facer at Michaela school in the UK, argues eloquently in favour of not writing comments on students’ work and instead prioritising continual whole-class feedback on common errors and weaknesses. There is a great deal of merit (and sanity) in that approach and I have begun to use it increasingly with Years 7-9 classes, often with single-paragraph pieces. I will share my own methodology on this in a future post. (For a more comprehensive review of feedback approaches, the recent EEF report is essential reading.) That said, for VCE English essays I still find it difficult to ‘let go’ of marking and have sought to find a balance between offering a clear indicator of what to address and minimising the annotations themselves. As a consequence, over the last couple of years I have experimented with two different feedback approaches: marking codes and mastery grids.

I first began using marking codes back in 2010, when I was appointed Head of English at Silcoates School in the UK. One of my colleagues, Russell Carey, was Chief Assessor for the Cambridge IGCSE Literature exam paper and explained to me their process of using internal marking codes for assessing pupil scripts. We rolled this out across the department to create a consistent system of annotating work, and pupils appreciated the continuity as they moved through year levels and between teachers. (A brief aside: ask yourself what a tick on a page actually means to a pupil. Do they know? Do all teachers mean the same thing when they tick work?)

When I moved to Australia in 2013, I had to learn the VCE system and appreciate the subtle differences in approaches to essay writing here. Consequently, my well-honed system of marking codes for GCSE and A Level responses was forgotten and I reverted to default mode: lots of generalised comments, ticks and summary targets, with very little class time devoted to re-writes. Having now assessed the VCE English examination for the last two years, I feel more confident in my judgements of quality and have worked out the most common errors I try to correct via annotations. The EEF report offers this summary finding on the need to distinguish between errors (fundamental misconceptions) and mistakes (carelessness) when marking:

‘Careless mistakes should be marked differently to errors resulting from misunderstanding. The latter may be best addressed by providing hints or questions which lead pupils to underlying principles; the former by simply marking the mistake as incorrect, without giving the right answer.’

A link to my new marking codes sheet can be found here:

Marking codes feedback sheet

My workflow when using this is as follows:

  1. Skim-read the essay to quickly spot major errors, then close-read, ticking relevant points and circling SPAG mistakes
  2. After reading, highlight the most pressing 5-6 errors on the piece and assign a code in the margin
  3. Write a ‘one quick win’ target at the bottom of the piece, making the target as specific as possible

Time taken per essay (800-1000 words) = 9 minutes

As I mark the essays, I keep a Word document open and dot point any major conceptual or knowledge problems that form a pattern among the class. An example of a summary sheet I give out can be found here (Note: I am teaching Measure for Measure this year for Text Response). The feedback lesson then runs as follows:

  1. Explain and correct the main conceptual errors to the whole class
  2. Show two examples of the best writing and annotate why they are strong
  3. Give pupils around 20 minutes to read through their essays, process the codes and write their corrections on the essay itself. I circulate and conference with students as they work on this.

The major strengths of this approach, for me, are:

  • Time saved – around 6 minutes per essay (it would ordinarily take me 15 minutes per essay)
  • Efficiency – I no longer write out the same annotations twenty-odd times
  • DIRT – students have to act on the feedback, with dedicated time set aside for this

Marking codes are not without their challenges, however. The main one is that students often struggle to write meaningful corrections due to poor text knowledge or weak expression. It is essential that the codes are as specific as possible, too, since they replace individualised comments (note: I am still very much in the process of refining these). Over time, my VCE classes have got used to the system and have become more willing to think through their errors.

This year, I decided to pilot a mastery grid approach with my Year 12 class to compare it against marking codes. Dylan Wiliam describes one method of using mastery grids in Embedded Formative Assessment, pages 122-127. I have taken the general principle of a mastery grid and adapted it for tracking structural components of essays. You can find a copy of it here:

Mastery grid feedback sheet

The marking process I use is more or less the same as for the codes sheet. This time, however, I am aiming to offer a relative indicator of quality for the major structural and technical aspects of their essays (each element still contains a letter code which I can add in the margins to signal that a correction is needed). Interestingly, the majority of my students prefer this system because they can gauge both how good each essay is overall and how each skill is developing across a series of essays. The major challenge is that, without explicit marking codes on the page, students need to be shown far more ‘worked examples’ of successful and weaker pieces before they can really engage with the self-correction.

Neither of these systems is perfect, and a case could be made that marking codes and mastery grids offer little tangible benefit over traditional annotation and correction. Ultimately, you have to ask yourself: what compromise are you prepared to accept? For me, these methods offer better written feedback than before in less time.

 

If you feel that this post has been helpful or useful in any way, please let me know in the comments section. I would be particularly keen to hear any suggestions for improvement you may have, or any alternative approaches you may use.

Edit: Based on Emily’s comment, I will also include links to my feedback sheets for Argument Analysis. If you use any of my materials, please let me know how well (or badly!) it worked for you.

Argument Analysis Marking Codes Sheet

Argument Analysis Mastery Grid Sheet