Experiments with Visible Learning showing big promise…
This post builds on my last one, which set out a model for assessing without levels using the principles of John Hattie’s Visible Learning philosophy. If you haven’t read that post yet, start there.
The first thing I’d like to share is that my worries about time constraints preventing me from using this model were unfounded. After some terrific Twitter feedback I realised the pre- and post-learning assessments could be set as homework. The students see the point of them and so approach them with more focus than they do a traditional homework.
The pre-learning assessment results have significantly impacted my teaching. Below are the results from a pre-learning assessment I did with a Year 9 class at the start of a topic on area and perimeter. These results changed how I approached the teaching of the whole topic. Note the colour coding: there were three lessons I didn’t need to teach at all, as the students remembered that content from KS3. The lessons I’ve coloured blue were taught using collaborative learning strategies (Kagan structures etc.); half the students already knew how to do them, so I used those students to teach the others (reciprocal teaching is a big yes from Hattie). The yellow lessons were taught using direct instruction methods (not dirty words in Hattie’s world). This is high-quality formative feedback genuinely leading my planning.
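For anyone who wants the decision rule pinned down, here’s a minimal sketch of that triage in Python. The objective names, pass rates and thresholds are entirely illustrative (my real cut-offs were judgement calls, not fixed numbers):

```python
# Hypothetical sketch of the pre-assessment triage described above.
# Objective names, pass rates and thresholds are illustrative only.

def triage(pass_rate: float) -> str:
    """Map the proportion of students who already met an objective
    to a teaching approach (the colour coding in the results table)."""
    if pass_rate >= 0.9:
        return "green: skip - securely retained from KS3"
    if pass_rate >= 0.5:
        return "blue: collaborative / reciprocal teaching"
    return "yellow: direct instruction"

# Example pre-learning assessment results (fraction of the class succeeding)
pre_assessment = {
    "perimeter of rectangles": 0.95,
    "area of triangles": 0.55,
    "area of sectors": 0.10,
}

for objective, rate in pre_assessment.items():
    print(f"{objective}: {triage(rate)}")
```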
Then came the post-learning assessment after the lessons:
The first thing that struck me was how poorly students had retained their knowledge from the sectors lessons. ‘Know thy impact’ is Hattie’s mantra, and this showed me, brutally, how poor my impact had been in those lessons. Bad for the ego, but great for an objective reflection on how well you’ve taught things. It also confirmed my long-held belief that using work in students’ exercise books to make APP-style judgements about learning progress is totally unreliable. The students’ books were full of beautiful sector calculations, all well presented and accurate. By looking at their books you would conclude their ‘learning’ of sectors was as good as their ‘learning’ of all the other topics. It wasn’t. They didn’t learn the sectors work well at all, despite what their books would tell you.
This naturally forced me to ask: why was their learning from the sectors lessons so poorly retained? I considered various possibilities, such as:
- was it the quality of my explanation?
- was it the lesson structure?
- were my activities incorrectly pitched?
- had I overloaded their working memory during my explanation?
- had I rushed this topic and not used AFL correctly to tell me when to move on?
There could, of course, be endless other explanations, but I concluded with confidence that the way I built up the explanation was at fault. I went into the lesson with the hook ‘How many ghosts can Pacman eat?’ to get pupils thinking about what proportions of circles made up the compound shapes of Pacman and the ghosts. From there I went straight into the sector formula. In hindsight that step was too quick: the students likely didn’t see the link, and so were confused when working with the formula. I think using a context that early ‘fogged’ their focus on learning how to use the formula and where it came from.
I then retaught the lesson taking a different approach (talking about fractions of circles without angles first, then developing how angles can be used to describe those fractions), and progress was clearly much better on a summative check a couple of days later. The alternative explanation had worked.
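For anyone wanting the mathematical detail, the retaught build-up amounted to something like the following (a sketch from memory, not a lesson transcript): start with simple fractions of a circle, then introduce the angle as the way to describe the fraction.

```latex
% Step 1: fractions of a circle, no angles yet.
\[
\text{quarter circle: } A = \tfrac{1}{4}\pi r^2, \qquad
\text{third of a circle: } A = \tfrac{1}{3}\pi r^2
\]
% Step 2: an angle of \theta degrees at the centre picks out the
% fraction \theta/360 of the circle, giving the sector formula.
\[
\text{fraction of circle} = \frac{\theta}{360}
\quad\Longrightarrow\quad
A_{\text{sector}} = \frac{\theta}{360}\,\pi r^2
\]
```

The point of the reorder was that the formula now emerges from fractions the students already trust, before any context is layered on top.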
The yellow objectives in the table were hit regularly during starters over the following week to ‘iron out’ the final few issues a small number of students were still having with them.
It was naturally disappointing to see that students hadn’t retained as much as in-class AFL told me they had. It hurts the pride somewhat, but once you get over that and accept this is the impact you’ve had, you feel enlightened and know exactly where to focus your intervention efforts. I feel closer to the ‘learning pulse’ than ever before and can ‘visibly see’ (Visible Learning!) what I have and haven’t taught well. I felt empowered to target my next few lessons to plug the gaps, knowing exactly where they were.
The biggest revelation I’ve had from this so far is that in-class AFL (mini-whiteboards etc.), even when it uses summative-style questions, does not tell you what has gone into students’ long-term memory. It tells you that something has gone into working memory or long-term memory, but not which; it doesn’t tell you what they’re going to retain. The only way to find out what went into long-term memory is to assess summatively a few days later. In-class AFL is essential for checking understanding that day and directing lessons at key hinge points, but it is no substitute for summative assessment some time after the lesson to find out what was ‘learned’ rather than what was just ‘understood and copied’. You need both. Making progress judgements by looking in exercise books is unreliable for the same reason.
My pride was bruised by the post-learning assessment above, but the more I live ‘Visible Learning’, the more I understand what Hattie describes as high-impact practice. I have far more clarity on what students have learned than ever before. I know when lessons have gone well, and I know when they’ve gone poorly despite my ego telling me otherwise. I feel like I’ve been teaching blind until now! I’m ‘plugged in’ to the ‘real learning’ and make fewer assumptions: I teach based on what students know rather than what I assume they know. It’s made me a more reflective and more objective practitioner.
There’s certainly something in this Visible Learning!
The next step is to trial this with all my KS4 classes next year, to see whether the model works across the attainment spectrum in a sustainable way.