Experiments with Visible Learning showing big promise…

This post builds on my last one about a model for assessing without levels using the principles of John Hattie’s Visible Learning philosophy. Make sure you’ve read my last post before this one.

The first thing I’d like to share is that my worries about time constraints preventing me from using this model were unfounded. After some terrific Twitter feedback I realised the pre- and post-learning assessments could be set as homework. The students see the point in them and so approach them with more focus than they do a traditional homework.

The pre-learning assessment results have significantly impacted my teaching. Below are the results from a pre-learning assessment I did with a year 9 class at the start of a topic on area and perimeter. These results changed how I approached the teaching of the lessons on this topic; note the colour coding. There were three lessons I didn’t need to teach at all, as the students remembered those from KS3. The lessons I’ve coloured blue were taught using collaborative learning strategies (Kagan structures etc.): half the students knew how to do them, so I used these students to teach the others (reciprocal teaching is a big yes from Hattie). The yellow lessons were taught using direct instruction methods (not dirty words in Hattie’s world). High-quality formative feedback genuinely leading my planning.

[Screenshot: 13H Area and Perimeter Assessment Results (pre-learning assessment), Microsoft Excel]
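If you wanted to script this triage rather than colour-code a spreadsheet by hand, a minimal sketch might look like the following. The objectives, success rates and thresholds are illustrative assumptions, not the figures from my actual assessment.

```python
# Illustrative sketch: bucket pre-learning assessment results into a teaching plan.
# The objectives, scores and thresholds below are made up for demonstration.

pre_test = {
    # objective: fraction of the class answering the pre-test questions correctly
    "area of rectangles": 0.95,          # remembered from KS3
    "perimeter of compound shapes": 0.60,
    "area of trapezia": 0.55,
    "arc length and sectors": 0.10,
}

def plan_teaching(fraction_secure: float) -> str:
    """Map a pre-test success rate to a teaching approach (assumed thresholds)."""
    if fraction_secure >= 0.9:
        return "skip: already secure"
    if fraction_secure >= 0.5:
        return "collaborative / reciprocal teaching"
    return "direct instruction"

for objective, fraction in pre_test.items():
    print(f"{objective}: {plan_teaching(fraction)}")
```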

Then came the post-learning assessment after the lessons:

[Screenshot: 13H Area and Perimeter Assessment Results (post-learning assessment), Microsoft Excel]

The first thing that struck me was how poorly students had retained their knowledge from the sectors lessons. ‘Know thy impact’ is Hattie’s mantra. This truly showed me, brutally, how poor my impact had been in those lessons. Bad for the ego, but great for an objective reflection on how well you’ve taught things. It also confirmed my long-held belief that using work in students’ exercise books to make APP-style judgements about learning progress is totally unreliable. The students’ books were full of beautiful sectors calculations, all well presented and accurate. By looking at their books you would conclude their ‘learning’ of sectors was as good as their ‘learning’ of all the other topics. It wasn’t. They didn’t learn the sectors work well at all, despite what their books would tell you.
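For the pre/post comparison itself, a rough sketch of this ‘know thy impact’ check might look like the code below. Again, the scores and the ‘secure’ threshold are made-up figures purely to show the idea of flagging poorly retained objectives such as sectors.

```python
# Illustrative sketch: flag objectives that were not retained between pre- and post-test.
# All figures are invented; 0.7 is an assumed cut-off for "securely learned".

pre  = {"area of trapezia": 0.55, "circumference and arcs": 0.60, "sectors": 0.15}
post = {"area of trapezia": 0.90, "circumference and arcs": 0.85, "sectors": 0.35}

RETENTION_THRESHOLD = 0.7

for objective in pre:
    gain = post[objective] - pre[objective]
    flag = "RETEACH" if post[objective] < RETENTION_THRESHOLD else "ok"
    print(f"{objective:22s} pre={pre[objective]:.2f} post={post[objective]:.2f} "
          f"gain={gain:+.2f}  {flag}")
```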

This naturally forced me to raise the question ‘why was their learning from the sectors lesson so poorly retained?’ I considered various options such as:

  • was it the quality of my explanation?
  • was it the lesson structure?
  • were my activities incorrectly pitched?
  • had I overloaded their working memory during my explanation?
  • had I rushed this topic and not used AFL correctly to tell me when to move on?

There could, of course, be endless other explanations, but I concluded with confidence that the way I built up the explanation was at fault. I went into the lesson with the hook ‘how many ghosts can Pacman eat?’ to engage pupils with the idea of thinking about what proportions of circles comprised the compound shapes of Pacman and the ghosts. From that I went into the sector formula etc. In hindsight I realised the step here was too quick; it is likely the students didn’t see the link and so were confused when working with the formula. I think the use of an early context ‘fogged’ their focus on learning how to use the formula and where it came from.

I then retaught the lesson taking a different approach (talking about fractions of circles without angles first before then developing how you can use angles to describe these) and progress was clearly much better after a summative check a couple of days later. The alternative explanation had worked.

The yellow objectives in the table were hit regularly during starters over the following week to ‘iron out’ the final few issues a small number of students were still having with them.

It was naturally disappointing to see that students hadn’t retained as much as in-class AFL told me they had. It hurts the pride somewhat, but once you get over that and accept this is the impact you’ve had, you feel enlightened and know exactly where to focus your intervention efforts. I feel closer to the ‘learning pulse’ than ever before and can ‘visibly see’ (Visible Learning!) what I’ve taught well or not in a way I never have before. I felt empowered to target my next few lessons to plug the gaps, knowing exactly where they were.

The biggest revelation I’ve had from this so far is that in-class AFL (using mini-whiteboards etc.), even if done using summative-style testing, does not tell you what has gone into students’ long-term memory. It tells you what has gone into either working memory or long-term memory, but not which one. It doesn’t tell you what they’re going to retain. The only way to find out what went into long-term memory is to summatively assess a few days later. In-class AFL is essential for checking understanding that day and directing lessons at key hinge points, but it is no substitute for summative assessment some time after the lesson to find out what was ‘learned’ rather than what was just ‘understood and copied’. You need both. Making progress judgements by looking in exercise books is unreliable for the same reasons as in-class AFL.

My pride was bruised by the post-learning assessment above, but the more I live ‘Visible Learning’, the more I understand what Hattie’s talking about as high-impact practice. I have so much more clarity on what students have learned than ever before. I know when lessons have gone well. I know when they’ve gone poorly, despite my ego telling me otherwise. I feel like I’ve been teaching blind until now! I’m ‘plugged in’ to the ‘real learning’ more than ever and make fewer assumptions. I teach based on what they know rather than what I assume they know. It’s made me a more reflective and objective practitioner.

There’s certainly something in this Visible Learning!

Next step is to trial this with all my KS4 classes next year to see if the model will work across the attainment spectrum in a sustainable way.

Fun times.

 

9 Responses

  1. Austin Booth says:

    Good post. I’ve also found that AfL and book marking have misled me to think students would retain more than they do. I’ve done a similar analysis for a Science topic test (see http://eviedblog.wordpress.com/2014/08/03/making-better-use-of-assessments-part-2/ ).

    I want to introduce a pre-test like you’ve done for my classes next year. After teaching the topic it might be nice to give out the completed pre-tests again for students to correct and complete (in a different pen colour) so they can see their own progression and what areas they need to focus their revision on before the post-test. However, this would mean that the same test can’t be used for both the pre- and post-test.

  2. Mr Scotney says:

    Does it matter exactly how long you leave the summative assessment from completing the sequence of lessons? Also, how long was there between the end of the first cycle and the reteaching phase?

    • Hi. I’m still experimenting with this myself. My feeling at the moment is the post-learning assessment needs to take place approx 2 weeks after the end of teaching the unit. Give pupils some time to revise the content before the assessment (e.g. as the homework the week before). Then set the post-learning assessment following that. There certainly needs to be some ‘forgetting time’ between the end of teaching and the assessment so you assess what they’ve ‘learned’, not what they just ‘understood the day you taught it’. Of course they’ll need to revise the content again before the GCSE, but I feel if they can demonstrate mastery of the content 2 weeks after last teaching (with some revision) it’s a good indicator of whether they’ll be able to recall it (with further revision) at the end of the course.

      • Mr Scotney says:

        I certainly agree with that. I’ll certainly be adapting (i.e. stealing!) this approach in the coming year, but I also wonder if it might be appropriate to build on it and to keep drip-feeding some of the topics even after the test – not in a formal testing manner, but more in starters/plenaries/homeworks etc. It certainly can’t hurt, I’m sure, and will only aid with embedding it further into long-term memory. I’m sure that’s what Daniel Willingham means in ‘Why Don’t Students Like School?’

        Alex

  3. Ollie Orange says:

    Very surprised to see you referencing Hattie. He’s a Psychologist who uses Statistics that no Mathematician has ever heard of. He’s also admitted (quietly) that half of Visible Learning is wrong. Further reading – http://ollieorange2.wordpress.com/

  4. Calum Blair says:

    Only just re-discovered your website so I’m doing a bit of holiday catch-up here!

    I’ve got a couple of questions:

    1) Our school is starting to look at the concept of learning conversations: I really like the idea of a pre-test and post-test – how do you feel it is impacting on the discussions you are having with pupils, and on their tracking of their own progress?

    2) I’m looking at a plethora of ways for pupils to track and monitor their own progress in maths in line with our curriculum – have you got any ‘words of wisdom’ in terms of how we might do this?

    Have a great Christmas

    Calum

    • Hi Calum,

      Thanks for getting in touch.

      1) I’ve had mixed experiences with pre- and post-testing so far, to be honest. It takes time to do, so when you have a unit that students score low on in the pre-test you are then ‘behind the curve’ and feel pressured to keep up with the scheme. Then you have to post-test this unit and pre-test the next. I have found it quite time-consuming, and what I gain in lessons the pre-test tells me I don’t need to teach doesn’t seem to counterbalance the time I lose through doing it. However, I’m only doing it with one of my classes this year, a mid-low attaining year 9 class. I tried it with a mid-high attaining class last year and didn’t find time an issue.

      I see value in it, but you’ve got to have the time to do it if you’re going to use it with a mid or low attaining class. If you are measuring the progress at the expense of making the progress (by not having time for deep learning) then we need to question its value.

      What is interesting is how positive the students are about it. I’m going to do some proper student voice soon to learn more about their thoughts, but they certainly prefer it when I do the pre- and post-testing. They have said it helps them understand their progress better.

      A final thought I would share is that you need to be really clear with yourself about when you are going to do the post-test, so you are clear what you are testing: immediately at the end of the unit, with revision, a month later, etc. I was very disappointed at first with how little the students seemed to have definitely ‘learned’ as evidenced by the post-tests. When I looked at their post-tests I realised they were ‘nearly there’ on most questions. It reinforced for me the importance of constant revisiting and consolidation practice of earlier skills. I’d recommend doing the post-tests a month or two after the unit, once you have done this consolidation, as the students seem to do better, which makes them feel better about their progress and more motivated.

      2) To be honest, we haven’t figured this out yet. We’ve seen a lot of other schools have a go and basically create a whole new levelling system which they then need to calibrate with the existing KS3, KS4 and the new KS4 ones! It’s added more confusion as parents and teachers try to figure out what ‘my school level 17b’ equates to in terms of old and new GCSE grades etc.

      I have done some prep work. I gave top, middle, and low sets in year 7 the new AQA foundation paper (9-1) and found it a suitably accessible/challenging assessment for all year 7s. This gives us the opportunity to put students on the 9-1 scale right through KS3-4 if we choose to which would be simpler than the current alpha-then-numeric system.

      My current view, which may change of course, is that we should keep the current assessments we use (KS3 SATS and KS4 GCSE papers), but importantly switch to not telling parents or students the levels or grades scored on the assessments. These levels and grades mean something to teachers, but not to parents. I think we should report: target, projected final grade if things carry on as they are, and the specific topics the students need to work on to move forward. Tell the teachers the current attainment level/grade because they have a feeling that ‘a student who will end up with an A* should be approx a B+ or better by the end of year 10’ etc., but the parents don’t. An analogy would be a blood test: your doctor doesn’t read you the figures, they interpret them for you and tell you whether you’re healthy or not. If not, they tell you why and what to do to improve the situation. We need to stop telling parents ‘your child is a C- at the moment’ and then not giving them the means to interpret that. I currently feel we should just give them the interpretation, as I spend too much time explaining to both students and parents that progress isn’t linear and that they accelerate once they start past papers.

      To summarise, I currently feel:
      Keep current assessments (and grades/levels) but only report the grades to the teachers. Possibly switch to new 9-1 GCSEs for KS3.
      Report to students and parents: target, current projected grade if things carry on as they are, topics to work on to raise this projected grade higher, attitude score (and context).

      Hope that helps!

  1. 13/08/2014

    […] glimpse through some of my bookmarked Tweets from others found this post from William Emeny, and this follow up, which although containing fantastic ideas in their own right (which I will be trialling […]

  2. 27/08/2014

    […] the blogs I have read this summer, the one I keep going back to the most is William Emeny’s Experiments with Visible Learning (this is the second part of a two part blog by William, so you might like to read part 1 for the […]
