Student response to teacher feedback

Students responding to the feedback we give them is an important part of the ‘feedback loop’, and something I’ve often found challenging to establish with classes. This year I’m trying to be much more specific about my expectations by giving students the following place mat, which explains the tasks I expect them to complete when responding to my feedback:

Responding to feedback mats- GMTI

Click here to download a pdf version of the mat.

The idea is that a class set of these is printed onto A3 and laminated. Students get them out a few times over a term, particularly after you have marked their books.

Feel free to try them out with your classes and let me know how you get on in the comments section below. If you have a really good example of student work where they’ve been used well, send a pic by email to my address in the ‘contact’ section and I’ll upload it to this post.

If you like this post, you might also be interested in my self/peer assessment writing frames.

Going SOLO on the journey towards deep learning

The Structure of Observed Learning Outcomes (SOLO) Taxonomy is a model describing the stages of learning that students go through to reach a real depth of understanding of a topic. It outlines the journey from surface to deep learning. SOLO is John Hattie’s taxonomy of choice and is currently being studied in depth at his Visible Learning Labs (Osiris Educational Outstanding Teaching Conference, 2014). Hattie and other academics see it as having many advantages over other taxonomies, in particular Benjamin Bloom’s. Quoted advantages over Bloom’s Taxonomy include:

  1. The SOLO Taxonomy emerged from in-classroom research, whereas Bloom’s Taxonomy was theorised by a committee of educators
  2. SOLO is a taxonomy about teaching and learning, whereas Bloom’s is about knowledge
  3. SOLO is based on progressively more challenging levels of cognitive complexity. It is argued this progression is less clear within Bloom’s Taxonomy.
  4. It is claimed that educators and students agree more consistently on which level a piece of student work has reached with the SOLO Taxonomy than with Bloom’s
  5. SOLO is simpler to understand and apply than Bloom’s, making it more accessible for students to grasp, even at primary phase.

Whilst interesting from a mostly academic perspective, these advantages are unlikely to grab the busy, coal-face teacher and convince them to go SOLO in their planning. I had the same thought originally, until I understood how incredibly simple SOLO is and that it seemed to ‘work’ for a maths classroom much better than Bloom’s Taxonomy does. It rings true as a good summary of what I have learned from experience about the way students learn in maths.

Image via pamhook.com. Check out her fab website with lots more info on SOLO


SOLO describes students progressing from surface learning to deep learning. On a particular topic they can be at any of the following levels:

Pre-structural- The student has no understanding of the task. They completely miss the point.

Uni-structural- The student has ‘learned’ one aspect of the topic.

Multi-structural- The student has ‘learned’ more aspects of the topic. However, they see each of the aspects as independent and unrelated.

Relational- The student understands the links and relationships between the different aspects of their previous learning within the topic.

Extended abstract- The relational learning is so well understood students can now start using this to conceptualise further learning outside of the topic domain.

Describing progression within a maths topic such as angle facts, this could look like:

Pre-structural- The student has heard other people talk about angles.

Uni-structural- The student can estimate the size of an angle.

Multi-structural- The student can estimate the size of an angle, measure the size of angles and has learned the angle facts.

Relational- The student understands how estimation can be used to check that they’ve read off the correct scale when measuring an angle. They understand how to use the angles in parallel lines rules to prove that the angles in a triangle add up to 180 degrees. From this they can then derive the sum of the interior angles in an octagon.

Extended abstract- Students can apply their angle fact knowledge to solve geometrical problems where the angles are algebraic expressions and the solution requires the formation and solution of equations.
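As an aside, the relational step above hinges on the fact that an n-sided polygon splits into (n − 2) triangles from one vertex, so its interior angles sum to (n − 2) × 180°. A quick sketch of that rule (my own illustrative function, not part of any resource here):

```python
def interior_angle_sum(n: int) -> int:
    """Sum of the interior angles of an n-sided polygon, in degrees.

    An n-gon splits into (n - 2) triangles from one vertex,
    and each triangle's angles sum to 180 degrees.
    """
    if n < 3:
        raise ValueError("A polygon needs at least 3 sides")
    return (n - 2) * 180

print(interior_angle_sum(3))  # 180 (triangle)
print(interior_angle_sum(8))  # 1080 (octagon)
```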

The pre-structural, uni-structural and multi-structural levels are considered ‘shallow’ learning; relational and extended abstract are ‘deep’ learning. One of the most important things to understand about SOLO is that it describes a journey. You have to progress through the levels and cannot jump straight to deep learning. As Hattie put it recently, “you can’t do the deep stuff until you know the shallow stuff.” You can’t link things together until you have things to link. This is one of the reasons ‘problem-based learning’ strategies score so poorly in his Visible Learning rankings (an effect size of 0.15, where 0.4 is the average). In Visible Learning, he wrote, “…this is a topic where it is important to separate the effects on surface and deep knowledge and understanding. For surface knowledge, problem-based learning can have limited and even negative effects…” You can’t solve problems until you are fluent in the skills those problems require. It’s like trying to solve a jigsaw without having the pieces.

A consequence should be that teachers know it is OK to have some ‘shallow learning’ lessons where students are simply trying to acquire, and become more fluent and accurate in, the skills. I sometimes wonder whether the expectation in some schools that formal observations feature open, higher-order thinking questions and tasks leads newer teachers to believe that every lesson should be pitched there. Progress should be viewed as successfully moving through or between any levels in the taxonomy. ‘You need the range of lessons,’ should be the message to new teachers; dedicated skill-and-drill practice has its place, as do higher-order thinking lessons. They complement each other.

However, for the remainder of this post I want to focus on the deep learning stages: relational and extended abstract. Rather than getting bogged down with edu-jargon, the message is simple- deep learning is about links. Firstly, links to other things within the same topic, then to things outside the topic. 

I’ve been trying to think of all the different types of links that we can use to design deep learning resources for a topic. The intention is to produce a prompt-sheet or checklist of things I (or any GMTI readers!) could refer to when planning a deep learning lesson. The following list will not be comprehensive, but is the result of this idea bubbling away in my mind for the last six months. Please feel free to add, build or challenge my thoughts in the comments section. Here goes, first draft!

I think ways we can build ‘deep learning’ into our lesson planning include:

  1. Link to a concrete representation- How can the problem be represented using physical equipment? This could also be a kinaesthetic representation using other senses e.g. hearing or touch.
  2. Link to a visual diagram- How can the problem be represented using a diagram?
  3. Link to metaphors- What different metaphors can be used to describe the concept? What are each metaphor’s limitations?
  4. Link that helps you understand another part of maths within the same topic better- How can one idea help in understanding another within the same topic?
  5. Understand the limitations- Does it work for all types of numbers? What range of values in the answers would we expect? Where does the maths still theoretically work, but in real life it becomes impossible?
  6. Understanding ‘dynamic variation’- What role does each part play? If I double this, what happens to the answer? If I halve that, what happens to the answer? Which part has the largest effect?
  7. Reverse engineering a question- Can students create a challenging question that has a specific answer and also meets additional criteria you set them?
  8. Comparing different solution methods in terms of their efficiency- What different solution methods are there, and when would each one be more efficient than the others?
  9. Historical links and significance- Where did this maths come from? Who discovered it? Was it discovered out of necessity to solve a particular problem or just as a curiosity?
  10. Link to a real life context including other subject areas- What real life examples and contexts can we ask questions about? What other subjects within school can we link this to?
  11. Link that helps you understand another area of maths outside of the topic better- How can one idea help in understanding another within another topic domain in maths?

In the coming weeks I intend to build example resources demonstrating these ideas. I hope to create both the prompt sheet and examples of resources for particular topics in the hope to spark debate. For now, however, I think this is enough to open the discussion. Please do contribute in the comments section below!

Have I missed any ways of forming links? I am working on the assumption that a worksheet resource based on building 11 different types of links will develop deeper learning than 11 questions on the same type of link. Is that reasonable? The 11 suggested types of links are in a particular order, as I think this reflects the movement from relational to extended abstract. Do you think the order is right?

With thanks to:

http://pamhook.com You should certainly check out her fab website for more detailed info on SOLO.

Famous numbers sequences card sort

Here’s a card sort I made to introduce my new year 7s to famous number sequences. The idea is students cut out and group the cards into 6 famous number sequences:

  1. Square numbers
  2. Cube numbers
  3. Triangle numbers
  4. Fibonacci Sequence
  5. Even numbers
  6. Odd numbers

Each group should contain:

  1. Name
  2. Pictorial representation
  3. The numbers
  4. A fact about the sequence
  5. An explanation of how the sequence is produced

Click this link to download the resource as a pdf: Famous number sequences card sort
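If you want to check or regenerate the numbers on the cards, each sequence’s rule can be sketched in a few lines of Python (the helper names below are my own, not part of the resource):

```python
# Rules for the six famous sequences on the cards.

def squares(n):
    return [k * k for k in range(1, n + 1)]

def cubes(n):
    return [k ** 3 for k in range(1, n + 1)]

def triangles(n):
    # The k-th triangle number is 1 + 2 + ... + k = k(k + 1)/2.
    return [k * (k + 1) // 2 for k in range(1, n + 1)]

def evens(n):
    return [2 * k for k in range(1, n + 1)]

def odds(n):
    return [2 * k - 1 for k in range(1, n + 1)]

def fibonacci(n):
    # Each term is the sum of the previous two, starting 1, 1.
    terms = [1, 1]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms[:n]

print(squares(5))    # [1, 4, 9, 16, 25]
print(triangles(5))  # [1, 3, 6, 10, 15]
print(fibonacci(5))  # [1, 1, 2, 3, 5]
```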

*Thanks to Wikipedia and the BBC for the images.

Vocab Blitz- improving student literacy

Literacy is every teacher’s fight, no matter what subject we teach, even maths. Textbooks often have keywords and, sometimes, definitions. However, rarely will students learn vocabulary from just these, in my experience; they need to work with the vocabulary.

Two activities I tried last year were “keyword bingo” and “taboo words”. In the former you read out definitions whilst the students circle the matching keywords on a bingo grid. In the latter students have to describe a keyword without using the other “taboo words” written on their card.

Students enjoyed these activities but I’m not sure the tasks were that effective at increasing the students’ fluency with the vocabulary. The reason is that the activities were not leading the students to do what we ultimately want them to do- use the vocabulary within self-constructed sentences with automaticity.

An INSET we received a couple of years ago emphasised the importance of developing students’ spoken fluency before then working on getting it down on paper. “Talk like a book” was the mantra. Since building this into my practice I have been trying to find an activity which can bridge between “talk like a book” and getting students to write with vocab fluency. I think I’ve found it and I’m going to call it “Vocab Blitz”…!

The idea is you give students a paragraph that contains waffle instead of the concise vocabulary, for example:

The shape I’m describing has four straight sides. Two of the sides point in the same direction and wouldn’t cross if they carried on forever. You can’t fold the shape in half so each side perfectly matches up with the other. If you turn the shape around it won’t look the same as it did at the start until you’ve gone all the way around.

The idea of Vocab Blitz is that the students need to rewrite the paragraph with as few words as possible by substituting the waffle words with mathematical vocabulary, for example:

A trapezium is a quadrilateral. It has one pair of parallel sides, no lines of symmetry and no rotational symmetry.

The winner is the student who retains all the information from the original paragraph but rephrases it in correct English with as few words as possible!

I like this activity because it mirrors what students do when “talking like a book.” They often waffle at first then improve their sentence by inserting mathematical vocabulary in subsequent efforts. Vocab Blitz is essentially copying this process but in written form. It encourages students to improve on written drafts. Once they’ve done a few Vocab Blitz activities they could transition to writing the first drafts themselves then improving them…

Experiments with Visible Learning showing big promise…

This post builds on my last one about a model for assessing without levels using the principles of John Hattie’s Visible Learning philosophy. Make sure you’ve read my last post before this one.

The first thing I’d like to share is that my worries about time constraints preventing me from using this model were unfounded. After some terrific Twitter feedback I realised the pre- and post-learning assessments could be set as homeworks. The students see the point in them and so approach them with more focus than they do a traditional homework.

The pre-learning assessment results have significantly impacted my teaching. Below are the results from a pre-learning assessment I did with a year 9 class at the start of a topic on area and perimeter. These results changed how I approached the teaching of the lessons on this topic; see the colour coding. There were three lessons I didn’t need to teach at all, as the students remembered those from KS3. The lessons I’ve coloured blue were taught using collaborative learning strategies (Kagan etc.). Half the students knew how to do them, so I used these students to teach the others (reciprocal teaching is a big yes from Hattie). The yellow lessons were taught using direct instruction methods (not dirty words in Hattie’s world). High-quality formative feedback genuinely leading my planning.

[Spreadsheet screenshot: area and perimeter pre-learning assessment results]

Then came the post-learning assessment after the lessons:

[Spreadsheet screenshot: area and perimeter post-learning assessment results]

The first thing that struck me was how poorly students had retained their knowledge from the sectors lessons. Know thy impact is Hattie’s mantra. This truly showed me, brutally, how poor my impact had been in those lessons. Bad for the ego, but great for an objective reflection on how well you’ve taught things. This also confirmed my long-held belief that using work in students’ exercise books to make APP-style decisions about learning progress is totally unreliable. The students’ books were full of beautiful sectors calculations, all well presented and accurate. By looking at their books you would conclude their ‘learning’ of sectors was as good as their ‘learning’ of all the other topics. It wasn’t. They didn’t learn the sectors work well at all, despite what their books would tell you.

This naturally forced me to ask the question ‘why was their learning from the sectors lessons so poorly retained?’ I considered various options, such as:

  • was it the quality of my explanation?
  • was it the lesson structure?
  • were my activities incorrectly pitched?
  • had I overloaded their working memory during my explanation?
  • had I rushed this topic and not used AFL correctly to tell me when to move on?

There could, of course, be endless other explanations, but I concluded with confidence that the way I built up the explanation was at fault. I went into the lesson with the hook ‘how many ghosts can Pacman eat?’ to engage pupils with the idea of thinking about what proportions of circles made up the compound shapes of Pacman and the ghosts. From that I went into the sector formula etc. I realised in hindsight that the step here was too quick and it is likely the students didn’t see the link, so were confused when working with the formula. I think the use of an early context ‘fogged’ their focus on learning how to use the formula and where it came from.

I then retaught the lesson taking a different approach (talking about fractions of circles without angles first, before developing how you can use angles to describe them) and progress was clearly much better on a summative check a couple of days later. The alternative explanation had worked.
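For context, the reworked explanation rests on the idea that a sector is simply a fraction of a circle, with the angle describing what fraction. A minimal sketch of that relationship (my own illustrative function, not from the lesson itself):

```python
import math

def sector_area(radius: float, angle_degrees: float) -> float:
    """Area of a sector as a fraction (angle/360) of the full circle."""
    fraction = angle_degrees / 360
    return fraction * math.pi * radius ** 2

# A 90-degree sector is a quarter of the full circle:
full_circle = math.pi * 10 ** 2
print(sector_area(10, 90))       # a quarter of full_circle
print(full_circle / 4)           # same value
```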

The yellow objectives in the table were hit regularly during starters over the following week to ‘iron out’ the final few issues a small number of students were still having with them.

It was naturally disappointing to see that students hadn’t retained as much as in-class AFL told me they had. It hurts the pride somewhat, but once you get over that and accept this is the impact you’ve had, you feel enlightened and know exactly where to focus your intervention efforts. I feel closer to the ‘learning pulse’ than ever before and can ‘visibly see’ (Visible Learning!) what I’ve taught well or not in a way I never have before. I felt empowered to target my next few lessons to plug the gaps, knowing exactly where they were.

The biggest revelation I’ve had from this so far is that in-class AFL (using mini-whiteboards etc.), even if done using summative testing, does not tell you what has gone into students’ long-term memory. It tells you what has gone into either working memory or long-term memory, but not which one. It doesn’t tell you what they’re going to retain. The only way to find out what went into long-term memory is to summatively assess a few days later. In-class AFL is essential for checking understanding that day and directing lessons at key hinge points, but it is no substitute for summative assessment some time after the lesson to find out what was ‘learned’ rather than what was just ‘understood and copied’. You need both. Making progress judgements by looking in exercise books is unreliable for the same reasons as in-class AFL.

My pride was bruised by the post-learning assessment above, but the more I live ‘Visible Learning’, the more I understand what Hattie’s talking about as high-impact practice.  I have so much more clarity on what students have learned than ever before. I know when lessons have gone well. I know when they’ve gone poorly despite my ego telling me otherwise. I feel like I’ve been teaching blind until now! I’m ‘plugged in’ to the ‘real learning’ more than ever and make fewer assumptions. I teach based on what they know rather than what I assume they know. It’s made me a more reflective practitioner and more objective.

There’s certainly something in this Visible Learning!

Next step is to trial this with all my KS4 classes next year to see if the model will work across the attainment spectrum in a sustainable way.

Fun times.

 

How do we make John Hattie’s “Visible Learning” work in maths?

Visible Learning is John Hattie’s mantra. I’ve written previously about being a big fan of Hattie’s work on what affects achievement. He’s collected just about every piece of academic research and collated a ranked taxonomy of factors that affect achievement. Visible Learning is his suggested approach to teaching in a way that incorporates many of the significant drivers of achievement.

He sums up what Visible Learning looks like using the analogy of teaching someone to abseil. The main features of the learning being:

  1. The success criteria are explicit. The learners not only know what they’re trying to achieve, but also what success looks like. Giving learning objectives isn’t enough; they need to see what success looks like.
  2. The tasks are challenging. Trusting the rope to support your weight requires real trust. Learners find it extremely challenging, but when they complete it for the first time they get a real sense of satisfaction and the hunger to repeat it.
  3. Student expectations. Ask students to state how they think they will do before they start a task. Human nature is for them to play it safe in their prediction. When they exceed their forecast, their belief in themselves as learners increases. This ratchets up over time and their expectations of themselves rise. This has by far the highest-ranking effect size in all of Hattie’s findings.

Know thy impact is another of his mantras. He argues that formative assessment is vital in quality teaching and that teachers should constantly be using evidence to reflect on the impact their practice has had on their learners. Hattie says assessments are more for teachers than students; they are for you to find out what you taught well and to whom. This reflective, evidence-based mindset, he argues, captures the essence of what educational research concludes has a high impact on achievement. Teaching is to D.I.E. for, says Hattie: Diagnose what they do/don’t know, Intervene, Evaluate your impact. Repeat.

I’ve been experimenting with an approach to teaching maths that meets the Visible Learning criteria and allows me to systematically know my impact by using the D.I.E philosophy. This has been the biggest step forward in my practice this year and the learners really like it. The problem is, I can’t see a way of embedding it permanently in my practice. Let me explain…

At the beginning of a topic, students are given this sheet, which lists the learning objectives.

[Image: learning objectives tracker sheet]

The students then make a prediction of how many questions they will get right on the pre-learning assessment and enter this number into the table.

The students then sit the pre-learning assessment:

[Screenshot: pre-learning assessment]

This is class-marked and the students then highlight the appropriate smileys on the tracker sheet (after the pre-learning assessment, not before it) to record their success against each learning objective on the pre-learning assessment. They update the score in the table with what they actually got. Finally a grid is passed round with the students’ names vs the learning objectives. They tick and cross the grid so I then have a record of how each student did on each question.

I plan my lessons based on this pre-learning assessment feedback. I structure my lessons based on who knows what. I know exactly where to pick up from to avoid teaching things they really already know.

After the series of lessons, students predict how they will do on the post-learning assessment, sit it, record their actual score, update the smileys and set targets for anything they still haven’t mastered.

The students love it. Comments include: “It really helps me understand what I’ve got to learn”, “It makes me realise that I am actually making progress in maths even when my grade on the exams isn’t going up every time”, “It makes the things I’ve got to learn a lot clearer” and “the assessments help you figure out what you do and don’t know. Stuff I thought I knew, I found out I didn’t and the other way round too. It’s been really useful”.

There are other things I really like:

  1. Using formative feedback from assessments has allowed me to be much more diagnostic and has really impacted my lesson planning. Rather than making assumptions about where to pick up from, I now know without any doubt. Things I would have spent whole lessons on before are now little-and-often starters. The pitch of my activities has changed. When I can see they have visited things before, I set deeper learning activities much earlier than I would have previously. What was a chatty class is suddenly much more focussed and on task more regularly. I’m pitching work better than ever because of the quality of the information I have on what students already know.
  2. I’ve never been a fan of giving students a list of learning objectives and asking them to RAG (red/amber/green) how good they think they are at them. RAG-ing doesn’t account for student ignorance! The conscientious, high-attaining students underestimate what they can do and the overly-confident, lazy ones overestimate. By doing the RAG-ing based on what they can do on an actual assessment, the results are much more accurate. This is confirmed by the difference between the score students predict before the pre-learning assessment and the score they actually get; significantly different in many cases.
  3. I think the approach captures the spirit of the forthcoming assessing without levels reforms. The pre and post learning assessments document progress in a formative portfolio. Targets can be set based on accurate diagnostic assessment of their weaknesses. Progress is very clear to external observers, me the teacher and to the students.
  4. Supplying the learning objectives in conjunction with the pre-learning assessment really gives students focus from the start. They see the learning journey and buy into it. “I can’t wait until we get to the trapezium lesson”, was one student’s comment after the pre-learning assessment. I’ve never had students consistently looking forward along their learning journey like this before.

All sorted then, the world is fixed! No. I just can’t see a way of making it work all the time. I’m not trying to be defeatist. It’s not the preparation involved, it’s simply the time required to run it. There are 30 modules on the scheme of work and 352 learning objectives, and it is already a struggle to cover the course content within the allotted time. Two extra lessons per module for the pre- and post-learning assessments mean 60 more lessons over the KS4 course. It’s simply not possible to sacrifice 60 lessons to this and still cover the course content. Do you agree?

This has been an issue causing me great frustration recently. The learning is so good with this system. It encapsulates so much of what Hattie says really boosts achievement and I have seen at first-hand why. Visible Learning is really, really good learning.

I want this to be something that is practical. I can’t see where the 60 lessons are going to come from. Schools just can’t do it. In theory, because you don’t need to teach everything (some topics students already know are identified on the pre-learning assessment) there are some savings here, but it’s not enough. Perhaps set the pre and post learning assessments as homeworks? That would have pitfalls! Where’s the time going to come from?!

I’m out of ideas currently. Do you have any? How could I make this work? Please make any suggestions you have in the comments section.

John Hattie- Why are so many of our teachers and schools so successful?

[Infographic: John Hattie’s Visible Learning study]

I’m a big fan of John Hattie’s work. Many people have big ideas to improve achievement in education. What makes John’s work unique is that he tells you by how much things improve achievement. In a time-pressured world, what makes his work so useful is that he tells you what to prioritise and focus on, and, importantly, what to ignore.

I was fortunate to see him in the flesh a couple of weeks ago at the brilliant Osiris Educational Outstanding Teaching Conference 2014. He’s a guy who certainly knows his stuff, is worth listening to, and cuts through the fads and fashions to the real evidence on what works. His conclusions are heart-warming.

For a summary of the main messages of his work, here are some great videos:

Visible Learning Part 1- Disasters and below average methods

Visible Learning Part 2- Effective Methods

Why are so many of our teachers and schools so successful?


Brainfeed- fantastic educational videos app for kids

[Image: Brainfeed splash screen]

I highly recommend you check out the Brainfeed app in the iOS App Store. It is essentially a collection of high-quality educational videos about a wide variety of topics. For example: How does the brain work? Does space go on forever? How can you outrun a cheetah? How big is the ocean? There are mathematically themed videos too.

The strength of the app is the quality of the videos and that you know they will be appropriate for your young learners. Rather than letting your children search on YouTube and potentially come across inappropriate material, you know the Brainfeed videos will be entertaining, engaging, educational and appropriate.

They remind me a lot of the TED-Ed project, whose videos I have regularly featured on this blog. Like those, the Brainfeed videos are top class!

Here are some sample screenshots:

[Screenshots from the Brainfeed app]

Robocompass.com

There’s a new geometry tool in town and it goes by the name of Robocompass.com. It shows geometrical constructions in a 3D environment, rather than the 2D plan view used by Geogebra, Geometer’s Sketchpad etc. In addition to a wealth of common construction examples, you can program Robocompass to make your own constructions using an easy-to-learn language.

Here’s the video tour:


Don Steward’s MEDIAN blog- fantastic isometric/ plans and elevations/ nets resources

Don Steward keeps churning out his amazing resources! I can’t recommend his blog, MEDIAN, highly enough. If this guy wrote the textbooks, worksheets and exams, our curriculum would be so much more challenging (in a good way) and mathematically thought-provoking. He’s inspired my own practice a lot. Below are links to three brilliant sets of resources he’s produced recently. Click the links to visit his site and download the resources from there.

Isometric Pictures


Plans and Elevations


Net Tasks


 


Benoit Mandelbrot talking about fractals in the real world

Ever heard of the Mandelbrot Set? It’s a famous fractal discovered by Benoit Mandelbrot, the father of fractal geometry. In this fascinating TED talk he explains his Theory of Roughness and how fractals can be found all around us: in cauliflowers, the stock market, mountainous landscapes and much more…

Fractals and the Art of Roughness

