5.15 Feedback

In surveys of student satisfaction in the UK between 2005 and 2012, one of the most consistent problems identified was inadequate feedback on assessment. This simple statement hides a number of potential issues. What do we mean by feedback? What do the students consider to be feedback? When should they receive it?

A straightforward attitude would be that students deserve feedback on all the work they submit, and that they need it in time to act on it before they make the same errors again. The implications of such a policy might include:

  • Teachers should clarify the nature of ‘feedback’ while being open with their students about the difficulties of providing it;
  • Feedback might take the form of:
    • written marginal comments on each piece of work;
    • boxes ticked on a cover sheet to indicate common errors;
    • verbal feedback at the next meeting of the class;
    • comments posted on the module VLE;
    • exam scripts returned with indicative answers;
    • tutorial sessions;
    • peer marking against a set of criteria;
  • Feedback should include comments on positive as well as negative points. Students need to know ‘Why did that get such a good mark?’ as well as ‘Why did I get such a poor mark?’;
  • The timeliness of feedback is a constant concern. Many universities now have a guideline response time of about three weeks. In my view this is too long, especially in engineering, where much learning is consecutive. You might like to set yourself a target response time, say one week, and then consider what method you can use within that timescale, given the class size and the amount of assistance available to you. I know that this sounds pious, but marking takes the same total time whether you spread it over one week or three. Since feedback is far more useful to students within one week, while they still have a chance of remembering the exercise, this should be your target;
  • Staff need feedback too. Teachers need to know what was difficult and what was easy, and ideally why common errors or misconceptions occurred. You should regularly ask feedback questions (e.g. at the end of a test, as suggested above) and review the errors students make;
  • You might need to argue with your examinations officer, or against university procedures, in order to return marked exam scripts to students. My view is that students deserve feedback on their exam performance, in order to improve their approach to the next semester or year, or to the lifelong learning they will engage in immediately on graduation. You will have to contend with colleagues who assert that feedback is unnecessary on summative assessments such as end-of-module exams. Tell them that this is nonsense: all assessment should be formative.

There is an emerging view that we might be placing too great an emphasis on feedback, at least in the forms I have described above. A view espoused by Royce Sadler (2010), which seems to me to have a lot of merit, is that most feedback (for example, marginal comments on a piece of written work) suffers from two key drawbacks:

  1. The student may not understand what you mean (even if you think it is straightforward, such as ‘this does not follow from what you wrote earlier’); and
  2. You are essentially ‘telling’ the student rather than involving her. This is the antithesis of the active learning which I have championed in the rest of the book.

The first point is worth expanding upon. You and I, as assessors, see hundreds of pieces of submitted work and have had the chance to develop a quite sophisticated and subtle appreciation of the strengths and weaknesses of student submissions. We are therefore internally calibrated and are constantly making comparisons. We know what a ‘good’ answer looks like, and what a bad one looks like. We know that there are many ways to get it ‘right’ and many ways to get it ‘wrong’ (whatever ‘it’ is). The student has probably only ever seen a single example, her own submission, and is unlikely to recognise the potential range of answers. There is a strong case for peer assessment, just so that each student gets a chance to see a range of submissions of different quality. You probably want to anonymise submissions before circulating them to students, but this also gives you the chance to mix in an example that you have written yourself, or a good answer from a previous year. Sadler also suggests that you do not issue marking criteria to students. This rather counter-intuitive advice forces the students to consider for themselves what it is that makes a piece of work ‘good’. If you took my earlier advice and read Zen and the Art of Motorcycle Maintenance (1974), you will recognise this strand of thought.
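
With a large class the allocation for such a peer-marking round is easy to automate. The following is a minimal sketch in Python, entirely my own illustration rather than anything prescribed above: it assumes the submissions are held in a dictionary keyed by student name, replaces names with opaque codes, and uses a simple rotation so that nobody is ever handed their own script. The instructor-written ‘plant’, or a good answer from a previous year, is just one more entry in the dictionary.

    import random

    def allocate_peer_review(submissions, reviews_each=3, seed=0):
        """Anonymise submissions and share them out so that no student
        marks their own work and everyone sees several different scripts."""
        assert 0 < reviews_each < len(submissions)
        students = list(submissions)
        random.Random(seed).shuffle(students)
        # Replace names with opaque codes before anything is circulated.
        codes = {s: 'S{:03d}'.format(i) for i, s in enumerate(students)}
        n = len(students)
        # Rotation: student i marks scripts i+1, i+2, ... (mod n), so
        # nobody is ever given their own submission back.
        allocation = {s: [codes[students[(i + k) % n]]
                          for k in range(1, reviews_each + 1)]
                      for i, s in enumerate(students)}
        return codes, allocation

The codes dictionary stays with the teacher, so that marks and comments can be mapped back to names afterwards; each student simply receives the scripts whose codes appear in their allocation.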

The second point is also worth further consideration, although solutions are rather difficult. You might feel that the best way to introduce ‘active’ feedback is to discuss the work with each student. However, you are unlikely to be able to make the time to do this, especially with large class sizes. In such cases one of your few options is to get students to share and assess their peers’ submissions. This is probably best done in groups of 5 or 6, and can then be carried out almost in real time. A further possibility is to assess using a tool which provides instant feedback, such as Pearson’s Mastering Engineering resources (2010).
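
For numerical engineering work you do not need a commercial product to provide instant feedback of a basic kind. The sketch below is again my own illustration, not a description of the Pearson tools: it checks a submitted numeric answer against the correct value within a tolerance, and flags the two commonest slips, a sign error and a units (order-of-magnitude) error, with a targeted comment.

    def instant_feedback(submitted, correct, rel_tol=0.02):
        """Return an immediate, targeted comment on a numeric answer,
        accepting anything within rel_tol (here 2%) of the correct value."""
        tol = rel_tol * abs(correct)
        if abs(submitted - correct) <= tol:
            return 'Correct, within tolerance.'
        if abs(submitted + correct) <= tol:
            return 'The magnitude is right: check your signs.'
        # Being out by 10^3 or 10^6 usually signals a unit slip
        # (mm for m, kW for W, and so on).
        for power in (3, 6):
            for factor in (10 ** power, 10 ** -power):
                if abs(submitted - correct * factor) <= tol * factor:
                    return 'Out by a factor of 10^{}: check your units.'.format(power)
        return 'Not within tolerance: rework the calculation.'

A bank of such checks attached to a weekly problem sheet would give each student a response in seconds rather than weeks.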

Oliver Broadbent writes: I have recently been marking blog posts that students have been writing as a reflective exercise during a week-long design scenario at UCL. I recognise what you say about teachers constantly comparing one piece of work to another. I also recognise the point you make about ‘Zen…’ and the difficulty of defining Quality, even though I did write a marking rubric. Next year I am going to experiment with making the blogs visible to everyone in the class. The idea is to build up a class understanding, or agreement, of what good looks like. This would also be an opportunity to bring in peer-to-peer assessment of student writing.

Peter Goodhew replies: I will follow your experiments with great interest!