There are many ways in which ‘legacy’ university systems are ill-suited to support modern engineering education. While it has proved difficult to change many of these (not helped by conservatism from professional bodies involved in accreditation) a first step is to identify aspects of our current systems which do not match our educational aspirations. My own favourites are listed below, with comments, but I am sure that there are more. I expect my approaches to be, at the same time, both too radical to be acceptable and not radical enough to get it right! See what you think:
Exam pass mark: It is common for every element of a student’s programme to have the same ‘pass mark’ – commonly 40%. This usually takes no account of the nature of the assessed item. In the context of engineering I would assert that some things simply have to be understood – the second law of thermodynamics, for example, is a concept central to much of engineering. Other topics may have a place in the curriculum for reasons of variety, or illustration, or as demonstrations of depth of understanding, or as extension material, but are not essential for a graduate engineer. Surely it would be logical to make the ‘pass mark’ for essential concepts, competencies and skills 100%, while accepting a lower value for non-essential material?
Let’s put it another way. How could you defend allowing a student to graduate who demonstrably does not understand 60% of the concepts which you regard as essential?
So why do most institutions not have elements of assessment with a simple pass/fail criterion (either you can do it or you can’t) or a 90% pass mark (e.g. for a whole exam or parts within it) together with extension papers with a lower ‘pass’ mark to allow good students to demonstrate their wider and/or deeper ability? My question is largely rhetorical, but a real barrier to rational change is the ridiculous application of quasi-fairness and comparability within many institutions. No, it is not necessary, in order for the system to be fair, that every assessment item has the same pass mark. We just have to keep challenging the central imposition of unnecessary and indeed damaging uniformity.
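To make the proposal concrete, here is a minimal sketch in Python of such a tiered pass rule. The item names, marks and thresholds are illustrative assumptions for the example, not any institution’s actual scheme: essential items are pass/fail (you can do it or you can’t), while extension material needs only a lower aggregate mark.

```python
# Illustrative sketch of a tiered pass rule (marks and thresholds are
# assumptions for this example, not an institutional scheme).

ESSENTIAL_PASS = 100   # every essential concept must be fully demonstrated
EXTENSION_PASS = 40    # a lower aggregate suffices for extension material

def passes(essential_marks, extension_marks):
    """Essential items are pass/fail; extension items need an average mark."""
    essentials_ok = all(m >= ESSENTIAL_PASS for m in essential_marks)
    extension_avg = sum(extension_marks) / len(extension_marks)
    return essentials_ok and extension_avg >= EXTENSION_PASS

# A student who has mastered every essential concept but is patchy on
# extension topics still graduates...
print(passes([100, 100, 100], [55, 30, 45]))   # True
# ...whereas a missed essential concept cannot be averaged away by
# brilliance elsewhere.
print(passes([100, 60, 100], [90, 90, 90]))    # False
```

The point of the sketch is simply that the two kinds of assessed item are judged by different criteria, which is exactly what a uniform 40% pass mark forbids.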
Modules, credits and progression: It is now commonplace that our engineering programmes are broken down into modules which are separately assessed and the passing of which attracts ‘credits’. The common currency is 120 credits per year of study giving 360 credits for a 3-year BEng programme and 480 for a 4-year MEng. The 120 UK credits map on to 60 European credits (ECTS), which are thus twice the size. Credits indicate the passing of the modules, not the level of achievement, which is separately totalled as a percentage. It is equally common to impose ‘progression’ rules based around the need to complete precisely 120 credits in a particular year before being allowed to enter the next year of study. It is also common that the possible credit values (i.e. size) of modules are constrained – perhaps they must be offered in multiples of 10 or 12 credits. A visitor from Mars would surely be puzzled by this very prescriptive approach. She would probably ask:
- Why do modules need to be the same size? Surely the whole programme has clear learning outcomes and achievement of these determines the degree outcome? What can it matter that the thermodynamics module is bigger or smaller than the introduction to control theory?;
- Why should a student be expected to collect exactly 120 credits before progression to a further year of study? Why does every year of study need to be identical? Surely the requirement for 360 or 480 credits can be met in a hundred different ways. Is not 105 + 125 + 130 as good as 120 + 120 + 120?;
- Why are we discussing ‘progression’ at all? It is true that some topics have a pseudo-linear development, and therefore it may be necessary to impose pre-requisites for the study of some advanced modules, but this has little to do with ‘first year’ and ‘second year’.
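The arithmetic behind the Martian’s second question can be made concrete. As a sketch (in Python, purely for illustration – the assumption that modules come in multiples of 5 credits and that a year carries between 100 and 140 credits is mine, not a university rule), any per-year split summing to 360 meets the overall BEng requirement:

```python
# Illustrative sketch: the BEng requirement is 360 credits in total.
# A rigid progression rule permits only one split (120 + 120 + 120);
# without it, many splits are equally valid. The step and bounds below
# are assumptions for the enumeration, not any institution's rules.

TOTAL = 360  # credits for a 3-year BEng

def valid_splits(total=TOTAL, years=3, step=5, low=100, high=140):
    """Enumerate per-year credit splits that sum to the programme total."""
    if years == 1:
        return [(total,)] if low <= total <= high and total % step == 0 else []
    splits = []
    for first in range(low, high + 1, step):
        for rest in valid_splits(total - first, years - 1, step, low, high):
            splits.append((first,) + rest)
    return splits

splits = valid_splits()
assert (120, 120, 120) in splits   # the rigid rule's only permitted answer
assert (105, 125, 130) in splits   # ...but this sums to 360 just as well
print(len(splits), "equally valid ways to accumulate 360 credits")
```

Even under these deliberately modest assumptions there are dozens of splits that deliver exactly 360 credits; the progression rule privileges just one of them.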
I cannot justify these rules – versions of which exist at most UK universities – but I have found it difficult to persuade university administrators to agree to relax them. I suggest that you get yourself elected to your local Learning and Teaching Committee (at School, Faculty or University level) and keep asking why things are done in ways which are less than ideal for the students and the quality of their eventual degree.
Synoptic assessment: A frequent criticism of modular education systems is that they encourage students to compartmentalise their knowledge and they discourage the formation of useful connections between different topics within a discipline. There is a very easy way to address this problem, and to encourage integrated or systems thinking. It is to set a ‘synoptic’ examination, that is a test which demands that the student draws knowledge from many topics which may have been taught in different modules, or may (horror) not have been explicitly taught at all. In some ways the final year project (whether ‘capstone’ or ‘research’) should encourage a synoptic view but in my opinion this is insufficient and the graduate would be better served by also sitting a formal examination. Most degree programmes of 30 years ago contained a ‘general paper’ comprising questions on wide-ranging topics but the rise of the modular system and the model answer, combined with the view that all students should undergo the same experience (presuming against choice of questions), has rendered the synoptic paper almost extinct. Why don’t we re-establish it either as an oral exam (in a resource-rich world) or, more feasibly, as a general paper which carries credits but no teaching hours? The latter carries an apparent managerial bonus of ‘efficiency gains’ in that credits are awarded without the need for formal teaching.
Quality Assurance or Quality Enhancement processes: I have never heard an academic speak, even in jest, about wanting to lower the quality of the student experience. Anyone reading this book, and most of your friends, will want to do the best they can for their students, yet those funding universities feel the need to insist on a series of formal quality assurance measures. While quality enhancement (QE) is a more acceptable concept than quality assurance (QA) or – worse still – quality control (QC), I question the need for externally-imposed processes. It seems that these have been established (in the UK at least) in response to a number of perceived pressures. In essence these are:
- Audit – those who pay should know what is going on;
- Accountability – those who pay, whether government or student or both, should be able to ensure that they are getting value for money, and;
- Improvement – everyone involved should be trying continuously to improve the quality of higher education and thus graduates.
At first sight it is difficult to disagree with any of these motives. The problem is that externally imposed quality processes may not deliver the desired outcomes. The difficulties include:
- Measuring ‘quality’ – we have no clear, measurable, objective criteria for the quality of a graduate, nor of the educational process which leads to graduation;
- Response time – changes in higher education take many years to take effect, not just because the ‘committee cycle’ is rather slow, but because the typical length of an undergraduate programme is four years. Even if we could measure it, it would take more than ten years to begin to evaluate the effectiveness of a change. For example, we started to develop a major change in the engineering curriculum at Liverpool in 2002. This involved changes in the style of teaching and in the spaces we use to teach. The first undergraduates entered the new programmes in 2008 and will graduate in 2012. In about 2014 it will start to be meaningful to ask them, and their employers, whether their degree programme was effective. So for 12 years we have had to operate on a hunch that we are making an improvement. This is about two or three times longer than the tenure of a government, a Head of School, a Vice-Chancellor or a typical educational initiative!;
- Cost – all mechanisms put in place to enhance quality have costs associated with them. These costs include directly identifiable elements (e.g. the running cost of the QAA or its successors) and indirect opportunity costs such as the time of the staff who are being audited or inspected. It has for many years struck me as ironic that every time my School was audited or inspected, I and other colleagues had to cancel or postpone teaching commitments. This does not appear to the students to be quality enhancement.
Do not imagine from the paragraphs above that I do not care about quality. I and most of my colleagues care passionately about the quality of our teaching and the students’ learning. I expect and hope that you do too. However the best way of ensuring and enhancing that quality is from within, with senior members of the university demanding that staff take teaching seriously. The only driver which matters is the student (and eventually graduate). In the long term any institution which does not pay attention to the quality of its teaching will suffer a loss of students and hence income, and this will threaten the viability of the institution. That’s the only driver a VC needs.
Accreditation: Many engineering programmes are accredited by one or more of the professional bodies on behalf of the Engineering Council (or ABET in the USA, CEAB in Canada or Engineers Australia in Australia). There are many positive aspects of this arrangement, including the knowledge which professional bodies gain about education and graduates, and strengthened interaction between academe and industry. It may also provide a School with external evidence for the excellence of its programmes, for use internally (for instance in budget discussions). However most of the criticisms of QE processes listed above also apply to accreditation. I am not convinced that the balance of advantage is in favour of either the student or the profession (or even society at large). There are many reasons for this view, but three of the principal ones are:
- The pace of change in future is likely to be faster than the time constant for accreditation;
- Accreditation has little effect on the quality of engineering practice, which can only be controlled by engineering contractors, supported or hindered by government regulation;
- New types of engineering (e.g. inter-disciplinary) are likely to emerge faster than accreditation can keep up.
Nevertheless I would encourage you to involve yourself in the accreditation activity of your professional body. It is important that practising teachers are well represented in the accreditation process, which will otherwise be dominated by retired ex-practitioners.
External examining: The external examiner system is almost unique to the UK and is widely regarded as encouraging, if not guaranteeing, comparability of standards across the UK’s institutions. In the later decades of the twentieth century an external examiner was expected to act as an additional examiner, reading samples of work and either confirming marks or recommending changes. She would usually interview students whose marks placed them on the borderlines between degree classes, and her verdict would usually be accepted without question. (Not entirely surprisingly – this absolved staff from making difficult decisions for themselves!)
More recently the role of external examiner has come under scrutiny and has evolved quite considerably. Modern ideas of equality and fairness have led QE authorities to frown upon the interviewing of individual students, and certainly upon the hitherto normal practice of interviewing only selected students. The reliability of an external examiner adjudicating on marks based on a cursory exposure to the material has also been called into question. As a result the role of the external examiner has been more tightly defined. She usually reports directly to the Vice-Chancellor or Principal on the appropriateness of the learning outcomes, the level of the taught material and on the examination process. External examiners are now explicitly discouraged from commenting on the performance of individual students.
This new way of working maintains the traditional view of the external examiner as a ‘critical friend’ but implicitly the word ‘examiner’ now applies to the School and not to the students. An experienced examiner can still comment on, and influence, the standard of the degree award and its comparability across the sector but cannot be used as a referee or adjudicator.
I believe that this system still delivers benefits both to the School and to the examiner. It ensures that good practice in one institution is seen (and can be adopted) by staff from another. It allows senior staff in the host university to be alerted to falling standards or unfair practices. However it relies on a level of educational understanding and expertise in the examiner which probably makes a training/briefing session necessary. Some universities do this for all their external examiners; others as yet do not. Either way external examining is a responsible and significant role, rarely rewarded with a fee commensurate with the effort required but worth doing because of what you, the examiner, learn. I would encourage you to get involved, initially just by mentioning your interest in acting as an external examiner around your professional network of colleagues in other universities.
Finally, I will list a few of the questions which it would be proper to ask while acting as an external examiner (several of which are touched on elsewhere in this book):
- How do you assess the learning outcomes given in this module specification?;
- Where do you assess deep learning?;
- Where and how do you assess creativity?;
- Why is the pass mark x%?;
- Why do you allow a choice of questions in written exams?;
- Do you scale marks and if so why?;
- How do you assess individual contributions to team work?;
- How do you eliminate the influence of the supervisor when assessing student project work?;
- Have you detected any plagiarism? What do you do about it?;
- What is the process and timescale by which my comments will be taken account of?;
- And of course: Have you read Peter Goodhew’s book?
A concluding paragraph
Having identified targets for change, how can we implement some of these? Ruth Graham has considered this in her report for the Royal Academy of Engineering (2012). You could start by reading this. My suggestion is evolutionary rather than revolutionary (perhaps I am not such a firebrand as I once was). The best way you can effect change is to get yourself into a position of influence. I’m sorry if this is distasteful to those of you who believe that your skills are most needed in contact with students, but from there you will change nothing of significance – merely the experience of a small number of students for a small part of their programme. In order to change things for a large number of students you have to change your university’s systems and attitudes. To do this you need, as rapidly as possible, to become Head of School, Dean of Faculty and Pro-Vice-Chancellor for Learning and Teaching (or whatever path to the top is appropriate in your institution). I learned this lesson too slowly and thus too late, so I can only offer my advice from the sidelines of retirement. You should aim to reach PVC or equivalent by the age of 50. Then you have 20 years to try to effect change. It is bound to be difficult so you might need the whole 20 years! Good luck.
End of Chapter 7