This is one of the hardest, and most frequently-asked, questions. There are some straightforward, but not very convincing, answers and some more honest responses. I will mention both.
Simplistically, you could look at the assessed behaviour of your students – at their exam results and/or their other assessment performance. You might feel encouraged if the average grades of the cohort improve year by year. However, you should also ask yourself some hard questions, such as ‘did this cohort actually have the same innate ability as previous cohorts?’, ‘were my assessments or exams as challenging this year as in previous years?’ and ‘did anything else change for these students, for example what they learned in some of my colleagues’ modules?’. I have never been able to answer these questions for myself with absolute certainty, and I doubt that you can.
Alternatively, or additionally, you could consider other – more qualitative – indicators. Did you get better questions from the students during and after lectures this year? Are your colleagues commenting more favourably about some or all of these students, or the skills and knowledge they bring from your class to other classes?
Thirdly, you might consider the student responses to your mid-module feedback questions or end-of-module questionnaire. In constructing these, and then considering the responses, you need to distinguish clearly among three questions: did they like me and my presentation? Did they like the topic? And did they learn what I wanted them to learn? Only the third of these is really important! (Although the other two might help you achieve the third.) Remember that end-of-module questionnaires are often referred to as ‘happy sheets’, and I overheard a prominent engineering educator comment that ‘questionnaires measure charisma, not education’. You could improve this situation by asking ‘how could this module be improved?’ and making sure that you feed the results back to the students. You might, as a programme director or Head of School, consider whether student questionnaire results should be made publicly available.
Those were the conventional ‘easy’ answers. The less palatable but more honest answer is that you really cannot measure your success in changing your teaching methods, at least until a very long time has elapsed. You might get positive reinforcement ten years later from a graduate who tells you she always remembers your explanation of widget design and that it helped her at work recently. However, if you get more than one or two of these a year, you are doing very well indeed, so it is not a rapid feedback mechanism!
To be more encouraging, there are a number of indirect ways of reassuring yourself that you are doing a good job. Firstly, trust your own judgement: you made changes because you were convinced a while ago that this was a good idea, and it probably still is. Secondly, trust your colleagues across the world: you will probably have colleagues within your School who encourage you and share experiences, but there are also networks such as the HEA Subject Centres (in the UK), the Australian Office for Learning and Teaching (formerly the Carrick Institute) and the CDIO network internationally [www.cdio.org]. Fundamentally, you have to know in your gut that you are making changes to improve the learning of your students. If you don’t buy into this, there is nothing I can write which will change your mind!
Note, however, that the HEA Subject Centres no longer exist, so networking among teachers within a discipline such as engineering is harder now than it used to be.
In summary, ask yourself: ‘will doing it this way produce better engineers (or scientists)?’