The Unfairness of Measuring Teaching Performance

I write this in response to an article published today in the Brisbane Times online: “‘This woman is so old’: Insults hurled at academics spur survey rethink” by Henrietta Cook. The quoted comment was posted in anonymous student feedback about Sydney academic Dr. Teena Clerke. Such surveys are used by universities to measure the quality of teaching in their programs.

There is no question that universities need to maintain quality teaching, but, as the cited article points out, there is a problem with teachers being subjected to abuse under any guise. What’s more, such measures are increasingly being used to judge not only the quality of university teaching programs but also the performance of individual teachers, and to help decide whether a given academic should be re-hired, promoted or fired.

While most institutions try to take a balanced view of survey data in staff management, it potentially opens a Pandora’s box of abusive behavior, gender and racial discrimination, bullying and sexual harassment perpetrated by pernicious managers and supervisors (or even by students against teachers). So we need assurances that the benefits of such schemes outweigh the potential risks for abuse, however isolated and infrequent such instances might be.

The recent book “The Tyranny of Metrics” by academic Jerry Muller (2018) treats these issues more comprehensively than I can here. What I have seen over my 28 years in academia is that teaching evaluation started out as a survey of 10 or so questions plus room for comments. The surveys were administered by teachers on a class-by-class basis and returned in a sealed envelope to the university by an appointed student.

Typically, the academic could select one or more of the survey questions from a suite of optional questions, in addition to standardized questions. I illustrate this with my own SET (student evaluation of teaching) results from October 1998 and the Insight evaluation report from June 2015, from the same institution and the same unit of teaching, Instrumental Analysis:

[Image: the two evaluation instruments, October 1998 on the left and June 2015 on the right.]

In 2015 the survey consisted of three questions and was completed online; in this case only 4 students responded. It was administered in two parts, one earlier in the semester (not shown here) and one late in the semester (as shown), which did allow for useful comparisons across the teaching semester. The 2015 Insight survey evaluated the teaching unit (rather than being a personal teaching instrument). However, I was the Unit Coordinator for the unit evaluated, so my performance would be strongly reflected in the survey results.

Over those 17 years the survey changed considerably. In 1998 it comprised 10 questions with comments and was administered in the lecture room, where most of the students present answered it (about 30 in this case). A nominated student collected the completed surveys and mailed them to the university in a sealed envelope. To ensure anonymity, the teacher (in this case myself) was not present while the survey was completed; the nominated student signed a form to attest to this.

By 2015, however, the 3 questions were standardized across all teaching units in the university. At my institution there were plans to include teacher-selected questions alongside the 3 standardized ones as of the next semester. Furthermore, in 2015 only one of the three question responses, the third, was strongly emphasized at staff meetings and annual performance reviews.

In conclusion, reviewing my teaching surveys over 17 years shows that student-based surveys have evolved from a teacher-focused instrument meant to help teachers with their professional development into an institution-centered, one-size-fits-all, impersonal metric. Returning to the original article about the inappropriate comments on age directed at the academic: it’s the impersonal nature of the survey, together with the fact that it is exclusively university-administered, that is the heart of the problem.

If Teena had been able to use survey items better matched to her personal characteristics as a teacher (such as her experience, depth of knowledge, and care and skill in helping students learn) then I’m sure the anonymous comments would have been much more beneficial, and much fairer, to her. As I’ve pointed out, more personally tailored survey questions were available to me in 1998 but not in 2015.

This raises an even more pertinent question: why, after 17 years and all the intervening developments in information technology, do we have less fairness in evaluating the quality of university teaching in 2018 than we had in 1998?
