Volume 9, Issue 9
A few weeks ago in the De Minimis article “Can’t Stop, Won’t Stop”, Will Mosseff decried the cost-cutting behind the recent administrative changes at the law school. He said that “it is the reputation of the institution that matters in attracting students. And reputation, in turn, is dependent on the quality of student services.”
It was pointed out in a comment, however, that “both LSE and Cambridge employ centralised student administration and service models. These universities rank higher in QS university rankings (which MLS relies on)…”
The comment’s implication is correct: a university’s position in ranking systems has become synonymous with its reputation (which is why marketing departments love them). And it is here that Will was in error. Reputation is tied up with rankings, but rankings have almost nothing to do with “the quality of student services”. If anything, the opposite is the case.
What do rankings actually measure? QS, cited by the commenter, ranks universities based on “surveys of 70,000 academics and graduate employers, alongside measures of the impact of research”. The Times Higher Education (THE) rankings, mentioned numerous times in the recent University of Melbourne “Growing Esteem 2014” discussion paper, use “13 carefully calibrated performance indicators” grouped into five areas: Teaching, Research, Citations, Industry income, and International outlook.
But what does that mean?
It means that rankings do not measure the quality of student services. They do not measure student well-being. Nor do they measure the statutorily defined objects of the University of Melbourne – the serving of the community, the enrichment of cultural life, and the accommodation of free intellectual inquiry.
Instead, rankings measure the opinions of business, of elite journals, and of international academics. This was shown again recently when Queensland University of Technology was named among the best young universities in the world by the THE rankings; the result was attributed to QUT’s “really good industry links”, its successful research, and its international relationships.
This has unfortunate distortionary consequences.
“Industry links” dramatically curtail free inquiry. A recent illustration is the CSIRO’s plan to stop “doing science for science’s sake” and to cease “public good” work unless it is linked to jobs and economic growth.
Cutting administration costs, and thereby degrading student services and student well-being, can actually help a university climb the rankings and gain a better reputation: the savings free up money for research targeted at top journals and for fostering those international relationships (see here & here for more).
Research is an expensive business, however, so cutting administration costs is not enough. The “ambitious goal for Melbourne” is to become a “billion dollar research enterprise by 2025” (here, p 30). The “significant additional income” necessary to achieve this goal cannot be achieved without “additional revenue through teaching” (ibid).
To increase teaching revenue, the University of Melbourne needs to exploit students as much as possible through high fees. This is what is behind Glyn Davis’ push for deregulation and the concomitant massive increase in student debt. It is also what is behind Melbourne’s “targets” to “broaden the base of international enrolments and achieve a 50:50 split between undergraduate and graduate student enrolments”: international and postgraduate fees are already deregulated. Both targets, Melbourne says, have already been met (see here, p 13).
Warren Bebbington of the University of Adelaide has pointed out that rankings scarcely measure teaching or the campus experience at all. Indeed, “university rankings would have to be the worst consumer ratings in the retail market”.
Simon Marginson, formerly a professor at Melbourne, who sits on the board of the THE rankings, has said the world would be a better place if rankings did not exist. “The link back to the real world”, he says, “is over-determined by indicator selection, weightings, poor survey returns and ignorant respondents, scaling decisions and surface fluctuation that is driven by small changes between almost equally ranked universities.”
Phil Baty, editor of the THE rankings, has said rankings should come with “health warnings”.
Hamish Coates, research director at the Australian Council for Educational Research, has stated that “University rankings are false and misleading… The correlation between having a Nobel prize winner on staff and the quality of first-year teaching is zero.”
So the next time the University of Melbourne climbs some bullshit rankings and the Vice-Chancellor writes to you, as he did in 2014, “to share the University's excitement at this news” and asks you to “take a moment to enjoy this recognition of excellence at the University of Melbourne”, bow your head and weep.
He’s asking you to delight in the fact he’s exploiting you for the purposes of a marketing gimmick.