Why university rankings are a false measure of success

Ben Lennon examines the criteria university rankings are based on and why they may not be a true measure of an institution’s success.

Every year, when the three major international university league tables are published, there is a misguided reaction. Since the first of them appeared in 2003, the media have treated their publication as a global news event and taken the results to be the definitive currency of the educational market. In October, the editor of the Times Higher Education (THE) rankings was quoted in The University Times as saying that Trinity’s decline on the previous year ‘should be cause for alarm’. However, falling nineteen places in a league of four hundred that relies on dubious methodology should not lead to panic. Instead of focussing our attention on these tables, we should be treating them with disdain.

Yet these rankings matter greatly, because they exert unnecessary pressures on universities. They influence prospective students, employers, philanthropists and university staff. Department heads seeking re-appointment or new positions now often cite the tables as evidence of their success, but behind them lie a number of ideological assumptions that rarely get questioned: firstly, that there are standards of ‘quality’ that can be objectively measured; secondly, that it is necessary and desirable to assess institutional quality according to externally defined performance indicators; thirdly, that the punitive use of league tables will help to drive up educational performance; and finally, that assessment is a ‘neutral’ measuring instrument. The very existence of worldwide league tables for universities is a symbolic indicator of the values of the neo-liberal market being incorporated into third-level education. It is also significant that these rankings are usually compiled by commercial bodies rather than by the universities themselves, which have no control over the criteria.

It is hard to overstate just how blunt the instruments used to quantify university success actually are. Universities exist to inform, inspire and engage students, but this cannot be calculated. Instead, the three main ranking bodies rely on proxies for this quality that ultimately fail. The various measurement tools used by each take into account things like article citation counts, faculty size, graduation rates and institutional reputation. They tend to emphasize research over teaching, and none of them pays any attention to factors like social mobility or student and alumni opinions.

To understand how flawed these indicators are, consider the example of institutional reputation. Some years ago a former chief justice of the Michigan Supreme Court sent a questionnaire to hundreds of fellow senior lawyers in the country, asking them to rank the top law schools in order of quality. Penn State was rated the fifth most prestigious on average, despite the fact that, at the time, it did not have a law school. The respondents simply saw a familiar name and assumed it to be good. That something as prejudiced and vague as reputation is used to measure the value of education illustrates just how arbitrary these rankings are.

It is even quite simple to game the system. In 2012, the University of Limerick, despite not being ranked at all by the Shanghai or THE lists, was awarded “5-star ratings across the areas of infrastructure, teaching, engagement and internationalization” by the QS system. This could potentially be down to the fact that UL paid QS a once-off audit fee of €14,000 that year, plus an annual fee of €5,000, to have the university ‘inspected’ in these areas. Similarly, Alexandria University in Egypt was ranked 147th by THE at a time when roughly a third of the weighting behind its position was based on research output. This was largely down to one academic who published 320 articles in a single year in a journal of which he was the editor. Even after this was revealed, the university’s position was defended by the deputy editor of THE. The fact that a fee paid to the ranking body, or the output of a single academic, can make a university appear world class undermines the entire process.

In other attempts to enhance their positions, many smaller institutions have grouped together in a bid to increase their academic output and copy the larger US and UK institutions that do so well. An example of this in Ireland is the newly formed alliance (and rumored future merger) between UCD and NCAD. One would assume that our national art college has benefited from the relative autonomy it previously enjoyed, but this may now have to be sacrificed to some degree. There are many other examples of institutions around the world going to great lengths to improve their scores. In the US there have been cases of universities misreporting graduation rates and manipulating class sizes to create the impression of a better student-to-teacher ratio, while actually reducing overall teaching hours to do so. When such moves appear to be motivated by nothing other than improving a position in the tables, it becomes clear that these systems of measurement have become instruments of reform in themselves.

Trying to say which university is the best in the world is a bit like trying to pick the best album ever made: there is no one right answer. The problem with doing this for universities, however, is how seriously people take the results. What should be considered a rough indicator in certain areas is now taken as the definitive measure of an institution’s worth. It is easy and understandable that second-level students in our risk-averse society, choosing where to apply, look at a list online to compare the value of one university against another. The problem is that most people do not look beyond the ranking and ask what it is based on. As a result, these tables may already have become a self-fulfilling prophecy. The brightest students will apply to the supposed top universities, and other institutions will increasingly bow to the pressure to conform to what the rankings demand, even if doing so has a negative impact on genuine education.

Albert Einstein is reported to have said, “Not everything that counts can be counted, and not everything that can be counted counts.” This is certainly true of global university rankings. The quality of an education is too intangible to measure. Instead, the groups that compile these lists use various alternatives that do not reflect what they claim to. The real worry, however, is when the process of measurement starts to affect what is being measured. Next time the league tables are published, let’s save ourselves either the congratulations or the complaining and get on with what really matters.