Professor Ellen Hazelkorn, Policy Advisor to the Higher Education Authority (Ireland) and Emeritus Professor and Director at the Higher Education Policy Research Unit (HEPRU), Dublin Institute of Technology, responds to a new highly critical analysis of global university league tables from Bahram Bekhradnia, President of the Higher Education Policy Institute.
As Bahram Bekhradnia says at the outset of his analysis: rankings “have not just captured the imagination of higher education but in some ways have captured higher education itself.”
There is strong international evidence from the last ten years that global rankings have become an influential driver of higher education decision-making and academic behaviour, at both the institutional and national levels.
Rankings are used to highlight ambition and set explicit strategic goals and priorities; to measure performance and reward success; and to allocate resources. Students, especially high achievers and international students, use rankings to inform choice. Universities use rankings to identify potential partners and membership of international networks. Employers and other stakeholders use rankings for recruitment or publicity purposes. And, government policy around the world is increasingly influenced by rankings.
While rankings generate lots of interest, do they tell us anything meaningful about higher education performance and quality?
First, it’s important to remember that rankings are a commercial product – except for U-Multirank, developed by the European Commission. Times Higher Education (THE) is still considered an independent newspaper, but it is often difficult to distinguish its journalistic role from its role as marketing agent for its successful THE World University Rankings. QS is first and foremost a commercial company, producing the QS World University Rankings and a wide array of other products, including consultancy. Even the Academic Ranking of World Universities (ARWU), from Shanghai Jiao Tong University, has been spun off to form the ShanghaiRanking Consultancy.
In recent years, rankings have expanded their global reach. In addition to myriad regional rankings, THE has joined forces with the Wall Street Journal to develop a US version, while US News and World Report has reciprocally expanded internationally. LinkedIn has developed an employability ranking, and QS is developing Global Workplace as a recruitment arm. Both THE and Shanghai, along with Thomson Reuters, are heavily involved in monetising higher education data.
Second, rankings essentially measure wealth – whether garnered through institutional age, endowments, tuition or government investment. The top-ranked universities are a good example of this phenomenon, with budgets exceeding those of many countries. Top-ranked UK and US universities differ, of course, in their funding sources. In the US, income is primarily private, via student fees that can reach $40,000 (£31,800) annually, supplemented by the universities’ own endowments. In the UK, universities are more reliant on public funds, but even those with endowments cannot hope to match the incredible sums across the Atlantic. Harvard's endowment reached $36bn (£28.6bn) for 2015, followed by Yale at over $25bn (£19.9bn). By comparison, Cambridge had £5.8bn and Oxford £4.2bn, with Edinburgh, the next largest, an order of magnitude smaller at £317m. And this gap in wealth is widening – as nations, and their publics, around the world are affected in different ways by global economic and political crises. Gauging performance through the lens of the top-ranked universities skews our understanding of educational quality, and creates perverse benchmarks.
Third, the choice of indicators – the data sources and the formatting of the results – creates its own perversities, as Bekhradnia so ably discusses. Rankings focus disproportionately on research and reputation for two reasons. There is a dearth of reliable international comparative data, so rankings rely on bibliometric sources, which discount the arts, humanities and social sciences. But rankings also treat research and reputation as the defining attributes of higher education, and as the determinants of quality. Accordingly, ARWU devotes 100 per cent of its weighting to research and research-related activities, while THE devotes 90 per cent and QS 70 per cent. THE and QS, respectively, assign 33 per cent and 50 per cent to reputational surveys, which are appropriately criticised in the HEPI publication.
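The mechanics behind these weightings are simple arithmetic: a ranking score is a weighted sum of normalised indicator scores, so whichever indicators carry the most weight dominate the outcome. The sketch below is purely illustrative – the indicator names, scores and weighting schemes are invented for this example and do not reproduce any ranker's actual methodology – but it shows how a research-heavy scheme rewards the same hypothetical university far more than one that gives teaching real weight.

```python
# Illustrative sketch only: composite ranking scores as weighted sums
# of normalised indicators. All numbers here are hypothetical.

def composite_score(indicators, weights):
    """Weighted sum of indicator scores (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * indicators[k] for k in weights)

# A hypothetical university: strong research, weak teaching signals.
indicators = {"research": 90.0, "reputation": 80.0, "teaching": 40.0}

# Two hypothetical weighting schemes: one research-dominated (loosely
# echoing the research-heavy shares discussed above), one balanced.
research_heavy = {"research": 0.7, "reputation": 0.2, "teaching": 0.1}
balanced       = {"research": 0.4, "reputation": 0.2, "teaching": 0.4}

print(composite_score(indicators, research_heavy))  # 83.0
print(composite_score(indicators, balanced))        # 68.0
```

The same institution scores 83 under the research-heavy scheme but only 68 under the balanced one – the weighting choice, not the underlying performance, decides the league table position.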
There are also important questions about specific indicators. All the evidence suggests that the quality of teaching matters far more for student achievement than the staff/student ratio. Many top professors may never teach, may be terrible teachers, or may have little or no interest in their students; and students may be disengaged. THE uses a reputational survey, but it is unclear how anyone can genuinely rate someone’s teaching ability without being in the classroom. Internationalisation indicators incentivise quantity over quality, and reflect a country’s geographic position: Switzerland, for example, is small and sits at the centre of Europe, so high proportions of international staff and students come largely by virtue of geography.
On the other hand, rankings take no cognisance of the quality of the student experience, regional or civic engagement, or the value and impact of research beyond the academy. These are all potentially meaningful, but more complex to measure using quantitative indicators.
Thus, by prioritising criteria which benefit universities which recruit elite, high-achieving students, rankings ignore the overwhelming majority of students. The top-100 universities represent only 0.5 per cent of total higher education institutions worldwide, and only 0.4 per cent of students!
International university rankings: For good or ill? is recommended reading for everyone involved or interested in higher education – to remind ourselves, again, of the harmful effects of rankings. This is necessary because, despite years of valuable analysis, governments and universities around the world continue to reference rankings and use them to drive policy and priorities. One of the big lessons of global rankings is the extent to which higher education (and higher education policy) has become vulnerable to an agenda set by others.
Professor Ellen Hazelkorn is Policy Advisor to the Higher Education Authority (Ireland) and Emeritus Professor and Director, Higher Education Policy Research Unit (HEPRU), Dublin Institute of Technology. She is also International Co-Investigator, Centre for Global Higher Education, UCL Institute of Education, London. She is the author of Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence, 2nd edition (Palgrave, 2015), and Global Rankings and the Geopolitics of Higher Education (Routledge, 2016).
© 2013 Media FHE, all rights reserved