

HEi-think: How to reform the REF - the Stern challenge

Last month saw the passing of the deadline for responses to the Stern Review of the Research Excellence Framework. Professor Nick Talbot, Deputy Vice-Chancellor (Research and Impact) at the University of Exeter, gives his thoughts on the key issues and how the REF could be improved.


March 24 marked the closing of the consultation on the Stern Review, which seeks to analyse arguments about the format, shape and scope of the Research Excellence Framework (REF).

Nicholas Stern is best known for his Review of the Economics of Climate Change, published in 2006, a review that became extremely influential in provoking debate around the sustainable economics of climate change; indeed, it altered the debate quite profoundly. Will his review of the REF have a similar effect?

The remit of the Stern Review of the REF is to determine whether the evaluation of research quality in UK universities is being carried out in the most sensible manner, using the most appropriate tools and the clearest drivers. It also asks whether the administrative burden on universities, estimated to have cost as much as £246 million for REF2014, can be reduced. The consultation asks a series of questions about the mechanisms by which the REF could be improved and streamlined. The review comes on the back of the Wilsdon Metric Tide Review published in 2015, which made recommendations about the use of responsible metrics in the evaluation of research quality.

Why does the REF matter?

The REF and its predecessor, the Research Assessment Exercise (RAE), have become deeply ingrained in academia, stretching back to 1986. They have had a profound influence on both the shape and quality of the research base across the UK, driving research concentration into a small number of larger research-intensive universities, moving resource to departments excelling in particular disciplines, and fuelling a transfer market for the highest-performing academic staff.

Although ostensibly a mechanism for HEFCE to distribute quality-related research (QR) funding (one half of the dual support mechanism for funding research) to universities, the REF has always had a number of other purposes. It ensures accountability for the use of public investment in research and benchmarks the quality of research in subject areas across the higher education sector.

But the REF/RAE has also been a driver of academic behaviour and management priorities. Although much less acknowledged (and even controversial), the REF/RAE has been an external driver of performance management in universities, putting pressure on senior managers to evaluate academic staff performance much more closely than they might otherwise have done. Although management is often unpalatable as a concept in academia, it is hard to argue that this has been unsuccessful. UK academia, after all, produces a much higher proportion of the most highly cited research than would be expected from the size of its academic community (as noted in the Nurse Review), and inward investment in the research base has been exceptional over the last three decades. Love it or loathe it, the REF/RAE would appear, on all the evidence, to have driven up standards.

Most recently, the introduction of an evaluation of the broader impacts of research in REF2014 provided evidence of how academic research has informed public policy, contributed to societal good, and led to significant innovation and (the part that governments like best) wealth creation. As a consequence of the REF, the generation of impact is now much more widely recognised as an academic activity; indeed, it has spawned entirely new processes and support structures in most universities.

How could the REF be improved?  

How then to improve the REF while also making it much less onerous? The two biggest criticisms levelled at the REF are, first, that it is too expensive and burdensome, and second, that there is too much gaming of the system by universities.

I am not sure that either criticism is completely warranted, even if one accepts the £246 million figure for REF2014 (which rests on many assumptions about the time spent on internal evaluations and selection). This is still less than 1 per cent of the total £27 billion of public research investment in the HE sector over the six-year REF cycle. The REF actually involves intense activity by only a small number of staff in each university. Most academics have little to do with its preparation, other than the research they would be doing anyway, while the evaluation itself involves a (relatively speaking) tiny group of hard-working panellists. Gaming undoubtedly goes on, but it is largely about reputation and league tables. It is quite hard to game the QR funding formula: you either have research strength in volume, or you don't, and the funding table is the only one that actually matters in the end.

Some big questions appear to me to be settled. Peer review, for all its faults, remains the gold standard for evaluation and, importantly, commands confidence across the sector. Responsible metrics can help inform judgments but cannot drive the process, and this seems broadly agreed. The REF also needs to be divided into subject-level evaluations for reasons of benchmarking, so this too seems unlikely to change. Large reductions in the actual cost of running the REF process (£14 million) are therefore unlikely, and probably undesirable too, when there is more than £10.2 billion of QR to distribute in a defensible manner.

The main way, however, in which the REF could certainly be improved, in my view, is by providing a more even evaluation, which limits the (inevitable) gaming that does go on by institutions. The most obvious way to achieve this would be to require all staff on academic contracts that include research to submit publications to the REF. If 100 per cent of such staff in all institutions had to be returned, then we would have a pretty fair reflection of research quality. In such a system, the present requirement of up to 4 publications per academic could be retained, but with all staff having to submit their best work (even when they have published fewer than 4 articles). This would limit the most onerous part of the REF preparatory process: selecting which publications, and therefore which staff, to return. It would, of course, lead to some gaming by institutions in changing staff contracts (which they would deny), but it would be harder to do this in large numbers.
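To make the full-return idea concrete, here is a minimal sketch, assuming entirely hypothetical researchers and output scores, of the rule described above: every academic is returned with their best outputs, up to four each, and nobody is selected out.

```python
# Hypothetical sketch of a 100 per cent return: every member of academic staff is
# submitted with their best outputs, capped at four each, however few they have.

def build_return(staff_outputs: dict[str, list[float]], cap: int = 4) -> dict[str, list[float]]:
    """Map each academic to their top `cap` output scores; no staff are excluded."""
    return {
        academic: sorted(scores, reverse=True)[:cap]
        for academic, scores in staff_outputs.items()
    }

# Illustrative unit: the output "quality scores" are invented for the example.
unit = {
    "Researcher 1": [3.2, 3.8, 2.9, 3.5, 3.9],  # five outputs: the best four are taken
    "Researcher 2": [3.1, 2.8],                 # only two outputs: both are returned
    "Researcher 3": [],                         # no outputs: still appears in the return
}

print(build_return(unit))
# {'Researcher 1': [3.9, 3.8, 3.5, 3.2], 'Researcher 2': [3.1, 2.8], 'Researcher 3': []}
```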

I think it is extremely unlikely, however, that the review will recommend mandatory 100 per cent REF returns. So many less research-intensive institutions would see their league table advantages in some subjects disappear, while larger research-intensives would have to admit to having much larger numbers of less productive researchers than they would like.  I doubt they will vote for that. A compromise might be an entry threshold, where you can only return in a subject if you include outputs from 80 per cent of your academic staff, for example.  This would be progress.

Breaking the individual performance link

There is, however, a lot of discussion in the sector about moving the attribution and ownership of publications away from individual academics and to the institution. This would break the often-perceived link between an individual academic's performance and inclusion in the REF. Although the link is not explicit in many institutions (ours included), some universities have used REF selection as a key criterion for performance management.

One way to avoid the link, and also to limit the power of individuals in the academic transfer market, would be to allow an institution to showcase its very best publications within a particular subject area without tying them explicitly to individual academics. Each unit of assessment (subject area) could be asked, for instance, to submit its very best publications at 4x the total number of staff, so there would be no staff selection. In an extreme case, the papers could all be authored by one academic, but this would be heavily penalised: the REF environment score would include an 'authorship dividend', by which the intensity or breadth of scholarship within a unit would be evaluated. So, if every academic in a unit contributed to the authorship of the selected papers, this would produce the maximum intensity score. The number of publications could also be varied for particular disciplines, for example 2x or 3x for the Humanities and Social Sciences, with double-weighting allowed for monographs, as now. An equality and diversity process to protect staff would, of course, be necessary (and was one of the very best developments of REF2014). In this model, publications would be claimed on the basis of the authorship address and not the individual.
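As an illustration of how such a unit-level model might be scored, here is a minimal sketch. The researchers, the outputs and the exact intensity measure are assumptions made for the sake of the example; the 4x multiplier and the idea of rewarding widely spread authorship follow the proposal above, but nothing here is a defined REF formula.

```python
# Illustrative sketch (not an official REF formula): a unit returns its best
# outputs at 4x staff headcount, with no staff selection, and an "authorship
# dividend" rewards units where authorship of those outputs is spread widely.

def required_outputs(staff_count: int, multiplier: int = 4) -> int:
    """Outputs a unit must return: multiplier x staff headcount (2x or 3x could
    be used for the Humanities and Social Sciences)."""
    return multiplier * staff_count

def intensity_score(staff: set[str], selected_outputs: list[set[str]]) -> float:
    """Fraction of the unit's academics who author at least one selected output:
    1.0 means every academic contributed; low values mean the return rests on a
    handful of individuals and would be penalised in the environment score."""
    contributing = set().union(*selected_outputs) & staff
    return len(contributing) / len(staff) if staff else 0.0

# Hypothetical unit of assessment with five academics; each output is represented
# by the set of unit authors on it.
staff = {"A", "B", "C", "D", "E"}
outputs = [{"A"}, {"A", "B"}, {"C"}, {"B", "D"}] + [{"A"}] * 16   # 20 outputs in total

print(required_outputs(len(staff)))               # 20 outputs required for 5 staff
print(round(intensity_score(staff, outputs), 2))  # 0.8, since academic "E" contributed nothing
```

In this toy example the unit meets the volume requirement, but because one academic supplies most of the papers and one contributes none, the intensity score falls short of the maximum, which is exactly the behaviour the authorship dividend is meant to discourage.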

Such a solution would allow universities to showcase their very best work and reduce the workload, but it would also encourage collaboration and collegiality, as all academics would have a vested interest in helping their colleagues to do well (imagine that). It would also put the ownership of performance management back with the HEIs themselves, more explicitly separated from the REF. Worth considering, I think.

Other possible innovations

There are a number of other innovations that could also, in my view, help improve the REF. The current format of the REF could be simplified, and the research environment statement could be populated (particularly in science disciplines) with the responsible metrics highlighted by the Wilsdon review, automatically generated by HESA. Data could be collated and linked to the RCUK systems (such as the unpopular ResearchFish, or its offspring) in a more automated way. The portability of impact could also be looked at, to ensure that when academics do move, both universities have an incentive to capture the impact activities of that member of staff. Many really important impacts of academic research were lost completely to the sector in REF2014 because of staff movement (driven, ironically, by the REF transfer market).

The REF could also be made much more forward-looking by basing QR calculations on current staffing levels, rather than historic ones, even though the performance measurements are inevitably historic. This would help to encourage growth and vibrancy, and allow institutions to plan more easily. Taken together, these relatively small innovations would reduce the overall workload of REF preparation without impacting on its resolution or accuracy.
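To illustrate the forward-looking QR point, here is a minimal sketch of a simplified QR-style allocation shared out in proportion to quality-weighted volume. The grade weights used (4 for 4*, 1 for 3*, 0 otherwise) broadly follow recent HEFCE practice, but they, the unit names and all the figures should be read as illustrative assumptions rather than the actual funding formula; the point is only that switching the FTE input from historic to current staffing rewards units that have grown.

```python
# Illustrative sketch only: a simplified QR-style allocation in which a funding
# pot is shared in proportion to quality-weighted research volume (FTE x quality
# profile). Every number here is an assumption chosen for the example.

GRADE_WEIGHTS = {"4*": 4.0, "3*": 1.0, "2*": 0.0, "1*": 0.0}

def weighted_volume(fte: float, profile: dict[str, float]) -> float:
    """Quality-weighted volume for one unit: staff FTE x weighted quality profile."""
    return fte * sum(GRADE_WEIGHTS[grade] * share for grade, share in profile.items())

def allocate_qr(pot: float, units: dict[str, tuple[float, dict[str, float]]]) -> dict[str, float]:
    """Share a QR pot across units in proportion to their quality-weighted volume."""
    volumes = {name: weighted_volume(fte, profile) for name, (fte, profile) in units.items()}
    total = sum(volumes.values())
    return {name: pot * volume / total for name, volume in volumes.items()}

# Two hypothetical units with identical quality profiles. Basing the calculation
# on the FTE returned at the last REF ("historic") versus today's FTE ("current")
# changes the split in favour of the unit that has grown.
profile = {"4*": 0.3, "3*": 0.5, "2*": 0.2, "1*": 0.0}
historic = {"Unit A": (20.0, profile), "Unit B": (20.0, profile)}
current = {"Unit A": (30.0, profile), "Unit B": (20.0, profile)}   # Unit A has grown

print(allocate_qr(1_000_000, historic))   # 50/50 split on historic staffing
print(allocate_qr(1_000_000, current))    # roughly 60/40: growth is rewarded
```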

Stern faces a difficult challenge in navigating the many disparate views the review is likely to receive from the sector. We are, after all, very diverse organisations, and rather than embracing and celebrating this diversity, we all like to compete in a ‘one-size-fits-all’ REF process and then boast about our results, spinning them for all they are worth (which we all deny, of course).

The final thing that is perhaps worth saying is that the Stern consultation, like so many other recent reports, is quite insular and inward-facing.  It feels as if the UK is all alone in grappling with how to evaluate research quality. One final hope I have, therefore, is that an international comparative analysis of other evaluation procedures and funding mechanisms could form part of the review.  While the UK has wholeheartedly adopted an institutionalised national research evaluation, other countries have rejected such a model, and we should at least consider some really radical alternatives from other vibrant academic systems, or more revolutionary innovations to the REF than might come from the consultation.  


Professor Nick Talbot