May 14th, 2013

The apparatus of research assessment is driven by the academic publishing industry and has become entirely self-serving


Peer review may be favoured as the best measure of scientific assessment ahead of the REF, but can it be properly implemented? Peter Coles does the maths on what the Physics panel faces and finds there simply won’t be enough time to do what the REF administrators claim. Instead, closed-access bibliometrics will have to substitute for legitimate assessment of outputs.

What I want to do first of all is to draw attention to a very nice blog post by a certain Professor Moriarty who, in case you did not realise it, dragged himself away from his hiding place beneath the Reichenbach Falls and started a new life as Professor of Physics at Nottingham University.  Phil Moriarty’s piece basically argues that the only way to really judge the quality of a scientific publication is not by looking at where it is published, but by peer review (i.e. by getting knowledgeable people to read it). This isn’t a controversial point of view, but it does run counter to the current mania for dubious bibliometric indicators, such as journal impact factors and citation counts.

The forthcoming Research Excellence Framework involves an assessment of the research that has been carried out in UK universities over the past five years or so, and a major part of the REF will be the assessment of up to four “outputs” submitted by each research-active member of staff over the relevant period (2008 to 2013). Reading Phil’s piece might persuade you to be happy that the assessment of the research outputs involved in the REF will be primarily based on peer review. If you are, then I suggest you read on because, as I have blogged about before, although peer review is fine in principle, the way it will be implemented as part of the REF has me deeply worried.

The first problem arises from the scale of the task facing members of the panel undertaking this assessment. Each research active member of staff is requested to submit four research publications (“outputs”) to the panel, and we are told that each of these will be read by at least two panel members. The panel comprises 20 members.

As a rough guess, let’s assume that the UK has about 40 Physics departments and that the average number of research-active staff in each is about 40. That gives about 1,600 individuals for the REF. In fact the number of category A staff submitted to the 2008 RAE was 1,685.57 FTE (Full-Time Equivalent), pretty close to this figure. At 4 outputs per person that gives 6,400 papers to be read. We’re told that each will be read by at least two members of the panel, so the overall job size is 12,800 paper-readings. There is some uncertainty in these figures because (a) there is plenty of evidence that departments will be more selective about whom they enter than was the case in 2008, and (b) some departments have increased their staff numbers significantly since 2008. These two factors work in opposite directions, so, not knowing the size of either, it seems sensible to stick with the numbers from the previous round for the purposes of my argument.

The panel has 20 members sharing those 12,800 paper-readings, so between 29th November 2013 (the deadline for submissions) and the announcement of the results in December 2014 each member will have to read 640 research papers. That’s an average of roughly two a day…
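For concreteness, here is the back-of-envelope arithmetic above as a short script. It is only a sketch: the inputs are the rough estimates quoted in the text, and 1st December 2014 stands in for the unspecified results date.

```python
from datetime import date

# Rough inputs, taken from the estimates in the text above.
departments = 40         # approximate number of UK Physics departments
staff_per_dept = 40      # approximate research-active staff per department
outputs_per_person = 4   # outputs submitted per research-active person
readings_per_paper = 2   # each output read by at least two panel members
panel_size = 20          # members of the Physics panel

papers = departments * staff_per_dept * outputs_per_person  # 6,400 papers
readings = papers * readings_per_paper                      # 12,800 readings
per_member = readings / panel_size                          # 640 papers each

# Assessment window: submission deadline to (assumed) results date.
days = (date(2014, 12, 1) - date(2013, 11, 29)).days        # 367 days

print(f"{papers} papers, {readings} readings, "
      f"{per_member:.0f} per panel member, "
      f"{per_member / days:.1f} papers per member per day")  # ~1.7 a day
```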

It is therefore blindingly obvious that whatever the panel does will not be a thorough peer review of each paper, equivalent to refereeing it for publication in a journal. The panel members simply won’t have the time to do what the REF administrators claim they will do. We will be lucky if they manage a quick skim of each paper before moving on. In other words, it’s a sham.

Now we are also told the panel will use their expert judgment to decide which outputs belong to the following categories:

  • 4* World Leading
  • 3* Internationally Excellent
  • 2* Internationally Recognised
  • 1* Nationally Recognised
  • U Unclassified

There is an expectation that the so-called QR funding allocated as a result of the 2014 REF will be heavily weighted towards 4*, with perhaps a small allocation to 3* and probably nothing at all for the lower grades. The word on the street is that the weighting for 4* will be 9 and that for 3* only 1. “Internationally Recognised” will be regarded as worthless in the view of HEFCE. Will papers belonging to the category “Not really understood by the panel member” suffer the same fate?

The panel members will apparently know enough about every single one of the papers they are going to read to place each into one of the above categories, especially the crucial ones, “world-leading” and “internationally excellent”, both of which are obviously defined in a completely transparent and objective manner. Not. The steep jump in weighting between 3* and 4* means that a single such judgement could produce a drop in funding large enough to spell closure for a department; a toy example of just how sharp that cliff is follows below.
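Here is a minimal sketch, assuming the rumoured 9:1 weights above and a purely illustrative funding rule (QR money proportional to the weighted sum of graded outputs). The department profiles and the proportionality rule are hypothetical, not the actual HEFCE funding model.

```python
# Rumoured REF weights from the text: 4* = 9, 3* = 1, lower grades = 0.
WEIGHTS = {"4*": 9, "3*": 1, "2*": 0, "1*": 0, "U": 0}

def weighted_score(profile):
    """profile maps a star rating to the number of outputs at that level."""
    return sum(WEIGHTS[grade] * n for grade, n in profile.items())

# Two hypothetical, otherwise identical departments: for the second,
# the panel's judgement moves 20 outputs from 4* down to 3*.
dept_a = {"4*": 60, "3*": 80, "2*": 20}
dept_b = {"4*": 40, "3*": 100, "2*": 20}

score_a = weighted_score(dept_a)  # 60*9 + 80*1 = 620
score_b = weighted_score(dept_b)  # 40*9 + 100*1 = 460

# If QR money is proportional to the score, B gets ~26% less than A.
print(f"A = {score_a}, B = {score_b}, drop = {1 - score_b / score_a:.0%}")
```

Under these assumptions, re-grading a third of a department’s 4* outputs as 3* cuts its QR allocation by roughly a quarter, even though the work judged “internationally excellent” has not become any worse.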

We are told that after forming this judgement based on their expertise the panel members will “check” the citation information for the papers. This will be done using the SCOPUS service provided (no doubt at considerable cost) by Elsevier, which by sheer coincidence also happens to be a purveyor of ridiculously overpriced academic journals. No doubt Elsevier are on a nice little earner peddling meaningless data for the HEFCE bean-counters, but I have no confidence that they will add any value to the assessment process.

There have been high-profile statements to the effect that the REF will take no account of where the relevant “outputs”  are published, including a pronouncement by David Willetts. On the face of it, that would suggest that a paper published in the spirit of Open Access in a free archive would not be disadvantaged. However, I very much doubt that will be the case.

I think if you look at the volume of work facing the REF panel members it’s pretty clear that citation statistics will be much more important for the Physics panel than we’ve been led to believe. The panel simply won’t have the time or the breadth of understanding to do an in-depth assessment of every paper, so in many cases it will inevitably be led by bibliometric information. And since SCOPUS doesn’t cover the arXiv, citation information will be entirely missing for papers published only there.

The involvement of  a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry. The REF is now pretty much the only reason why we have to use traditional journals. It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives. It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are. The saddest thing is that we’re all so cowed by the system that we see no alternative but to participate in this scam.

Incidentally, before the 2008 Research Assessment Exercise we were told that citation data would emphatically not be used; we were told afterwards that citation data had been used by the Physics panel. That’s just one of the reasons why I’m very sceptical about the veracity of some of the pronouncements coming out of the REF establishment. Who knows what they actually do behind closed doors? All the documentation is shredded after the results are published. Who can trust such a system?

To put it bluntly, the apparatus of research assessment has done what most bureaucracies eventually do; it has become entirely self-serving. It is imposing increasingly ridiculous administrative burdens on researchers, inventing increasingly arbitrary assessment criteria and wasting increasing amounts of money on red tape which should actually be going to fund research.

And that’s all just about “outputs”. I haven’t even started on “impact”….

This was originally posted on Peter Coles’ personal blog and is reposted with permission.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics.  

About the Author

Peter Coles is Professor of Theoretical Astrophysics and Head of the School of Mathematical and Physical Sciences at the University of Sussex. His research is in the area of cosmology and the large-scale structure of the Universe.



This work by the LSE Impact of Social Sciences blog is licensed under a Creative Commons Attribution 3.0 Unported License.