Outdated journal rankings and the ERA exercise


by

Professor Rick Sarre, School of Law, University of South Australia, and President, Australian and New Zealand Society of Criminology (ANZSOC)

Professor Kerry Carrington, Head of School of Justice, Faculty of Law, Queensland University of Technology.

Professor Reece Walters, Assistant Dean of Research, Faculty of Law, Queensland University of Technology.

Forthcoming in PacifiCrim, ANZSOC Newsletter, May 2015

As the results of the latest Excellence in Research for Australia (ERA) exercise come closer to being announced, universities around Australia are holding their collective breath. The ERA claims to be an assessment of research strengths and quality at Australian universities. While it is not supposed to produce a set of league tables, that is ultimately what tends to happen.

Almost a decade ago, policymakers began the search for credible research performance indicators, and bibliographic metrics tables were born. In 2009 the Australian Research Council (ARC) published a set of journal rankings based on advice and feedback from various academic and professional associations. Journals were ranked A*, A, B or C, on the basis of an academic assessment of journals published from 2001 to 2006. The exercise did not last long. Two years later, the rankings were discarded by the then Minister, Kim Carr, for two reasons: first, it had become apparent that evaluation committees were tending to rely upon their own expert knowledge rather than on the ranked lists; and second, the rankings were deemed to have become outdated. Moreover, the Minister said there was evidence the rankings were being ‘deployed inappropriately within some quarters of the sector’ and ‘in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings. One common example was the setting of targets for publication in A and A* journals by institutional research managers.’ (Kim Carr, quoted in Mazzarol and Soutar)

The news was welcomed by the Australian Academy of Science’s Secretary for Science Policy, the Academy of Social Sciences, the Australian Academy of the Humanities, and Margaret Sheil, the then CEO of the ARC. Indeed, journal rankings were not used in the 2012 ERA exercise. Instructions to applicants and reviewers for ARC grants consistently state that these rankings should not be used as measures of quality. Finally, the National Tertiary Education Union (NTEU) is concerned about the misuse of any ERA journal ranking in performance management, and states unequivocally ‘that its continued use as a measure of research performance or in any other context is illegitimate.’

Notwithstanding all of the above, these outdated lists continue to enjoy the favour of many Australian university managers. This gives rise to some unfortunate consequences. For example, researchers are discouraged from publishing in new and innovative journals that were ranked less than an A in 2006. Schools are now unwilling to begin new publication ventures because new journals will remain unranked for a considerable period of time.

There is another worrying aspect to this, one that has global consequences. It is no accident that journal citation and ranking measures place journals from the US and the UK (and sometimes Europe) at the top of the lists, with one apparent measure being sheer longevity. Newcomers from the global south, such as Australia and Latin America, have, in the last twenty years especially, used open access, clever marketing and innovation to challenge the dominance of the big players. Traditional (global north) journal publishers are likely to attempt to manipulate the ranking lists to counter these trends. The Scopus Journal Ranking system includes only seven journals from Australia in criminology and law, all of which rank in the lower quartiles. The ANZ Journal of Criminology, Current Issues in Criminal Justice and the International Journal for Crime, Justice and Social Democracy do not appear in the system at all. Australian journals cannot compete on a level playing field with journals from the densely populated northern hemisphere. Yet it is important that we support our own journals from the global south.

Why do managers in Australian universities persist in using outdated journal rankings when arranging and assessing their submissions to ERA 2015 or, indeed, for anything else? These rankings are officially dead, so why have they not been buried? The answers are not immediately clear. But we do know that, until the relevant funerals are held, younger tertiary institutions, new journals, newer disciplines and early career researchers will continue to be seriously disadvantaged.


2 responses

  1. Michael Crisp

    This is a timely article and it poses a good question.

    I think that these new measures are often developed in the belief that they will simplify the evaluation of research and therefore save time and money. However, we often forget that research is complex – it is not easy to do and it happens at the boundaries of what we already know – so the evaluation of research is also going to be complex. This is one reason why we have the peer review process, which is probably still one of the best ways to evaluate research (although it comes with its own issues).

    There is an assumption that we need to simplify research evaluation, streamline it and reduce its cost, for example by removing academics from the evaluation process (presumably to free up their time to do more research). But that won’t work completely, because research is not easy, so the evaluation of it is unlikely to be easy either. It is important, though.

    I think it is still important for us to be able to demonstrate and test the quality of our research, and maybe that is just something that has to take time, money and academic input. Maybe research quality can’t be condensed down into a single number (or letter) based on proxy measures that an administrator or public servant can produce without engaging with the research itself.

    The other problem is the proliferation of journal rankings. Just like university rankings, new journal rankings are popping up all the time. I have made a little collection of some here if people are interested or want to add some more: http://www.researchimpact.com.au/viewforum.php?f=20

  2. Kerry Carrington

    Hi Michael,
    Thanks for your comment. I agree that assessing research quality takes time and requires academic expertise, which is why resorting to the old measures based on journal rankings from 2001-06 is so problematic. Thanks also for putting together a list of other ranking measures. Kerry