Guest article by Jonas Holmqvist.

As we saw last month (read here), there is some legitimate criticism of journal rankings, and of the practice of basing decisions about promotion, recruitment, and the like on publications in well-ranked journals. So how should we respond to this situation?

You will notice that I’ve shifted the argument from discussing journal rankings per se (read here) to discussing how the rankings are used. The reason is quite simple: if rankings did not affect our evaluations, promotions, and so on, few people would care much about them. Given that they do, and that some say this is a bad practice, let’s look briefly at the options. Critics have proposed four different counterstrategies, all of them – I dare say – much worse.

Get rid of all rankings

I see little merit or even logic in this argument. First, it seems to be a ‘sour grapes’ argument with nothing proposed instead. Second, it ignores what we already saw: most researchers don’t resort to rankings for their own field; we know which journals are good. I mentioned the top eight; if we scrapped all rankings, it wouldn’t change a thing – those eight journals would continue to be held in high esteem. In fact, this argument is likely counterproductive: getting rid of rankings would hurt precisely those mid-tier journals we are less sure about but which can be rather good. Rankings inform us about them, whereas no rankings are needed to identify the top journals. Third, rankings can be very helpful for young researchers. In some fields without rankings, I regularly hear of distraught PhD students who’ve submitted to predatory journals only to discover both that they now need to pay and that their publication counts for nothing. I believe journal rankings offer some insurance to fresh researchers: if a journal is decently ranked by the AJG, for example, then it is most likely a serious journal.

Just count publications, regardless of outlet

This is – in my opinion – the least convincing of all the counterproposals. It is closely related to getting rid of rankings, albeit a bit different in that it recommends ignoring rankings. This brings us into the often very murky waters outside the rankings. If a journal is not ranked by the FT, it can still be a very good journal. But if a journal is not ranked by the AJG and has no impact factor, these are red flags, or at least orange flags. Sure, it could be a very serious journal that was recently launched – no journal is well ranked right from the start. Having said that, I do hold that many journals outside the rankings are not as thorough in their peer review process as journals in the rankings.

Ironically, those I’ve heard advocate just counting publications regardless of outlet often criticize the use of journal rankings for being “a numbers game”. I’d counter that it’s nothing of the kind: prioritizing a few thorough research pieces in good outlets over a number of quick articles in unranked journals is about quality, not quantity. Just counting publications regardless of quality would indeed be a numbers game. The current system of using journal rankings is the very opposite of a numbers game, precisely because it encourages us to aim for good research able to make it into good journals.

Use open peer review instead

Strictly speaking, this argument is more about how articles are accepted into journals, but it is closely connected to how some criticize both journals and journal rankings. This option suggests that reviewers should know who the authors are, and vice versa. In my view, this brings us back to the ‘misunderstood genius’. Let’s be honest: an influential senior professor could exert quite some pressure – direct and indirect – on academic colleagues to evaluate the professor’s publications positively. Even if no such pressure were applied, many reviewers might still be hesitant to reject, well aware that their identities are known. So this is a “fix” that would improve nothing and instead make the situation worse. By getting rid of the blind review process, we’d move to a system where it really would matter who you are in order to get published. I believe this would be much worse than what we have today.

Use open access instead

While open peer review invites blatant favoritism, the open access model does have some positive aspects, at least at first sight. I think most researchers agree that open access is good and that research should be available to all. On a side note, I fear proponents might be overestimating the public’s demand: most academics already have access to research through journal databases, and something tells me it’s not the lack of open access that keeps marketing journals from being as widely read as the sports pages… Still, the idea of research being available to everyone is laudable.

However, basing annual evaluations and promotions on open access publications would seem to invite some serious ethical issues – issues that might be more problematic than the ones they are supposed to solve. We all know the way to open access: you pay for it. This means that researchers high up in their university system might get funding for it, while younger researchers are unlikely to. Similarly, researchers with sufficiently high salaries or good personal finances could pay for open access and wait for the ‘rewards’ in terms of evaluations and promotions a few years later, effectively perpetuating barriers between the haves and the have-nots. So while using open access for evaluations and promotions may not carry the strong personal bias of open peer review, it would seem to come with a significant and problematic systemic bias.

To conclude this overview, we have seen that there are advantages and certain disadvantages to how journal rankings are compiled and used – yet I would argue the net balance is still positive. While it’s important to keep in mind the various problems that can exist, journal rankings can still be very useful for young researchers not yet familiar with the different journals in their own field, and for all of us when we need to assess the quality of journals in other fields. I’ve tried to counter the four most common alternatives to using journal rankings that I’ve come across, and I’ve explained why I think each of them would only make the problem worse. If I may end by paraphrasing Churchill: “Relying on journal rankings is the worst system – apart from the alternatives.”


Jonas Holmqvist
Associate Professor of Marketing
Kedge Business School

Image credit: Marianne Bos
