Guest article by Jonas Holmqvist.

Last month (read here) we looked at some of the most widely used journal rankings, particularly the impact factors, the AJG, and the Financial Times list. This second part focuses on a critical discussion of both upsides and downsides of using journal rankings. It goes without saying, then, that this will be a more subjective piece as it’s more concerned with opinions about journal rankings rather than explaining what they are. I believe such a discussion is relevant whenever we discuss journal rankings, precisely because they are so influential for academic careers.

Journal rankings – love them or hate them?

Some academics might say “We should not care about journal rankings”. That’s easier said than done, for reasons we saw last month. For better or worse, journal rankings play a huge part in decisions about recruitment, promotions, salaries, workloads, and bonuses. In many business schools and universities, they decide all of those aspects, meaning our careers are very much guided by journal rankings. Few academics can avoid journal rankings, regardless of how we feel about them.

My own view on journal rankings tends to be somewhat of a middle ground, and I believe this to be a common position; still, I’d like to start by identifying two polar opposite views. In my experience, whenever talk turns to journal rankings, there are two extreme opposites we need to understand; it’s only a slight exaggeration to say these two are the only ones with an unambiguous opinion on journal rankings.

The “administrator”  

For administrators, journal rankings are a blessing as they tend to make the administrator’s life (or at least their job) easier. With literally thousands of academic journals, nobody can know the quality of each and every one. As service researchers, we tend to have a good knowledge of service journals, and we also tend to know all top marketing journals. For example, a few weeks back I sat in a discussion where talk turned to the “big eight” journals of marketing, and we were all clear on which journals were meant. [1] I dare stick my chin out and guess few marketing researchers would dispute that, showing that we tend to carry a ‘shared canon’ of top journals in our minds. Usually we also have a sound knowledge of most mid-range marketing journals, and a few top journals outside our own field.

However… venturing outside services and marketing, my own knowledge decreases fast. Of course I know that Journal of Finance, Journal of International Business Studies, or Academy of Management Review are outstanding journals in their respective fields. However, I know virtually nothing about finance or accounting journals below the top journals. With so many journals, knowing both the top journals and the mid-range journals in every field is quite a challenge. 

This explains why administrators tend to love journal rankings: administrators often need to evaluate research output from researchers in many different fields, and can hardly be expected to know intuitively the quality of each journal; being able to resort to journal rankings simplifies their job enormously. The rankings provide a clear and simple tool with which to evaluate research output. (Disclaimer: I am of course not claiming that each administrator really corresponds to the ‘administrator’ in my example.)

The “misunderstood genius”

The opposite extreme to the administrator is the ‘misunderstood genius’ who despises journal rankings. Moreover, the misunderstood genius never tires of telling anyone how both journal rankings and the entire publication system are corrupt and flawed. Their prime evidence, of course, is that their own outstanding, earth-shattering, groundbreaking research regularly gets rejected. The misunderstood genius puts this down to favoritism in academia, and thinks journal rankings are part of this great cabal. A few years back, some French ‘misunderstood geniuses’ published an angry opinion piece in a major French newspaper in which they railed at the ‘unfair’ system that benefits ‘hired foreigners’ (that’s my colleagues and me, hello) chasing publications in “Anglo-Saxon journals” [2] rather than good, proper French intellectuals who publish important books in French instead.

The misunderstood genius also frequently turns up at conferences to attend Meet-the-editors sessions to complain loudly that their latest, magnificent manuscript was rejected – despite being better than everything the journal published… In short, the misunderstood genius is as convinced about their own greatness as they are about the widespread corruption, even conspiracy, in the entire publication system. If I can offer one piece of advice: Never become the ‘misunderstood genius’.

The middle-ground

After this overview of the two opposite positions, let’s move towards the middle ground where I believe most of us can be found. The very broad spectrum I call the middle ground encompasses those who readily acknowledge problems and shortcomings in relying on journal rankings, but also find them beneficial in certain ways. Before outlining why I believe there are certain important benefits, let’s look at the shortcomings.

Problems with (over)-reliance on journal rankings

1. The ranking of a journal should not be confused with the quality of an article
This is perhaps the most important point of all. All journal rankings present the average of a journal over at least a year, usually longer; they do not look at individual articles. Quite frequently, we see important research published in journals outside the top tier. Looking at service research, I think we all know that some seminal and influential service articles have been published in journals that fall into the mid-range of good journals, but are not top ranked (i.e. not 4* or 4 in AJG, not included on the FT list). I dare say it is not uncommon to see very good research that goes on to have a large impact being published in journals outside the ‘big eight’ I mentioned above. I don’t know if you’ve seen the film ‘Gattaca’ (1997), which presents a bleak future society in which everyone is judged on computed predictions of what they could achieve rather than on what they actually do achieve. If we start to think that every article in a top journal is automatically great and every article outside a top journal is less good, then we’ve fallen prey to the Gattaca syndrome. So if you only remember one thing from this piece, please remember this: journal rankings tell us something about the quality of the average journal article over a longer period; they do not tell us the quality of each article published in the journal.

2. No journal ranking is completely immune to some manipulation
I already discussed last month that many journal rankings are decided by rather small committees. For the most respected rankings, such as AJG and FT, I believe the committees to be both serious and impartial in their evaluations. Even so, and as already discussed, there will of course be journals that the committee members know better. Even for a ranking not decided directly by committees, such as the annual Journal Citation Reports of impact factors, it is certainly possible to exert some influence, for example through the types of articles a journal prioritizes. Spoiler alert: articles on research methodology tend to boost impact factors, as do articles laying out research priorities. These are two types of articles that many others will cite, to justify methodological choices and/or research approaches. There is of course nothing wrong in this; quite the contrary, these articles tend to be helpful, and it makes sense to publish them. A somewhat less acceptable practice is asking submitting authors to cite more previous articles from the journal. In short: I do believe all leading journal rankings to be good, broadly speaking, but there’s no denying that no ranking method is entirely immune to influence.

3. Journal rankings tend to be exclusive
This is one of those arguments that tend to be exaggerated at times, with the exaggerations overstating the problem (see ‘the misunderstood genius’), but it’s hard to deny there is some truth to the claim that it’s harder for young and inexperienced researchers to make it into top journals. Here, I want to be clear: contrary to what the misunderstood genius might believe, this is not because of any conspiracy to publish ‘insiders’, certainly not. However, I think we can all agree that crafting a manuscript and doing revisions is something we improve at with practice.

My own case is an example: as a young PhD student in 2008, I received ‘major revisions’ in two very good journals for some of my first submissions. My revised manuscripts were subsequently rejected. But… my revisions were very poor! I had never revised before, I didn’t quite know how to do it, and I made several rookie mistakes. With time, I became better both at crafting manuscripts and at revising them, and started getting accepts as well (while still continuing to get rejects, obviously ;-)). If you’ll forgive me for referring to my own research: in an article published with Carlos Diaz Ruiz and Lisa Peñaloza, we discussed “exclusivity by practice” rather than the traditional ‘exclusivity by price’. We found that salsa parties were exclusive, not because attending them was expensive but because good salsa dancing skills were needed to enjoy them. In this sense (and not in many other senses…), top journals are like salsa parties. Yes, they are exclusive – but not because of some dark conspiracy to keep anyone out, but precisely because good academic writing is also a skill that requires practice.

In the third part next month, we’ll discuss how the rankings are used and the “alternatives”.

[1] For the record, by ‘big eight’ in marketing we meant (in alphabetical order): International Journal of Research in Marketing, Journal of the Academy of Marketing Science, Journal of Consumer Research, Journal of Marketing, Journal of Marketing Research, Journal of Retailing, Journal of Service Research, and Marketing Science.
[2] Alas, ‘Anglo-Saxon’ here does not mean a journal written in Old English on parchment. For reasons I’ve never understood, it is common French practice to refer to the Anglosphere as ‘Anglo-Saxon’.


Jonas Holmqvist
Associate Professor of Marketing
Kedge Business School









Image: Luís Eusébio