
Guest article by Jonas Holmqvist.
Journal rankings, much like student evaluations, are one of those things that academics either love to hate or hate to love, and tend to debate a lot. Love them or hate them, they probably impact your career in multiple ways. In this two-part opinion piece, the first part attempts to provide an overview of and guide to understanding journal rankings. The second part moves into the more contentious area of discussing the upsides and downsides of journal rankings. Before going on, I should point out that I’ll mainly take a European approach, especially when talking about national rankings. Not because it is better (or worse), just because it is what I know best.
We’ll look at three key journal rankings used in multiple countries and education systems: impact factors, the Academic Journal Guide, and the Financial Times list. I’ll also say a few words about rankings that are more national or regional in scope.
Impact Factors
The annual Journal Citation Reports (JCR, not to be confused with Journal of Consumer Research) is intended as a report, not as a ranking. This fact notwithstanding, it is probably the most influential de facto ranking, as it is the only one with a truly global reach. For this reason, I assume we all already know it – but for any reader new to academia, the Journal Citation Reports is exactly what the name says: an annual report, usually published in June, showing how often the average article in an academic journal on the list has been cited in other articles. Before moving on, I should mention that the JCR, which is compiled by Clarivate, is not the only report of journal citation metrics. Others do exist – notably CiteScore, produced by Elsevier using Scopus data – but I dare say the JCR is by far the most impactful (pardon the pun).
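For readers new to the metric, the standard two-year impact factor is calculated roughly as follows (this is the commonly used definition rather than anything spelled out in this post, so take it as an illustrative sketch):

\[
\text{IF}_{Y} = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

The five-year impact factor works the same way, simply extending the window to the five preceding years.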
Impact factors represent the most neutral but also the most blunt journal ranking. Neutral, because it is not decided upon by any experts but simply reflects how often articles in a journal are cited. Blunt, for the exact same reason. This means that, unlike other rankings, the risk of a small group of people prioritizing some journal(s) when deciding on the ranking is virtually eliminated. This is an advantage, but yes, it remains a blunt instrument. If a journal that only publishes a few issues per year features an article that becomes very heavily cited, that one article can lift the journal’s impact factor sharply for the next few years. I dare say anyone who has followed marketing journals for over a decade has seen some constants (JM is always doing well), some consistent and steady rises (JSR is a prime example), some consistent and steady declines (there’s no denying that Marketing Science has lost a lot of ground over the past decade) – but also some ‘up-and-down’ examples, journals that performed very well for a few years on the back of a couple of highly influential articles but then came down again. For this reason, it might make sense to look more at the five-year impact factor. Of course it is influenced by the same tendencies, but the longer the time period, the less sharp the fluctuations. For impact factors, it’s also important to resist the temptation to compare them across disciplines; an impact factor that would be mediocre in one field can be very good in a different one.
The Academic Journal Guide (AJG)
The Academic Journal Guide, previously known as the ABS list, is a British list that ranks journals from 4* (the best) down to 1. According to its official statement, the AJG is not based on metrics but instead “informed” by metrics. For each subfield, there is a scientific committee deciding on the rankings; these committees are very small, consisting of 1-4 experts per area. Their recommendations are then decided on by the academic committee, consisting of eight persons. This procedure implies two things: (1) there is no guarantee that the recommendations by the two members of the marketing scientific committee are validated, and (2) the number of people deciding on the AJG rankings is very low.
Looking at marketing in general, the AJG is, in my view, rather solid. As expected, top journals such as JM, JCR, JAMS, and JMR are ranked 4*, while category 4 is spectacularly small and only features two journals: JR and IJRM. In fact, JSR is also a category 4 journal but, rather confusingly, it is included not under marketing but in the ‘left-over’ field called Sector. Category 3 is much broader, and includes journals such as JBR, EJM, IMM and many more.
It has to be said that the AJG is not particularly favorable for service research. While Journal of Service Research has a (well-deserved) strong ranking of 4, there is no service journal in category 3. Journal of Service Management and Journal of Services Marketing both come in at category 2, with Journal of Service Theory and Practice in category 1. Compared with other marketing journals, the service journals all seem to be a category below where they should be. Part of the problem may be that many service journals are not included in the marketing category, but instead in other categories where the members of the committees may be less familiar with them.
The situation of service journals contrasts quite sharply with that of tourism journals. There are of course outstanding tourism journals whose well-deserved good rankings correspond to their standing in other rankings. Having said that, the rankings of some tourism journals in the AJG are certainly outliers compared to other rankings. I don’t think it’s particularly controversial to state that service research is a bit disadvantaged and tourism research a bit advantaged by the current AJG ranking.
The Financial Times List (FT)
The Financial Times list is a list of the 50 top journals that the FT takes into consideration when making its annual rankings of business schools. For this reason alone, it is a ranking that many deans and administrators tend to care about quite strongly, as a publication in a journal on the FT list directly benefits the business school/university in future rankings (which in turn influence future student applications). The FT list is much simpler than many other rankings in that it is a clear “in-or-out” ranking. There are no categories, and the 50 journals are presented in alphabetical order.
Given that the FT list covers all sectors of business and is limited to 50 journals, it is very selective. It goes without saying that all journals on the list are very good journals publishing leading research. The marketing journals currently included are JAMS, JCP, JCR, JM, JMR, and MS. In other words, there are journals open to service research on the FT list, although no journal dedicated to service research. As in the case of the AJG, there is a committee making decisions on the journals. Prior to this, the committee asks researchers at some 200 business schools for recommendations, both of journals to drop from the list and of journals to include. One could of course always ask if the current 50 journals are the right ones – then again, the fact that the committee always invites suggestions both for journals to exclude and journals to include means that the list is a rather good reflection of the current consensus. Last but not least, it has to be said that there is often quite a bit of campaigning to include journals on the FT list, given its prestige.
Overall, the FT list could perhaps best be summed up by saying that ‘All journals on the FT list are great, but not all great journals are on the FT list’.
National rankings
In addition to the rankings above, there exists a myriad of national rankings. To be fair, the AJG is also a national ranking, although one that is so widely used in many other countries that it has achieved a more prominent status. There is not much for me to say about national rankings – they can of course have a strong influence on researchers in those countries. Quite often, there tends to be a bit of national bias, although this is sometimes exaggerated. For example, I’ve often heard academic friends ask if it’s true that there are French marketing journals ranked as high as JM or JCR in the French national ranking. That’s a myth, not true at all. It is true that the French national rankings include a few French marketing journals that don’t have an impact factor, but none of these journals feature in either of the top two categories, and only one appears in the third category. So a slight national bias, probably – not unlike the AJG, in which some traditionally British journals also tend to be ranked a little bit better than elsewhere. By and large, I don’t see national bias as a major issue in any of the national rankings I know; the main issue is instead that they are rarely used outside the country.
How to relate to the different rankings
Needless to say, there is a pretty good correlation between these different journal rankings. For example, Journal of Marketing, Journal of Consumer Research, and Journal of Marketing Research always tend to be in the top category of each ranking. That is not to say that there aren’t some discrepancies between rankings, and sometimes these can be both confusing and a bit hard to navigate.
To exemplify, I’ll use two very good journals: Journal of Service Research and Journal of Business Ethics. I’m sure neither journal needs any introduction; both are renowned for publishing excellent research. In the current rankings, however, they differ quite a lot. Looking first at impact factors, both have good impact factors but JSR is clearly ahead. Next for the AJG, where JSR is in category 4 alongside great journals such as Journal of Retailing and International Journal of Research in Marketing. Journal of Business Ethics is one category below, in category 3; also a good category, featuring other good journals such as Journal of Business Research and Industrial Marketing Management. If we turn to the Financial Times list, things are turned around. Now it is Journal of Business Ethics that is better ranked, joining the likes of JM, JCR, JAMS, and JMR on the FT list while JSR is not (yet) included. So is JSR or JBE better ranked? The question is of course impossible to answer, as is the case for many other journals. It will depend entirely on which ranking our departments prefer. In a department going by impact factor or by the AJG, JSR will have the edge, while JBE comes out ahead in departments looking more to the FT list. As academics tend to be rather mobile, it is not at all unusual for a researcher to move from a system favoring one ranking to a system favoring another, and suddenly find their publication list devalued next time they’re up for promotion.
Personally, I never look at rankings when submitting. I don’t think it’s a constructive approach, and I recommend considering the fit between the journal and the manuscript instead. Having said that, I’m lucky enough not to be under any strong administrative pressure to publish in selected journals (it is beneficial, yes, but not required). I know this is a privilege not all colleagues in academia have. Young scholars in particular need to consider their CV for future recruitment, and many scholars’ promotions depend on where they have published. This is where it gets tricky, at least in Europe. At the moment, there seems to be (very roughly) a three-way split between systems favoring the AJG, systems favoring the FT list, and systems favoring some national ranking – often in combination. For the FT list, the key question is easy: is the journal on the list? For the AJG, my impression is that most academic institutions using it largely tend to consider only journals in categories 4* and 4 when promoting and hiring faculty. Again, I don’t think the rankings should decide to which journals we submit our research – but there is no denying that publishing in journals on the FT list and/or highly ranked in the AJG often benefits the authors.
To conclude this first part, the current situation is that service journals tend to do very well when we look at impact factors – both JSR and JOSM have outstanding impact factors – but less well in the rankings decided on by committees. So what does this tell us? As a service researcher, I may be a bit biased, but I believe it reflects that service research is doing well. The impact factors are updated every year and reflect the past few years, unlike the rankings that are updated less frequently and where history plays a larger part. Put differently, the rankings tend to lag behind reality a bit, as journals need to ‘prove’ their quality before being upgraded. As service journals continue to show excellent impact factors, I want to believe that future updates of the FT list, the AJG, and other rankings will reflect this quality.
In the second part (read here), we’ll discuss journal rankings critically, focusing on both positive and negative aspects of ranking journals.

Jonas Holmqvist
Associate Professor of Marketing
Kedge Business School

