Dead Metrics

Scholarly metrics come, but some never go — even when they should. Here are three examples of dead metrics: scholarly metrics that are still around yet are infrequently used and hold little or no practical value for researchers or librarians.

[Image: SCImago Journal Rank. Caption: “Meaningless metric.”]

1. The SCImago Journal Rank (SJR)

 

When was the last time you looked at a journal’s SJR ranking or made a decision based on it? Never? Me neither. While this metric was designed with the best of intentions and the latest bibliometric knowledge, it has never caught on.

Predatory and low-quality open-access journals included in SJR love to display the little box the service automatically generates from a supplied link. They exploit this metric and others to make themselves look more legitimate than they really are and to attract manuscripts, which they quickly accept and then invoice the authors for.

 

[Image: An interpretation of the Eigenfactor logo.]

2. The Eigenfactor

 

The Eigenfactor is a perfectly over-engineered metric, so perfect that it has outgrown any practical use by everyday researchers and librarians. I think it is a metric used chiefly by bibliometricians (and mainly just to generate publications). I know of no academic library that uses this metric.

Journals frequently report their Impact Factors on their websites and in their promotional materials, but have you ever seen a journal report its Eigenfactor? I have not.

The Eigenfactor is a theoretical metric that has never found a practical purpose or an audience. It is floundering.

 

[Image: Web of Science. Caption: “Lots of clicking for a lower value.”]

3. The Web of Science h-index

Have you ever used Web of Science to calculate your h-index? If you did, you’re one of the very few, and you probably noticed that it took a lot of clicks to reach the number, the process was unintuitive, and the value was lower than expected — much lower than the value Google Scholar calculates for the same metric.

For example, Web of Science calculates my h-index as 4, while Google Scholar calculates it as 14. When two companies calculate and supply the same metric, naturally most researchers will report the higher of the two.

This is because Google Scholar is much less selective than Web of Science, so it records more citations. The h-index is based on the number of papers an author has published and the citations those papers have received — it is the largest number h such that h papers have each been cited at least h times — so Google Scholar’s broader coverage makes its calculation significantly higher than Web of Science’s.
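
To make this concrete, here is a minimal sketch, in Python, of how an h-index is computed from a list of per-paper citation counts. The counts below are hypothetical and purely illustrative; the point is simply that the same publication list yields a smaller h-index when a database indexes fewer of the citations to it.

```python
def h_index(citation_counts):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for the same seven papers, as seen by two databases:
selective_index_counts = [10, 7, 5, 4, 2, 1, 0]  # a selective index records fewer citations
broad_index_counts = [25, 18, 14, 9, 7, 6, 5]    # a broader index records more citations

print(h_index(selective_index_counts))  # 4
print(h_index(broad_index_counts))      # 6
```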

(Also, will citation data from Thomson Reuters’ new Emerging Sources Citation Index be used to calculate researchers’ h-indexes? If so, then everyone can expect his or her Thomson Reuters h-index to go up.)

Discussion and Conclusion

Some metrics, including some of those described here, were not created to fill a need but rather to compete with the Impact Factor. The result is complex metrics, like the Eigenfactor, that abandon the simplicity of the Impact Factor and function essentially as the scholarly-metrics version of a Rube Goldberg machine.
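
For contrast with the Eigenfactor’s complexity, here is the simple arithmetic behind the classic two-year Impact Factor, sketched in Python. The figures are hypothetical; the formula is the standard one: citations received in a year to the items a journal published in the previous two years, divided by the number of citable items it published in those two years.

```python
# Hypothetical figures for a journal's 2015 Impact Factor:
citations_in_2015_to_2013_2014_items = 600  # citations received in 2015 to items published in 2013-2014
citable_items_2013_2014 = 200               # articles and reviews published in 2013-2014

impact_factor_2015 = citations_in_2015_to_2013_2014_items / citable_items_2013_2014
print(impact_factor_2015)  # 3.0
```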

Based on my experience serving on tenure committees, the only metrics I have seen reported in candidates’ dossiers have been:

  1. The Impact Factor
  2. The raw number of citations their publications had received, either drawn from Google Scholar or calculated manually using various databases.

In their dossiers, I observed that tenure candidates would list their publications, and following each reference, they appended the journal’s Impact Factor.

For raw citations, they usually stated the total number of citations as a single figure, something like “According to Google Scholar, my articles have been cited 250 times.”

[I do understand and have documented that Google Scholar is easily gamed.]

I acknowledge the weaknesses of the Impact Factor, and there are even a few journals on my lists that have legitimate Impact Factors, so it’s clearly not a measure of quality in all or even most cases.

But at least the Impact Factor is a living metric, understood by most, bearing a practical, demonstrated, and enduring value. It’s not a dead metric.

38 Responses to Dead Metrics

  1. Steven N. Blair says:

    I don’t understand why you say the Web of Science h-Index is a dead metric. I find that it is used a lot, and frankly I think it is a good metric.

  2. Susan Ariew says:

    A lot of folks at my school use SCImago because it is inclusive, offering rankings of humanities journals not covered by the JCR and other sources. The Q1-Q4 ratings are used in promotion and tenure decisions here. –SAA

  3. I much appreciated the issue raised by Jeffrey about the h-index provided by Web of Science. Indeed, to me, it is utterly nonsensical!! In my field, across gravitational physics, astronomy, and astrophysics, the well-known and authoritative SAO/NASA ADS database yields (for free!!) not only the h-index but also a full array of other indexes which, on average, better capture the scientific value of an individual. Suffice it to say that it also yields the TORI index, which accounts for self-citations. And the coverage of SAO/NASA ADS seems much broader than Web of Science’s, since it often returns much higher values of the h-index. In my case, while Web of Science gives me a relatively meager h = 24 or so, ADS returns h = 35. Nearly the same occurs for many other researchers in my field. So why should one pay for Web of Science to get a rather incomplete and inaccurate product?

  4. regcheck says:

    If the impact factor is “clearly not a measure of quality in all or even most cases,” then doesn’t it deserve to be dead? Do any of the alternatives you describe as dead correlate better with quality?

    • BGranville says:

      Just because the Impact Factor often fails, doesn’t mean the alternatives are any better. All have their flaws, so what difference does it make which metric is used? Might as well stick with the one that has been in common usage for the longest, which is the Impact Factor, if there is no convincing alternative.

      • regcheck says:

        This does not seem to be a scientific approach to the problem. Why give the incumbent a presumption of validity when there appears to be ample evidence that it has poor quality? Research fairly comparing the alternatives makes more sense.

  5. coppenheim says:

    Sensible comments here from Jeffrey. It is worth noting that the h-index IS used quite a bit – for example on individuals’ CVs – and I know some tenure/appointments panels have taken it seriously (alas). Another reason why Google Scholar gives inflated h-index scores is that the same article or output often appears more than once in a Google Scholar search output.

    • Probably you are right. There is a scientist active in a subfield of mine whose Google Scholar h-index is relatively high (around 30), in contrast to both ADS and Web of Science, whose h-index values for him are around 20 or so.

  6. By the way, I would find it useful to broaden this topic to other metrics as well. The h-index alone is insufficient to evaluate a researcher. Why not spread the use of the TORI and RIQ indexes, which discard the bias due to self-citations?

  7. tekija says:

    In my field – medical sciences – the WoS h-index is accurate and also what tenure and promotion committees here require. A technical problem is that it is fairly often tedious to isolate an investigator’s work from that of a colleague with a matching name/initials. Google Scholar gives me ten points more for an h-index – but when you dig into where it comes from, it has mostly entertainment value compared with the “hard fact” WoS version. As noted above, every field must choose its best basis for calculating h (for what it is worth). Among other things, the Scholar version is boosted by citations from predatory journals…

  8. jeffollerton says:

    I agree with Steven Blair (above) – the WoS h-index (whilst not perfect) is useful and certainly not “dead”. For those fields that are well covered by WoS it provides a useful way of comparing individuals. Yes there are things which are not indexed but that’s true for all individuals, making the WoS h-index a conservative, but level, playing field. Google Scholar is much less conservative and has errors as others have noted. An individual’s “true” h-index is going to be mid-way between those two.

    However another point worth stressing is that it’s not the metrics themselves that are the issue, it’s how they are used and interpreted. By coincidence a colleague in my university has this week written a post on this very topic that you may find of interest:

    http://researchsupporthub.northampton.ac.uk/2015/12/09/the-metric-tide-are-you-using-bibliometrics-responsibly/

  9. Ahmed says:

    ResearchGate provides metrics for researchers and gives you the exact number of citations for your work.
    I prefer ResearchGate for personal impact metrics.

  10. Alex SL says:

    There is a risk here of conflation on two fronts:

    First, conflating the question of whether the h-index is a good metric with the question of whether the Web of Science h-index is impractical and unattractively low.

    Second, conflating the question of whether a metric deserves to be dead because nobody likes it with the question of whether a metric deserves to be dead because it doesn’t make sense.

    At least as far as the h-index is concerned, my feeling is that it depends on the field of research. It is quite popular among some kinds of researchers. If you combine that with the fear that Google might over-estimate, the ease with which the metric is calculated in ResearcherID, and the belief that it measures something worthwhile (you need consistently well-cited output, as opposed to one stroke of luck, to have a decent h-index), it looks quite viable.

    My personal opinion is still that committees should not rely on any of these metrics to evaluate candidates. There is a lot of very valuable scientific work that does not get cited much even as it gets used a lot, starting with botanical field guides and textbooks.

  11. Darren says:

    Yes, Google Scholar is almost certainly LESS reliable than WoS because it is less selective (arXiv articles, etc., can be counted when they are not yet in the actual reviewed literature, for example). It’s not that hard to work it out yourself and compare.

  12. Cameron Barnes says:

    Don’t forget, Google Scholar also indexes many more legitimate peer-reviewed journals than WoS, especially those in languages other than English. If your Google Scholar h-index is higher than your WoS h-index, the explanation may be this simple. However, I am highly sceptical of the h-index, as evidence to show that it actually measures research impact in any meaningful way is lacking.

  13. Ian Scott says:

    I have read and respected this website, even using it for talks about productivity in publications and how to publish effectively. This comment is not designed to “go against” the author’s view of metrics and decision-making. However, there is an erroneous line of thinking in this article, which frankly seems like a plug for Thomson Reuters, given its lack of references. What I’m referring to is the “decision making” part of your article, because SCImago is much more than a metric. Of course you don’t like the copy-paste “impact factor-esque” logo that predatory journals can put on their sites; however, many respected publishing groups use SCImago as an alternate metric on their webpages. Elsevier, for example.
    Secondly, SCImago for me is a tool to help match journals to subject areas. SCImago draws on the Scopus database, which includes Web of Science and other publications that many universities consider in their quality rankings. (Latin American university quality rankings can even be based on SciELO articles.)
    SCImago is not designed to “compete” with the Impact Factor. It is designed as a comparison between the two, not weighting self-citation as heavily as external citation. SCImago is also based on the Google PageRank algorithm, which is an extremely important consideration for authors when planning article classification.
    Please consider this as a clarification rather than an argument against your point of view, given that I have known and respected it for quite some time. Thank you for your work, and I hope that this comment can be subject to peer review by the users of this website!

  14. Derek says:

    You asked: “Journals frequently report their Impact Factors on their websites and in their promotional materials, but have you ever seen a journal report its Eigenfactor? I have not.”
    Here are two that report their Eigenfactor ranking: http://www.weai.org/journals.html
    Elsevier journals sometimes report them: http://journalinsights.elsevier.com/journals/0144-8188/
    And http://journalinsights.elsevier.com/journals/0047-2727/
    Whether potential contributors and readers pay attention to them is not something I know.

  15. At least in my case, Google Scholar returns an h-index of 34, close to NASA/ADS (35). Nonetheless, I’ve noticed that Google Scholar seems to fail to capture several citations that I have in ADS.

  16. Barry says:

    When applying for research grants from the Taiwan Ministry of Science and Technology, the Ministry takes into consideration a researcher’s previous publications that are indexed in the following: SCI, EI, SSCI, AHCI, TSSCI (a Taiwanese index of social sciences journals – articles mostly in Chinese), SCOPUS, THCI CORE (a Taiwanese index of humanities journals – articles mostly in Chinese), and SCIE. The Ministry takes into consideration the IF for journals that have one. For journals indexed in databases that calculate an IF and for those that don’t, they also consider the journal’s ranking within its subject categories.

    Researchers are asked to provide the number of citations each of their articles has received. Of course, an article that contains something controversial could receive a large number of citations; citation counts do not necessarily measure an article’s merit.

    Some arts and humanities journals are only indexed in SCOPUS.

    Personally, I publish in SSCI-, AHCI-, and SCOPUS-indexed journals. These can all earn me “research credit”. I aim to publish in journals whose scope fits my research…sometimes this just happens to be a SCOPUS-indexed journal. I have even had the experience of targeting a SCOPUS-indexed journal that eventually began being indexed in SSCI/AHCI. It does take time before a journal can get accepted into one of the Thomson Reuters databases.

    At least the humanities and arts researchers based in Taiwan are using the SJR rankings. I use the rankings in my research grant proposals submitted to the Taiwan Ministry of Science and Technology, and I list the SJR value after my publications on my CV. In addition, I indicate which of my articles are published in journals indexed in SSCI and AHCI, along with their IFs.

    I don’t like it that predatory publishers are getting indexed in SCOPUS and thus have SJR values, and I understand how the graphic can easily be put on predatory publishers’ journal webpages; however, I don’t think the exploitation of the SJR graphic is enough to consider it a dead metric.

  18. We are using SNIP as the preferred impact factor at my school, as it is comparable across disciplines. We are now moving towards SJR as it has become more reliable, since a recursive index is a better measure of importance. Also, it is available for free on the web from journalmetrics, whereas IFs require a subscription to WoS, logging in through the library, etc.

    My WoS h-index is 24, Scopus 26, and GS 39. The GS one is surely inflated a bit by noise in the GS database. I’m an environmental economist.

    I really don’t know why you say these are dead metrics just because librarians don’t use them that much.

  19. Ahmad Hassanat says:

    I personally do not believe in any metric; they all have their pros and cons, none is perfect, and none tells the truth about researchers.
    It is one paper that I am seeking: the one that will give me universal recognition in my field, and perhaps a prestigious prize.

  20. Ghazal says:

    Dear Mr Beall

    You seem to be unfamiliar with the concept of the SJR. The SCImago Journal Rank (SJR) is a prestige metric based on the idea that ‘all citations are not created equal’: citations are weighted depending on the rank of the citing journal. This is far better than the traditional impact factor (IF) reported by ISI.

  21. RL says:

    I agree with you, Mr. Beall!
    All these metrics, including the impact factor, should be dead, both because they do more harm than good to science and because they are unreliable.

    Quotes:
    1) “In their dossiers, I observed that tenure candidates would list their publications, and following each reference, they appended the journal’s Impact Factor.”

    2) “Journals frequently report their Impact Factors on their websites and in their promotional materials”

    These are some of the ridiculous symptoms of impact-factor mania.
    The fact that journals report their impact factors on their websites for promotional purposes does not give the impact factor reliable value or legitimacy. Some journals look for money and sponsorship, so they display their IF on their pages.

    Take a look at:
    http://mbio.asm.org/content/5/2/e00064-14.full

    3) “For raw citations, they usually stated the total number of citations as a single figure, something like “According to Google Scholar, my articles have been cited 250 times.””

    This is also another symptom of the unreliable system of metrics:

    Have a look at:
    https://blogs.ch.cam.ac.uk/pmr/2015/12/07/article-level-metrics-how-reliable-are-they-you-have-to-read-the-paper/

    http://www.ease.org.uk/sites/default/files/diversity_of_impact_factors_-_h-index-_armen_yuri_gasparyan-2012.pdf

    http://www.tandfonline.com/doi/abs/10.1080/08989621.2015.1127763

  22. wimcrusio says:

    Any measure that tries to reduce someone’s research impact to a single number is bound to be a tremendous oversimplification. As an aside, in France the h-index is alive and well and often asked for on, for example, grant application forms…

  23. jacek@amu.edu.pl says:

    I can respond only this way, as I am blind and must use special software. I cannot agree more; indeed, the author’s criticisms of certain indexes should be strengthened even further. What we are dealing with here is a kind of quantophrenia, based on the misguided myth that quality can and ought to be reduced to quantity. The latter is most useful from a bureaucratic standpoint, to be sure, but the managers of science should not be conflated with science itself, and their evil ways should be condemned and discarded once and for all. Is Marx two times better than Weber, or the other way round? Those are utterly absurd questions, and yet when their counterparts are applied to you and me, we do not protest and simply take it for granted. prof. [Ordinary professor, the highest academic rank in most European countries] Jacek Tittenbrun

  24. Many journals, letters, etc., in the IEEE suite report the IF, Eigenfactor, and Article Influence Score on their individual websites: http://www.ieee.org/publications_standards/publications/periodicals/journals_magazines.html

    One of the better journals in my field, the Journal of Adolescent Health, reports an Eigenfactor: http://www.jahonline.org/

    Like Derek (above), I do not know whether potential authors or readers are influenced by the Eigenfactor.

  26. Paula says:

    I am curious about which ‘predatory’ journals are in SJR as I was under the impression that SCImago rankings are based on Scopus data. I know, from first hand experience, how thorough the Scopus vetting process is when you apply to have a journal included in Scopus. At our institution, researchers are advised to aim for Q1 and Q2 journals when choosing where to publish so I would hardly call it a dead metric. However, I would agree that using the SJR (or Impact Factor) to assess a specific article makes no sense.

  27. jacek@amu.edu.pl says:

    Dear Sir,

    the enclosed list of journals may be of some interest to you and the readers of your OA bulletin, I think.

    with kind regards,

    prof Jacek Tittenbrun
