Article-Level Metrics: An Ill-Conceived and Meretricious Idea


[Image caption: Except twelve of the tweets were bought and paid for.]

Many are excited about innovative measures that purport to quantify scholarly impact at a more granular level. Called article-level metrics, or ALMs, these measures depart from time-honored computations of scholarly influence such as the journal impact factor. Instead, they rely on data drawn from social media and other generally non-scientific venues.

As someone who studies predatory open-access scholarly publishers, I can promise you that any system designed to measure impact at the article level will be gamed, rendering the metrics useless and invalid. For instance, there are already companies that sell Facebook likes — an example is the firm called Get Likes. Predatory publishers are partly successful because of complicit authors, and these same authors will pollute popular metrics just like predatory publishers have poisoned scholarly publishing.

Numerical values like page views will be shamelessly gamed. Workers in low-wage countries will be hired to reload web pages thousands of times, deceitfully inflating the page views of a particular article. Previously unknown researchers will suddenly boast more Twitter followers than Neil deGrasse Tyson because they will pay companies to add bogus followers to their social media accounts, and these bogus followers will like and share their articles, actions that will be counted in the metrics.

The general public lacks the credentials needed to judge or influence the impact of scientific work, and any metric that relies even a little on public input will prove invalid. Article-level metrics will likely grant high scores to works on climate change skepticism and intelligent design, groundlessly raising pseudo-science to the level of science, at least in terms of measured impact. There are already numerous questionable publishers willing to publish articles on such topics. Web-based polls are already gamed in this way, with people emailing all their friends and asking them to vote a certain way.

Moreover, popularizing article-level metrics means articles about Bigfoot and astrology will likely register a greater impact than articles about curing cancer or discovering the nature of dark matter, for many more people are interested in popular topics than in scientific ones.

In late 2012, a group of publishers organized an attack on me and my work. They effectively used various internet tricks, such as email spoofing. They created hundreds of bogus blogs to falsely accuse me of fraud. The high number of fake blogs they created multiplied the impact of their attacks, and many believed the lies they spread. Article-level metrics will be ruined by this same type of abuse. Indeed, I envision articles in predatory journals miraculously getting very high altmetrics values.

[Photo: Jason Priem]

Nice idea, but please be realistic.

As a way to measure the impact of scientific work, the journal impact factor still has great value. Indeed, the true impact of science is measured by its influence on subsequent scholarship, not on how many times it gets mentioned on Entertainment Tonight or how many Facebook likes it gets in the Maldives.

It’s quite possible that some are supporting article-level metrics just because they want to undermine Thomson Reuters, the publisher of Journal Citation Reports, the product that includes impact factor information. Many also blindly support anything that’s new, regardless of how legitimate or enduring it may or may not be.

Many new bogus impact factors have been introduced lately, including the Global Impact Factor (GIF) and the Journal Impact Factor (JIF). More will likely appear. Bogus article-level metric products will certainly arise as well. Without rigorous vetting and quality control, no new scientific impact measure will be successful or valid.

Recently, I have noted the appearance of what I call “article promotion companies.” These are discipline-specific websites that spam the authors of scholarly articles with offers to promote their articles through the promotion companies’ websites. An example is the company Educational Researches. They generally charge $35 to promote a single article. Many people email me asking about the ethics of these services. Certainly many more such services will appear if article-level metrics catch on.

Article-level metrics reflect a naïve view of the scholarly publishing world. The gold open-access model has introduced much corruption into the process of scholarly communication, so we should learn from this and avoid any system that is prone to gaming, corruption, and lack of transparency, such as article-level metrics.

48 Responses to Article-Level Metrics: An Ill-Conceived and Meretricious Idea

  1. Stephen B says:

    Jeffrey: I love your blog and think you are doing a great service to the scientific community. One question which often comes to my mind when I read your articles, though, is whether traditional publishers and academic departments could be accused of the same thing. For example, what is the difference between paying one of these companies to advertise your journal and paying Google to advertise your journal? (As I’m sure several journals do.) Or what’s the difference between paying one of these companies to advertise your department’s research and employing a director of communications whose job is to promote the department’s research? (As our department does.) I agree with you that there is a difference, but in each case, money is spent on increasing “article impact”. My view is that one is ethical and the other isn’t, but it’s far from clear cut, and there are likely to be many ethically ambiguous cases.

    PS. I just learnt a new word from the article title – thanks!

  2. The obvious aim of this deluge of new metrics is to exacerbate the vanity of researchers, publishers and institutions. Dr X will claim he has a paper cited 500 times (and hide his long list of never-cited publications), journal Y will promote a bright impact factor close to that of Nature (but computed via a powerful secret algorithm, patent pending; don’t ask for details), and University Z will publicize the excellent 3rd place obtained in the last release of the worldwide ranking W (actually, W is a sub-sub-ranking of a very confidential study, which compares a universe of five universities; of course, ranking W is financially supported by University Z).

  3. […] “Many are excited about innovative measures that purport to quantify scholarly impact at a more granular level. Called article-level metrics or ALMs, these measures depart from time-honored computations of scholarly influence such as the journal impact factor …” (more) […]

  4. mac says:

    Jeffrey, while I usually enjoy your posts here, I must disagree with at least one key item in this one. I don’t know if you’re unfamiliar with the Impact Factor’s ins and outs, or if you’re choosing to ignore them, but the biggest reason that these new metrics are being developed is because the IF is (and has proven to be) manipulated by respected journals for decades. The algorithm is not hard to manipulate, and journals have done it over and over again; often even after being called out for having done it.

    The other big problem with the IF is that it is always at least a year behind. In today’s data-currency culture, a metric that’s a year old is largely irrelevant.

    I’m not saying that the newer metrics are good. I’m just saying that, despite too many academic Deans using IF as a measure of publishing success and tenure allotment, the IF is not in any way a gold standard metric.

  5. Jurgen Ziesmann says:

    You write: “these measures depart from time-honored computations of scholarly influence such as the journal impact factor”. Of course they do … any new measure must by definition depart from the old ones, and as they are new they cannot be time-honored. And time-honored certainly does not mean “best possible ever” – journal impact factors are also nothing more than a means to generate business and cannot be trusted. In fact I am quite doubtful that the journal impact factor tells us anything about the quality of a single published article … I can take two of my own articles as a perfect example … both published in the same journal about a year apart … one cited by now nearly one hundred times, the other exactly once … but both with the same journal impact factor. Obviously one had impact and the other did not.

    Yes, the field of publishing has become messy. Yes, it is much more difficult to evaluate the quality of any given article. But change from the slow, outdated, and extremely overpriced system of journal publication – run exactly the same way as 70 years ago (just also online), a system that robs the authors of their copyright and of any profit generated from their work while overcharging the readers and limiting access to publicly funded research results – is absolutely overdue.

    Personally I think a way of publishing scientific research with non-anonymous post-publication review – a way that generates a true dialogue between the authors and readers about the published research – should be very possible to achieve. It would free us from the limitations of peer review, where some anonymous reviewers clearly abuse their power and anonymity to block publication of research that goes against their pet hypotheses and theories. Open access, open discussion, no hiding behind anonymity … sounds very attractive to me.

  6. Genaro japos says:

    Article influence is better measured by looking at actual citation counts per article, which Google Scholar readily provides. Then we click links to find who is citing the article. Then we check the h-index of the authors citing it. From this, we would know that a good article is one that is often cited by experts in the field with an h-index of 10 or higher. Open access journals enjoy very high page views and downloads, but these translate to very low citation counts. Researchers know better; open access journals have low credibility. They would rather use their university library subscription to ProQuest, Thomson, Wilson Web, among others.

    • Unfortunately, it has been proven that citation counts in Google can be gamed pretty easily.

      • Genaro japos says:

        Thank you for the info that citation counts can be gamed easily. I did not know this; please share your sources. The good side of Google is that you can click to trace who cites the article, and you gain access to the articles that cited the paper. A reader finds out whether the article is cited by grey literature or by credible sources. Traceability is an important component of metrics. However, the scientific literacy of researchers needs to be upgraded, since many still do not even know Google Scholar and other online sources, or journal and article metrics. Basically, in developing countries, print sources are still popular, as are unpublished theses and dissertations. Most journals are still in print format.

      • Genaro japos says:

        Thank you, Dr. Beall. I have posted the source on my LinkedIn and Facebook accounts. I will also send it to my researcher contacts. The info is well written. Now I know, and thank you for educating me.

  7. Ron Davis says:

    Dear Beall, congratulations!
    You are doing a fantastic job as to castigate the things is a very good and easy job. I would like to point out, that rather saying don’t do this, you should offer a solution to this so-called crisis, as i don’t believe any crisis in publication industry. This is the era of internet, people are educated and civil, they know who is right and who is wrong. If a rational author publish in a under grade journal, he would never publish again in that, off course. As far as Thomson Reuters is considered, i believe that market monopoly pushes economy into hell. So, Scopus, google metrics etc are doing good towards the development. One thing more, have you ever criticized the Thomson Reuters Money Gambling portfolio known as Reuters. You should direct your attention toward better ideas to bless the nation.

    • I never said it was a crisis, so please don’t put words in my mouth. Also, is your name really Ron Davis? How does someone with such a name write in such unidiomatic English?

      • Ron Davis says:

        Ahh. Am i here to use idioms? Off course NO. Most of the people just come here to enjoy your handmade crafts. Ask to people that what your blog portray? It clearly reflects that one publisher is doing this other is doing that, and all are doing nothing but just making market fool, and the only publisher doing good at the surface of planet is Mr.Blogger. You just need to learn the publication ethics, never do a personal attack to your viewers. After all your viewers made your blog worthy. I really feel your non-human attitude.

      • You have signed your name as “Ron Davis.” What is your real name? Why are you afraid to use your real name?

      • Ron Davis is of course not his/her name, and he/she won’t have the courage to reply. ;-)

      • I used my real name, but I could not post a comment anymore, why??

        Jeanne Adiwinata Pawitan
        alias: Galuh Sarasvati

      • Dear Mr Beall,
        Thanks that I am not blacklisted anymore. I like reading your posts, and gave comments. I did not understand why many of my comments could not be posted, though I tried many times, as I did not feel that I ever attacked you. Maybe you do not like my comments because they suggest you seem to protect established big publishers.

        However, I have a feeling that your comments are a little bit racialist. That is why (maybe) some people like to use a name that sounds English/American.

  8. Jeffrey, I find your arguments against the current implementations of article-level metrics somewhat compelling. It does seem rather silly to measure a paper’s worth by the number of page views, tweets or Facebook likes it receives.

    However, I still believe measuring impact at the article-level makes more sense than measuring it through the Impact Factor of the journal it was published in. As Jurgen Ziesmann writes above, two articles in the same journal can be received quite differently. The Impact Factor of the journal is not a viable shortcut to judge the value of a paper. I believe we should support the development of new article-level metrics that do make sense, such as a reliable citation index.

    Meretricious is a fine word, by the way, thank you for that.

  9. Mob says:

    Dear Jeffrey Beall

    I have read your recent posting against OA publishers and would like to
    express my personal opinions on your so called predatory fighting blog

    1. You seem to plan to kill all OA publishers on infancy stage. Many motivated people may wish to start a good quality OA publishing firm but may have no good experience. You plan to prevent all these people having any progress.

    2. Your most arguments are premature, just because an OA publisher publishes one or two plagiarized papers does not mean the publisher must be blacklisted. You just need to provide awareness and inform publishers on your website that scientific community does not like it and ask them to be more careful in future. Many OA publishers try to use free materials such as gmail to reduce their expenses, their website may have some broken links, they may not be familiar with more professional stuff
    like having metadata for each article to promote their job, properly.
    These issues do not mean they are unqualified to run a publishing firm.

    3. I personally receive invitation from ELSEVIER to publish new
    paper on their recently started journals and it does not mean ELSEVIER is
    spamming me, but if a new OA invites me to submit my paper, I must
    consider it predatory publisher, why?

    4. There are many OA journals published by ELSEVIER, which are low quality, “Procedia – Social and Behavioral Sciences” is just an example of it. Why don’t you ever criticize journals which are published by big publishers? Do you have any financial relationship with them?

    5. I wonder who pays your legal affairs, they must be rich enough to feed you well, Honestly, my gut feeling is telling me that you are hired by some people just to kill the entire OA publishers.

    Finally, only time will tell us more about the nature of your job, many
    OA publishers in your list will do well in my opinion and you will be
    blamed by many for having many bias judgement. Be brave and provide response instead of just removing the entire message. Provide evidence that shows you are not hired by some people whose benefit has been jeopardized by emerge of OA publishers. I admit that some of OA publishers are criminals but your list contains many OA publishers who wish to build good quality work.

    • If you are receiving invitations to submit papers to Elsevier, then that is spam.
      What is your name? Do you have any association with one of the publishers on my list?

      • Lakshmi says:

        Dear Jeffrey,
        Your answer shows your immaturity. It is not that anyone who are not supports you are having association with your publisher list. He may be a researcher/academician. People are educated and civil, they know who is right and who is wrong. If a rational author publish in a under grade journal, he would never publish again in that, off course.

      • Dear Mr. Beall,
        I share some of Mob’s opinions. I feel that you want to kill the new publishers, even before they are born.

      • When it is very clear that new publishers use deceit to lure scholars into submitting papers, then you are correct, I want to warn researchers about the corrupt publishers, and I will be resolute in my listing and warnings. There is no place for corrupt publishers in scholarly communication. New publishers must also be completely ethical and transparent.

  10. […] The problem of article level metrics: they will be […]

  11. Jeffrey,

    While I agree that measures of social media popularity and even usage can be problematic, I do think that article-level citation history is of value.

    To use the example provided by Jurgen Ziesmann above, after an article is a couple of years old, we will know whether it has 100 citations or 1 citation (or something in-between) — and that information tells us much more about the impact of the paper than the IF of the journal it was published in.

    That being said, the journal IF still has value in the evaluation of articles in the period before a given article accrues citations. While it is a lagging indicator for relative journal impact (relative being the key word here), the IF is a leading indicator for article impact. By this I mean, there is no way of knowing when an article is first published how many citations it will receive. The journal the article is published in provides the reader with an imperfect gauge of likely *averaged* impact.

    I see article-level citations as a useful lagging indicator and journal-level IF as a useful leading indicator (of article impact – the IF is a lagging indicator of journal impact). I think we would be foolish to ignore either, so long as these metrics are being used properly and cautiously. Of course, the only way to really assess the value of an article is to actually read it!

    • Thanks, Michael. These are excellent points. I hope that whatever system emerges it is able to handle data manipulation well.

    • Your distinction of lagging and leading indicators is useful. Altmetrics should provide a better leading indicator of article impact than the journal’s IF. Altmetrics are specific to the article in question, which the IF is not. Altmetrics provide evidence of immediate attention, which the IF does not. It’s more elegant to use altmetrics as a leading indicator of article impact, since altmetrics measure attention at the level of the individual article — just as citations do.

      So, an interesting question would be whether altmetrics predict citations. Folks are already asking whether altmetrics work (e.g., Thelwall, Mike, et al. “Do Altmetrics Work? Twitter and Ten Other Social Web Services.” PloS one 8.5 (2013): e64841). I’ve argued that there are many other important questions surrounding such metrics that we should address (“Developing indicators of the impact of scholarly communication is a massive technical challenge – but it’s also much simpler than that.” LSE Impact of Social Sciences blog, 12 June 2013).

      A less interesting question (to me, anyway) is whether altmetrics correlate with a journal’s IF. My suspicion would be that there is a correlation, and that articles appearing in journals with a high IF also on average get more attention as revealed by altmetrics. So, altmetrics might actually provide some support for the IF.

  12. I think that image is of Jason Priem, of AltMetrics fame. I don’t believe that Jason (and his colleague Heather) are pushing article-level metrics as such; rather, they argue that scientists should be measured by their contributions to science, not only through their articles: their data contributions, their code contributions, etc. It’s outlined in the AltMetrics Manifesto:

  13. thank you says:

    the more I read your material, the more disgusted i become. it is time to throw this back in the face of the OA advocates: don’t tell me why we MUST give our work away for free and why we MUST stop publishing in journals that work just fine and (in my field at least) cost very little money. instead, tell me what the incredible social benefit is that OA is supposed to grant, because I don’t get it. If you want to be a doctor, you have to go to medical school. If you go to medical school, you get access to journals. If you don’t want to become a doctor, why does society owe you access to doctors’ writings MORE than it owes to the people who have devoted their lives to medicine? Are non-doctors really going to start contributing droves of real medical research that doctors can’t do, and that non-doctors could not do by going to the library? And their right to do that trumps the right of academics to determine their own system of promotion, review, and assessment? Who said so? Why?

    More and more, I think OA is showing its true colors, which is not just antithetical but actually hostile to the academic enterprise and to academic freedom.

  14. Yes indeed, that is spam. Beall is correct. It has nothing to do with being a big publisher or not. I think Jeffrey Beall has clearly spelt out the objective of his blog from the onset: ‘critical analysis of scholarly open access publishing’. Except of course, we are now asking him to start evaluating closed access journals alongside, which is, in my opinion, a different endeavour. Any other scientist/librarian can pick up the task of evaluating closed access journals if he so desires. I strongly believe every endeavour should have a roadmap, a clear-cut objective, and that is what Beall’s blog has achieved. If the man wants to start analysing closed access journals as well, it is for him to decide. After all, none of us conceived this whole idea for him in the first place.

  15. This is a big topic and one post cannot cover it all (as we see in these interesting comments), but the basic idea of measuring “impact” by the interest of others is sound. I’ve proposed a *difficult* to game method (with several other useful attributes) here:

  16. […] that one can pay someone else to rig both the number of citations on Google Scholar and the impact factors, in particular those that some open access journals intend […]

  17. […] open access publishers (as distinct from legitimate open access publishers like Plos). In a new post, he discusses the rise of a similar phenomenon: the gaming of article-level metrics, including by […]

  18. […] Beall has written a pretty negative blog post about altmetrics – it’s worth checking out if you’d like to balance out some of the more pro-altmetrics articles we’d normally […]

  19. Dear Jeffrey:

    Without a doubt, there are concerns about manipulation of article-level metrics. But that is not an argument for continuing to abuse the impact factor and measure work quality based on average journal citations.

    I think an analogous position to that of your post would be, “Google page rank can be manipulated, so it is far better to serve users random search results.”

    Altmetrics have flaws, and we need to think about them. But such metrics do have a place in evaluating impact of a given research paper; they should be one of the metrics when assessing a work. Meanwhile, the Impact Factor has absolutely no place in such assessments.

  20. […] cloud of titles from the top 10,000 articles listed in the Altmetric database. There was also this blog post by Jeffrey Beall, which was highly critical of article-level metrics – Euan has now written a response, […]

  21. I don’t think the charge that altmetrics can be gamed is a good argument against their use, much less evidence that the idea of altmetrics is meretricious. Let’s not kid ourselves — all metrics can be gamed. But that alone is not a good reason not to use metrics.

    Although altmetrics can be gamed, it would be obvious. Typically, one can see not merely the number of tweets (for instance) that an article generates, but also WHO the tweeters are. I’d imagine that it wouldn’t be too difficult for you, Jeffrey, to identify the bogus tweets that were bought and paid for. The same holds true for blog posts — one doesn’t simply see THAT someone blogged about an article (as when one receives an email from that says “someone found your profile”, but one is left to guess from the key words and the country who that someone is). Rather, one sees precisely WHO blogged. One can even go to the blog and see what was written and respond to it.

    Indeed, one of the best things about altmetrics is that they help us connect with our readers. Personally, I am happy to find that sometimes my readers include non-academics. I think it is meretricious simply to dismiss attention from non-academics as irrelevant. Insofar as altmetrics help connect us with all of our interlocutors, I think they are a meritorious idea.

  22. […] I think Jeffrey Beall has got this wrong. He claims that altmetrics are an “Ill-conceived and Meretricious Idea.” […]

  23. Costa Vakalopoulos says:

    The potential use of OA and ALMs is, it appears, as idealistic as peer review: both are open to self-interested abuse, and both systems, if managed properly, could achieve what they purport to.
    There is an obvious risk to OA, which Jeffrey Beall has done well to highlight, but the demand for such a platform has to do partly with the sheer frustration of submitting to journals with 10 percent acceptance rates. There is no doubt about the high quality of many articles thus published, but one certainly gets the impression that the name of one’s mentor and institution has a great deal to do with acceptance. These aren’t necessarily dirty words of course, but a high rejection rate probably has less to do with poor quality than with control and the impression of elitism. In the day of electronic publishing it is a travesty that journals have such low acceptance rates, and the contribution of peer review often appears anti-scientific.
    Unfortunately, predatory publishing appears to be tainting OA and legitimises the often uncompromising rhetoric of those wishing to maintain the status quo instead of finding avenues to improve the current system.
    ALMs appear such a great tool in theory, but there appears to be an urgent need of standardisation that will help the perceived integrity of OAs. Citations appear at face value a good metric, but as anyone who has published knows particular papers are often cited because of prospective desire to be published in the “right” journal.
    It appears that downloads are an objective and genuine measure of interest in a paper that may not translate into a citation, for whatever reason. The problem as I see it is that papers published in less prestigious journals may be ignored in references, but could have a major impact on thinking, and downloads are a better index of this. Obviously several metrics are needed.
    The concern with downloads or page views, as rightly noted by Dr Beall, is gaming, but the brush is applied widely. I’m not sure it’s practical or even possible, but a list of journals using ALMs with a high suspicion of gaming could discourage the practice and establish individual article metrics as a legitimate alternative to the impact factor.
    Jeffrey has taken the first important step of listing predatory journals, but he won’t stop the practice. In the evolution of OA, for those committed to a fairer system than the peer review of a small number of elite journals with a vested interest in erecting barricades, maybe he also needs to provide ideas for quality control that are compatible with, not downright antagonistic to, alternative ways of publishing and the metrics used. Science would be well served, and an attack on gaming that doesn’t do damage to ALMs would seem a good place to start.

  24. […] have sprung up in the wake of pressure for open access. In August 2012 he published “Article-Level Metrics: An Ill-Conceived and Meretricious Idea.” At first reading that criticism seemed a bit strong. On mature consideration, it […]

  25. […] unrelated criticism of altmetrics is that they’d be outright gamed and that the scientific world has nowhere close to the capacity to fight spam like google et al. […]

  26. […] these altmetrics an ill-conceived and meretricious idea? By providing this kind of information, isn’t CrossRef just encouraging feckless, neoliberal […]

  27. […] Article-Level Metrics: An Ill-Conceived and Meretricious Idea [Via | Scholarly Open Access […]

  28. […] Not long ago, Jason received an email from an Impactstory user, asking him to respond to the anti-altmetrics claims raised by librarian Jeffrey Beall in a blogpost titled, “Article-Level Metrics: An Ill-Conceived and Meretricious Idea.” […]

  29. […] [12] Article-Level Metrics: An Ill-Conceived and Meretricious Idea. Scholarly Open Access. […]
