Many are excited about innovative measures that purport to quantify scholarly impact at a more granular level. Called article-level metrics, or ALMs, these measures depart from time-honored computations of scholarly influence such as the journal impact factor. Instead, they rely on data drawn from popular and generally non-scholarly sources such as social media.

As someone who studies predatory open-access scholarly publishers, I can promise you that any system designed to measure impact at the article level will be gamed, rendering the metrics useless and invalid. There are already companies, such as the firm Get Likes, that sell Facebook likes. Predatory publishers succeed in part because of complicit authors, and those same authors will pollute popularity-based metrics just as predatory publishers have poisoned scholarly publishing.

Numerical values like page views will be shamelessly gamed. Workers in low-wage countries will be hired to reload web pages thousands of times, deceitfully inflating the page-view counts of particular articles. Previously unknown researchers will suddenly boast more Twitter followers than Neil deGrasse Tyson because they have paid companies to add bogus followers to their social media accounts, and those bogus followers will like and share their articles, actions that will be counted toward the metrics.

The general public lacks the credentials needed to judge or influence the impact of scientific work, and any metric that relies even in part on public input will prove invalid. Article-level metrics will likely award high scores to works on climate change skepticism and intelligent design, groundlessly elevating pseudo-science to the level of science, at least in terms of measured impact, and there are already numerous questionable publishers willing to publish articles on such topics. Web-based polls are already gamed in this way, with people emailing all their friends and asking them to vote a particular way.

Moreover, popularizing article-level metrics means that articles about Bigfoot and astrology will likely register a greater impact than articles about curing cancer or discovering the nature of dark matter, because far more people are interested in popular topics than in scientific ones.

In late 2012, a group of publishers organized an attack on me and my work. They effectively used various internet tricks, such as email spoofing. They created hundreds of bogus blogs to falsely accuse me of fraud. The high number of fake blogs they created multiplied the impact of their attacks, and many believed the lies they spread. Article-level metrics will be ruined by this same type of abuse. Indeed, I envision articles in predatory journals miraculously getting very high altmetrics values.

As a way to measure the impact of scientific work, the journal impact factor still has great value. Indeed, the true impact of science is measured by its influence on subsequent scholarship, not by how many times it gets mentioned on Entertainment Tonight or how many Facebook likes it gets in the Maldives.
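For clarity, the journal impact factor I am defending here is the standard two-year calculation used in Journal Citation Reports, which, roughly speaking, is

$$\text{Impact factor for year } Y = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2},$$

a measure driven entirely by citations in the scholarly literature rather than by popular attention.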

It’s quite possible that some are supporting article-level metrics simply because they want to undermine Thomson Reuters, the publisher of Journal Citation Reports, the product that includes impact factor information. Many also blindly support anything that is new, regardless of whether it is legitimate or likely to endure.

Many new bogus impact factors have been introduced lately, including the Global Impact Factor (GIF) and a knock-off calling itself the Journal Impact Factor (JIF), which trades on the name of the genuine measure. More will likely appear, and bogus article-level-metric products will certainly arise as well. Without rigorous vetting and quality control, no new scientific impact measure will be successful or valid.

Recently, I have noted the appearance of what I call “article promotion companies”: discipline-specific websites that spam the authors of scholarly articles with offers to promote their work on the companies’ own sites. An example is the company Educational Researches. They generally charge $35 to promote a single article. Many people email me asking about the ethics of these services. Certainly many more such services will appear if article-level metrics catch on.

Article-level metrics reflect a naïve view of the scholarly publishing world. The gold open-access model has introduced much corruption into the process of scholarly communication; we should learn from this and avoid any system, such as article-level metrics, that is prone to gaming, corruption, and a lack of transparency.