More Questionable Articles from MDPI

Power to the people.

Imagine you’re a rogue physicist and you’ve invented “a revolutionary new scientific breakthrough that will provide the world with a cheap abundant source of sustainable non-polluting energy. It requires no fuel, can be generated anywhere, is completely scalable and can be used to power microchips as well as homes.” Where do you go to publish your research? Elsevier? Wiley? Taylor & Francis? Nope, you go here:

MDPI. That’s right, you go to the MDPI journal Entropy, a pay-to-publish journal that specializes in publishing borderline papers other publishers won’t touch.

In this case, the fuel-free energy source is called “Entropic Energy” by George S. Levy, the author of two research articles and one “comment” (i.e., correction) in Entropy. San Diego-based Levy is, apparently, Entropic Energy’s inventor.

Levy’s website is called Entropic Power, and on the site’s “Publications” page three publications are listed: one apparently unpublished manuscript and two articles from Entropy.

If you’ve made a major scientific discovery, you need to publish your findings to make your work credible to journalists and potential investors. Unfortunately, a strong peer review often prevents such “discoveries” from being reported in respected scholarly journals.

George S. Levy

Therefore, the trick is to find a journal that looks scholarly but that will accept pretty much anything. Entropy fits the bill.

Levy’s website is here. Like many inventors, he lists on the website scholarly articles that purport to ground his discoveries in peer-reviewed science.

However, because the only scholarly articles he listed are published in MDPI’s Entropy, doubt is cast, I think, on the purported discoveries.

18 Responses to More Questionable Articles from MDPI

  1. John Sloan says:

    From the MDPI website:

    MDPI Publication Ethics Statement
    MDPI is a member of the Committee on Publication Ethics (COPE). MDPI takes the responsibility to enforce a rigorous peer-review together with strict ethical policies and standards to ensure to add _high quality scientific works_ to the field of scholarly publication. Really? Why, then, does Entropy charge authors ~US $1,400 to publish “high quality” articles like the one pointed out in this post? Could it maybe be to _make money_? So much for ethics.

  2. Reblogged this on Blog do Pedlowski and commented:
    Professor Jeffrey Beall exposes yet another of those “trash science” production schemes. Beware of this publisher called “MDPI”!

  3. wkdawson says:

    There is a cited editor (Antonio M. Scarfone), perhaps he can explain.

    At any rate, the article itself makes none of the claims asserted above — those are on Levy’s web site. The article basically compares classical statistics with quantum (Fermi Dirac) statistics, which are different.

    Levy then proposes that the quantum system can be manipulated by the player by changing the particle’s elevation in the well. While this might be so in principle, I don’t see how we could build an experiment in which it can actually be done. On his website, he seems to think that there is some way to really make Maxwell’s demon work efficiently. Most likely, on average, it would take more energy to access the information about the quantum states than you would gain from manipulating them. Nevertheless, the proposition “if the user manipulates it (or if the user can manipulate it)” is not in and of itself against the law, in principle.

    Hence, I do not see Entropy endorsing the claims on Levy’s web page. Rather, Levy is using the publications in Entropy to promote his almost certainly wacky idea. This raises an interesting quandary.

    However, anyone could, in principle, use perfectly legitimate publications (even from Elsevier, Springer, Taylor & Francis, Wiley, etc.) as a means to promote any sort of nonsense. The connection could be next to nil, but that won’t stop them. It’s free speech (i.e., of a loose cannon). The argument that so-and-so has hundreds of publications in X is typically used to justify the claim that the person is a qualified authority to defend some pseudoscience Y. I can think of several people who published in the so-called legitimate journals. No journal can take responsibility for the misuse of information by anyone on anything. They should, however, be more careful about publishing further papers by this author, since he has already shown that he is likely to misuse them. Moreover, “The author declares no conflicts of interest” is not entirely truthful.

  4. Franck Vazquez says:

    This response is from Dr. Franck Vazquez, Chief Executive Officer of MDPI since 1 May 2016 and in charge of science at MDPI since February 2015.
    This article was reviewed by three physicists who are experts in thermodynamics, spins, and Fermi particles, and it received very good ratings from these reviewers.
    This article was also seen and accepted by two independent Academic Editors.
    This article does not present a new theory and does not claim to do so. The author simply proposes a “quantum game” (a theoretical application) to test/use an already existing theory.

  5. Josef Trögl says:

    I have published a few papers with MDPI myself (in the Sensors and Sustainability journals). I have also served as a reviewer (and received a discount on the OA fee). In my opinion the peer review was always up to standard. I even had a rejection in Sensors. My foreign colleagues (including such “aces” of science as Steven S. Ripp) hold the Sensors journal in high regard and are not afraid of publishing with MDPI.
    I am not able to evaluate this paper; it is beyond the scope of my expertise. Nevertheless, drawing also on my experience as an associate editor for a “standard” Springer journal (International Journal of Environmental Science and Technology), I can offer at least two hypotheses for why an author might be interested in publishing with MDPI (without resorting to “they publish everything”):
    1. The review process is fast. This is generally not because of poor, too-hasty reviews (MDPI requires reviews to be finished in the same two weeks as our journal) but because of a professional editorial board that handles submissions as a full-time job. I, on the other hand, work for a university: teaching, running projects, supervising theses, managing a department, and, as a volunteer bonus, usually in the evenings or on weekends, checking new submissions, inviting reviewers (of whom roughly 50% do not respond at all and 30% decline to review), reading reviews, etc. A typical submission “sleeps” 1-2 weeks until the editor-in-chief assigns it to an associate editor (me), then another few days (sometimes even a week) until I carry out the preliminary check, review the anti-plagiarism report, etc., then another 1-2 weeks until at least two reviewers agree, then 2-3 weeks until the reviews are finished, a few days before I check the reviews and submit a decision, and a few days (sometimes up to two weeks) until the editor-in-chief confirms it. In sum, 2-5 months (often more) elapse before the first decision is reached. MDPI’s professional editorial boards reduce these idle (editorial) times to a minimum, and a decision is usually reached within a month or two.
    2. If the author, unlike Mr. Beall :-), is a fan of open access (general readability is a significant advantage if you want to spread your results, especially outside the scientific community), then MDPI is definitely a better choice than “gold” journals offering an open-access option. Elsevier, Springer, T & F, and the other big publishing houses (which are also highly profitable, among other reasons because they parasitize the volunteer work of us editors and reviewers) offer the OA option for 2000 EUR / 3000 USD (and for those OA papers the publisher is paid twice, i.e., by the authors as well as by subscribers, who subscribe to entire journals or journal sets, not single papers). MDPI’s OA fees are much cheaper, even for journals indexed in WoS or Scopus.

    By the way, I suggest checking the quality of the Springer Open journals. Their editorial policy is such that novelty is not considered as long as the text is sound.

    • wkdawson says:

      I would say that what you want are competent, honest reviews that help you write up a good research paper and an appropriate place that fits the research you do.

      As a guest editor for the journal, I’d say the reviewers were no worse than ones I have encountered at some of the so-called high-impact journals. In fact, because some of the manuscripts they receive are of lower quality when they first arrive, I would say that half the reviewers are actually better.

      The presentation Levy uses (in the papers) is not in and of itself objectionable. However, he uses this research work to propose things that are unlikely to be true.

      This is where the conundrum actually comes in on this matter. If a person submits a perfectly respectable paper yet also believes in flying saucers (for example), what should one do? Certainly, if the person cites legitimate research as support for the flying-saucer nonsense, there is evidence of using the research to mislead, and in that case some recourse might be possible. However, what if the person appears to keep these matters separate? It would be unethical to refuse legitimate scientific work from someone simply because you don’t like him/her or some of his/her ideas. (Wackiness is somewhat of a relative matter.) I think there can be _some_ exceptions, but basically one’s duty as a scientist is to evaluate and uphold the standards of the science.

    • Josef Trögl says:

      I would like to add two more notes:
      1) I am a “fan of OA.” My experience of struggling to access scientific literature during my studies led me to this opinion.
      2) The rejection in Sensors came directly from the editor, due to “insufficient novelty.” A good point was that it took only two days to reach this decision, so we could search for another journal.

  6. Anonymous says:

    Someone recently sent me the following, which appeared in MDPI’s new “ncRNA” journal:

    When one of my colleagues read it, he immediately asked, “Did you get this from Beall’s list?”

    This is the most ridiculous entry I’ve read in a field with fierce competition for bad science: people claiming that one of the most labile biomolecules in food, RNA, survives the digestive tract and regulates genes to affect human health. There are no known mechanisms for how this would happen, nor does the stoichiometry make sense, and it’s been disproven by numerous groups.

    The MDPI paper takes the game to a new low, claiming that a particular miRNA, miR-150, comes from the diet, is transferred by viruses and plants, and causes cancer. It’s hard to follow the logic, but they suggest miR-150 can be transferred from people to plants by a bacterium, and back again via the diet. To take the cake, the miRNA is said to be sexually transmitted, then transferred to the offspring, where it causes prostate cancer. Should we screen all prospective fathers for miR-150? If any of this happened, there would surely be a shred of genomic evidence, but there’s none. It’s pure nonsense.

    Sadly, a highly cited academic is at the helm of this journal and oversaw the “review” process. He has apparently convinced several respected scientists to sully their names with it.

    • wkdawson says:

      Hmm…, it is possible to horizontally transfer genetic material via plasmids (i.e., DNA) inside the gut, but RNA is a bit surprising.

      At any rate, whether true or false or some mixture of the two, it won’t be the first time that nonsense has been preserved, sanctioned, and promoted by powerful people in the RNA world. It simply doesn’t depend on the journals.

  7. david says:

    So is MDPI in or out? I get asked all the time to review papers for Forests (I’m an ecologist). When asked to review articles, I always do two things before even checking whether the subject matter fits my expertise: I come to Beall’s list to make sure the journal/publisher is NOT listed, and I go to Web of Science to make sure the journal IS indexed (as they have standards for whether they’ll index a journal’s articles and give the journal an impact factor and other stats). By those two criteria, MDPI journals seem to make the grade and be legit. But then I read things like this post, which makes me think they’re not legit (however, this post lacks a date and author… I assume it’s by Beall, and he doesn’t have guest posters).

    • I removed MDPI from my list in October 2015.
      Yes, I wrote it; the date is at the bottom and hard to see, a feature of the blog platform.
      I have had several guest posts over the past four years and eight months.

      • david says:

        Okay. Thanks for that information (and thanks for the service you provide the scientific community). MDPI’s legitimacy just gets confusing when I see you delisting them and WoS indexing them, yet there are articles all over the place trashing them. I’m happy to review a Forests article so as to give the authors feedback, but it seems a waste of my time if the editors won’t hold the authors’ feet to the fire and require them to respond to reviewer comments.

      • Laco says:

        “MDPI chiefly serves as a place to publish manuscripts that were rejected from stronger publishers”
        Well, per se a valuable undertaking :)
        After all, who nowadays has time to send an article back and forth for months, possibly years? Especially when you are in dire need of publishing (e.g., as a prerequisite for PhD graduation). Does that necessarily mean the content and form of the publication are altogether poor quality?
        I don’t think so.
        As for myself, I have never published anything in OA journals, but I am currently considering sending an article to an MDPI journal, quite an established one. I would never, of course, send it to a really fraudulent journal.
        The article itself is no “high science” but a solid average. A “good article of trade,” one might say. If I had time (half a year at least, let’s say) I would probably be able to place it in an average journal with a lower impact factor (and save my institution’s money). But since my PhD student, who is the principal contributor to this article, must submit his PhD thesis (together with 2 research articles in impact-factor journals) by the end of March, this would be too risky: he could be expelled for exceeding his study time and lose 5 years of work for nothing. The rules here (in an Eastern EU state) are set like this, and even if I consider them ridiculous, I cannot change them.
        So what to do? “Professional prestige” is not the only thing to be taken into account.

      • So, for you and your student, MDPI is more like a repository than a scholarly publisher, no?

        I predict that the peer review reports you receive from MDPI will ask you to revise the paper to cite earlier papers published in MDPI journals.

      • Laco says:

        In some sense, maybe… But I would not say ONLY a repository. Of course I want to communicate the results of my work as well. The article, as I’ve mentioned, is not really bad; it reports solid research, but no “sensational breakthroughs.” Alas, the latter are more in demand by the journals, quite understandably, as they help to boost their impact factors. Our research, for the most part, falls into the category of “incremental research,” as they call it. Nevertheless, I don’t think it’s useless or “bad science.” I am convinced this type of research is necessary to achieve any real breakthroughs in the future, and by far the largest part of mankind’s scientific knowledge consists of exactly such contributions.
        At the very least, it does not provide fake results, as many articles today do, especially those from certain countries which I am not going to name here. Let’s say, when I see many of the articles publishing results on the bioactivity of new compounds, I can see immediately there is something fishy about them. You don’t need to check the measurements experimentally; it is sufficient to have worked with the same category of substances long enough. I might of course be mistaken in some cases, and the results just might be genuine, only why do they always come from the same (few) countries?
        Why am I almost never able to reproduce the results on bioactivity even when I try? Even in the rare cases when I am able to reproduce the synthesis of the target compounds. Am I really so bad at this?
        What I want to say is: people who have no moral or professional restraints against “embellishing” their newly prepared compounds with great biological/pharmacological activities gain a kind of “comparative advantage,” which makes it even more difficult for the others to push through their research (in consequence discredited as “incremental research”).
        In my opinion, the main problem is the recent flood of low-quality research, or even fake research, especially from “threshold countries.” And where there is demand, there is also supply. You cannot check research results “by hand.” Even the best peer-review process examines basically only the more formal criteria; in reality it is almost impossible to check the authenticity of the submitted work, not in these numbers.

      • wkdawson says:

        I would narrow the considerations to the following questions: (1) is the work potentially useful information for other people to use (i.e., should it be published somewhere because it is _useful_ information), and (2) was the work done in good faith? It seems from your comments that your answer to both questions is “yes.”

        The MDPI journals are peer-reviewed; a repository is not. Moreover, in my experience with peer review, one of the best peer-review jobs I received came from an obscure OA journal, and one of the poorest came from a subscription-access (SA) journal. The rest were evenly spread. What you most want is quality feedback from the peer reviewers. Whether OA or SA, you can find reviewers who are politicians or don’t take their job seriously. So it is the luck of the draw on that.

        There is plenty of research that would be good to know about but is difficult to find. Perhaps this is because of attitudes like insisting that only things that are purportedly “new” are worthy of publication in SA journals. It surely encourages people to hype their work with lots of balderdash rather than just write what they did and what they found and let other people decide if it is worth anything. As a researcher, there have been many times when I needed information and could not find it anywhere. Is it because of the reasons you cite, that some editor or reviewer considered it “incremental” or “insignificant”? Should _useful_ research be wasted like that?

        It is also important to reflect that research itself is sometimes the luck of the draw. Sometimes a project turns out to have low-hanging fruit that quickly yields a publication in PNAS, but a project can also yield mostly thistles and thorns even when it seemed like a good project at its inception. If a student has the tenacity to stick to the work diligently for 5 years and manages to get something useful from it, that may actually say a lot more about the student than about the person who was fortunate enough to catch the low-hanging fruit the first time. Should the reward go only to those who get lucky and are blessed, or should it also go to those who are persistent and tenacious? Overall, good science is generally the product of persistence and tenacity, and lucky breaks are not to be expected, in my opinion. You might find the book “Fooled by Randomness” a useful perspective here.

        The remaining issue is the journals themselves. Obviously, it is better not to sell yourself short. I don’t know what to recommend here. Career-wise, getting lucky the first time opens lots of doors, but a career is a lifelong journey, and I would say that what is more important is what you actually _do_ in that life. I think it is unfortunate that administrators base their decisions on trivial metrics like the impact factor. Some of the greatest works were published in obscure journals, and even Einstein, the icon of great discoveries, published his work at a time when the Annalen der Physik did not even have peer review. Does that make the Annalen der Physik a “repository”? Work should be judged by reading the papers, not by these false measuring scales.
