Just the Tip of the Iceberg | Peer to Peer Review

Kevin L. Smith

Gibberish is defined by Merriam-Webster as “foolish, confused, or meaningless words,” and the example sentences that are given suggest that gibberish is produced by people who are overexcited or unconscious. But it turns out that computers can also write gibberish, if they are programmed to do so, as we all found out when it was announced that publishers Springer and IEEE were withdrawing 120 papers they had published because they were found to be just such computer-generated gibberish.

The story broke in February in an article in Nature, which explained that the articles had apparently been “written” by software called SCIgen, “which randomly combines strings of words to produce fake computer-science papers.” A French researcher named Cyril Labbé subsequently wrote a program designed to detect SCIgen-produced papers and ran it against some databases of scientific publications. He found a significant number of these phony articles in two commercial databases. Apparently all of the papers were in conference proceedings that were identified by the publishers as peer-reviewed. At least one scholar listed as an author on a paper said that his name was used without his knowledge.

The parallels between this situation and the “sting” about peer-review in open access journals that was published by Science last year made it inevitable that the usual suspects would line up to make markedly different assertions about the Labbé study. Those who tend to defend traditional publishers point out that the papers were in conference proceedings and that publishers typically have less control over those types of publications than they do over journals. Advocates for newer models of publication—and I guess I am one of these “usual suspects”—point out that what is sauce for the open access goose is sauce for the toll-access gander. I hope to make a larger point about both studies before I am through, but let me first offer a series of observations that I hope will help provide context for the debate.

First, the two publishers “caught” in this study, Springer and IEEE, both issued statements about the situation. Springer promised better procedures in the future, cited the large volume of papers they handle each year, and hinted that the real problem was the person or persons who submitted the papers in the first place (they refer to it as “fraud”). Weak as this is, it is better, in my opinion, than IEEE’s statement, which struck me as little more than puffery and denial, with its assertion that it follows “strict governance procedures,” its boast of a “129-year heritage,” and its claim that it is “taking steps.”

Second, it is worth noting that there is a difference between these SCIgen papers and the pseudo-science article that John Bohannon designed for use in his open-access sting. Bohannon crafted a paper that was superficially plausible but that contained, he said, scientific errors that should have been caught by peer-review. The SCIgen papers, on the other hand, literally do not make sense; the words are connected grammatically but not logically, so that any competent reader should know she is not reading anything with substance. I, for example, would probably read the Bohannon papers without seeing anything wrong, scientifically ignorant as I am; they were intentionally designed to fool people. But I can still tell quite easily that the SCIgen papers are phony; they reveal themselves to any sensible reader.
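To give a sense of how little machinery it takes to produce prose that parses but says nothing, here is a toy sketch of template-based text generation. This is not SCIgen’s actual grammar; the templates and vocabulary below are invented purely for illustration of the general technique of slotting random phrases into grammatical frames.

```python
import random

# Toy illustration of grammar-based text generation (not SCIgen's actual
# grammar): random phrases are slotted into grammatical templates, so the
# output parses as English but carries no meaning.
TEMPLATES = [
    "We {verb} that {noun} can be made {adjective} without {gerund}.",
    "Our {adjective} framework {verb}s the analysis of {noun}.",
]
WORDS = {
    "verb": ["demonstrate", "argue", "confirm"],
    "noun": ["redundancy", "the Turing machine", "lambda calculus"],
    "adjective": ["ubiquitous", "stochastic", "event-driven"],
    "gerund": ["caching", "synthesizing checksums"],
}


def fake_sentence() -> str:
    """Fill one randomly chosen template with randomly chosen words."""
    template = random.choice(TEMPLATES)
    return template.format(**{slot: random.choice(options) for slot, options in WORDS.items()})


if __name__ == "__main__":
    # Print a short "abstract" of three generated sentences.
    print(" ".join(fake_sentence() for _ in range(3)))
```

Every sentence a script like this emits is syntactically well formed, which is exactly why a human reader, and not a formatting check, is needed to notice that nothing is actually being said.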
What this suggests is that, whether the journals are open or closed access, the SCIgen research exposes an even deeper problem with peer-review. These papers didn’t slip past inattentive peer-reviewers; they may not have been reviewed at all, in spite of the publishers’ claims.

Third, we should address the fact that the papers were all found in conference proceedings. I have no doubt that publishers have less control over conference proceedings than they do over other content that they publish; they may simply rely on the organizers of the specific conferences to have made quality judgments. But that alone does not excuse the huge failure that has occurred here, and it is a failure that appears systemic. From the perspective of the libraries that purchase these collections, we should remember a couple of things. First, these items were marked as peer-reviewed, which means that a common marker we use to teach students about scholarly sources, as well as to evaluate faculty, is unreliable. Second, the reason there are so many conference proceedings included in these databases is that publishers use the ever-growing number of papers they publish to market their products and justify their price increases. There is a financial and marketing incentive to include more and more stuff, and strict oversight of peer-review in that circumstance is economically counter-productive.

Here is where I think these two studies that document the failures of peer-review come together. Both show us that the economic forces behind publishing, both traditional and author-fee Gold OA, are at odds with peer-review. Or, more accurately, they show that there is no hard line, based either on business model or on publisher reputation, between journals that practice good peer-review and those where bad peer-review exists. No journal and no publisher can assure us that it always presents exclusively well-reviewed material. The rule here for libraries is caveat emptor—let the buyer beware—and for the academy as a whole, the lesson is that we need to stop putting so much weight on the assertion that a particular journal is peer-reviewed.

This is why I have called these studies the tip of an iceberg. They have shown us, in my opinion, that peer-review has a systemic problem. When a journal calls itself peer-reviewed or a database marks some sources as “scholarly,” we actually know nothing about what that means for individual articles. And we should remember that the salesperson who tells us how many peer-reviewed articles are in his database and then justifies the price increase by the growing number of articles is contradicting himself, since the increasing number of papers, as even Springer seems to admit, cuts against quality peer-review. I have no doubt that we have seen just a small part of a systemic and growing failure of peer-review, at least as a label and as a marketing tool.

In spite of all this, however, peer-review is a vital part of the scholarly communications system, as our faculty authors tell us over and over. What we need to do is not give up on peer-review, but recognize what it is good for and take steps to improve it where it is weak. As far as the value of peer-review is concerned, it is located almost entirely at the point where reviews are communicated to authors and assist in the improvement of each individual scholarly article or book. What peer-review is good for is helping authors see places for improvement or for further research.
The reason we need peer-review, in short, is to promote dialogue amongst scholars and make research better. But that value varies from article to article; it depends on the work of each specific reviewer and is captured, in its entirety, in the response of each author. The flip side of this statement about the value of peer-review is that it does not have much value once we move beyond the work of the individual authors. Both of these recent exposés show us why assertions about peer-review should not be trusted as markers for the quality of journal titles; at that point, such assertions are little more than marketing rhetoric.

Finally, the way to improve peer-review is to make it more transparent. If readers of scholarly articles can see the interactions between reviewers and authors, more of the value of that exchange will be passed on. In a presentation earlier this month at the SPARC Open Access meeting, neuroscience researcher Erin McKiernan talked about how the journal PeerJ provides higher-value peer-review, in her opinion, because it is done in the open; the whole history of comment and response is available for readers to see.

Beyond the rather silly debates between partisans of different “stings,” this is the real advantage of open access. It can move us beyond the mysterious “black box” that peer-review becomes at the level of journal titles and show us how it has actually made research better, one article at a time.