Hoss Cartwright, a former editor of the International Journal of Agricultural Innovations and Research, had a good excuse for missing the 5th World Congress on Virology last year: He doesn’t exist. Burkhard Morgenstern, a professor of bioinformatics at the University of Göttingen, dreamt him up, and built a nice little scientific career for him. He wrote Cartwright a Curriculum Vitae, describing his doctorate in Studies of Dunnowhat, his rigorous postdoctoral work at Some Shitty Place in the Middle of Nowhere, and his experience as Senior Cattle Manager at the Ponderosa Institute for Bovine Research. Cartwright never published a single research paper, but he was appointed to the editorial boards of five journals. Apparently, no one involved in the application processes remembered the television show Bonanza, or the giant but amiable cowboy named “Hoss” who was played by actor Dan Blocker. Despite Cartwright’s questionable credentials, he was invited to speak at several meetings such as the 5th World Congress on Virology—typically a mark of recognition as an expert.

Morgenstern was tired of the constant barrage of solicitations from suspect science journals asking him to join their editorial boards—the academic equivalent of the flood of credit card applications that anyone with a mailbox receives. “At some point I was just so fed up with all those spam emails from these junk publishers that I just did this little experiment,” he says. “I contacted them under the fake name Peter Uhnemann and asked to be accepted on the editorial board.” Uhnemann was a name borrowed from a German satirical magazine and Morgenstern’s first alter ego.

Uhnemann immediately joined the masthead of the journal Molecular Biology, which belongs to the publishing house OMICS International—which in August was sued by the U.S. Federal Trade Commission for deceptive practices—and is produced “in association” with the Nigerian Society of Biochemistry and Molecular Biology. Unfortunately, Morgenstern admits, he was a bit too subtle: “Hardly anybody knows the name ‘Peter Uhnemann,’ so I then tried it with a more popular name, and this happened to be Hoss Cartwright.”

He has also found work for Borat Sagdiyev, the character created by comedian Sacha Baron Cohen. Borat is better known by his first name than as a senior investigator at the University of Kazakhstan, a post from which he still sits on the editorial board of at least one journal, Immunology and Vaccines. That journal belongs to Academician’s Research Center, a publisher based in India that’s suspected of “predatory” behavior: preying on scientists desperate to see their work in a journal, no matter how obscure or unread. (We emailed ARC about its quality control efforts, or lack thereof, but haven’t heard back.)

Cartwright, Uhnemann, Borat, and others are, in some sense, sting operations built to expose the growing practice of gaming the metrics by which scientific publications are judged. The number of publications a scholar has, how many times they have been cited, who the co-authors are—metrics like these should all be secondary to the quality of the work itself, but often they are actually more important.

“Scientists no longer publish to share results with their colleagues, but rather to improve their ‘metrics,’ ” laments Morgenstern. These metrics can have real impact on scientists’ careers.

There is a tiny but growing horde of scientists and collaborators who are figments of someone’s imagination.

Edward Calabrese, a toxicologist at the University of Massachusetts Amherst, has sat on committees tasked with hiring and promoting faculty, and he sees signs of vulnerability in the process. “Committees are somewhat to very self-serving and tend to lower the bar based on personal relationships with colleagues,” Calabrese says. “For the most part I doubt that they are very alert to being manipulated and can therefore be easy targets. … In most departments I think it is likely that the faculty may not even evaluate the quality of the papers, giving up their judgment to journals, peer review processes, and the letters of external reviewers,” he adds. “It is easier to use these means for decision making.”

And that’s in the United States. The Medical Council of India recently updated its guidelines to require publication of four papers to become associate professor, and eight to become a full professor. The policy has triggered fears among some scientists that the quality of Indian research will fall as people try to pad their resumes with bogus or crummy papers.

The fact is that professional advancement for scientists around the world is becoming more and more challenging in an era of ever-scarcer funding for research and tightening competition for faculty spots. To succeed in the publish-or-perish environment of academia, most scientists hit the lab and play within the rules. Others, though, hatch schemes.

The nuclear option is faking data. But the complexity of the modern scientific publishing environment has opened a host of new, more sophisticated approaches: fluffing up resumes with scam appointments to editorial boards, adding nonexistent authors to studies (or real, high-powered co-authors who didn’t participate in the research), and even publishing junk journal articles for the sake of publication count.

But one of today’s most direct new frauds is incredibly simple: Make up new people.

Jesus Angel Lemus is a Spanish veterinary researcher who has lost 13 papers to retraction over concerns about the veracity of his data. That part’s not so unusual—even 13 retractions doesn’t put Lemus among the top 30 researchers for retractions. What makes Lemus interesting is that he appears to have created a fictional co-author for five of his articles, one “Javier Grande” (big Xavier, whose vague affiliations, ironically enough, made him a big man on campus at the University of Castilla-La Mancha). It’s difficult to understand why, although bulking up author lists is one way to increase the apparent credibility of a study, particularly if the added names come from a prestigious—or prestigious-sounding—institution.

That was the experience of William G. Hoover in the late 1980s. Hoover, a physicist at Lawrence Livermore National Lab, had tried unsuccessfully to get a paper published in two leading journals. So he added a co-author from a prestigious-sounding institute, the Institute for Advanced Studies at Palermo, Sicily, and resubmitted the work. Sure enough, the paper was accepted and published. He did this several times with the same result. But the name Hoover chose—Stronzo Bestiale—was a sly tell: In Italian, it means “giant asshole.” And yet Bestiale remains in the scientific literature, just like Hoss Cartwright. So does Galadriel Mirkwood, an Afghan hound that belonged to biologist Polly Matzinger of the National Institutes of Health. She was fed up with the use of passive voice in scientific papers, and decided to add her pup’s name to a paper in protest.

It’s less easy to wrap one’s mind around what happened to Bruce Spiegelman, a cancer biologist at Harvard who in 2013 noticed a paper with a curious feature: Although he didn’t recognize any of the authors on the article, the data looked more than a bit familiar. Indeed, they were his findings, which he had presented at meetings and had been in the process of writing up for publication. Spiegelman objected to the journal, Biochemical and Biophysical Research Communications (BBRC), which agreed to withdraw the paper.

As Nature reported at the time, the journal discovered that the group of authors, from the University of Thessaly, appeared to be completely made up—indeed, none had published a scientific paper before—leading the editors to issue the following notice: “BBRC has been targeted by a scheme to defraud our editors, reviewers and readers with submission of a manuscript with falsified author and institutional information and therefore wholly unverifiable scientific claims.”

Identity fraud in science will likely be as hard to stamp out as it is in other walks of life. What that means, at least for the time being, is that there is a tiny but growing horde of scientists and collaborators who are figments of someone’s imagination.

Another emerging channel of scientific fraud is to interfere with the process of peer review, through which new scientific work is evaluated by informed members of the community before it is published. Hyung-In Moon, a medicinal-plant researcher formerly at Dongguk University in Gyeongju, South Korea, admitted in 2012 that he had been reviewing his own papers by exploiting the online system that allowed him to recommend reviewers. He simply recommended himself. Sometimes the names he used were made up, and sometimes they were real scientists. In all cases, the email addresses he provided came back to him. Journals have retracted 28 papers in the scandal, and one editor resigned because the episode depressed him so much.

More recently, the publisher SAGE, based in Thousand Oaks, California, found itself investigating 130 email accounts after uncovering evidence in 2013 that its peer-review process had been compromised: Reviews that researchers say take a half-day’s work, on average, had come back within minutes of being assigned. The ringleader of the group turned out to be a Taiwanese engineer named Peter Chen, whose name appeared on 60 articles that SAGE eventually decided to retract. The scandal caused Taiwan’s education minister, who had co-authored some of the papers, to resign as well. At the time of this writing, some 350 papers had been retracted for bogus peer review since 2012.
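That tell, reviews returned in minutes rather than the hours a genuine report takes, hints at how simple a first screen could be. Here is a minimal, purely illustrative sketch of such a check; the records, names, and one-hour threshold are assumptions, not any publisher’s actual system.

```python
from datetime import timedelta

# Hypothetical review records: (reviewer, time from assignment to submitted report)
reviews = [
    ("reviewer_1", timedelta(hours=6)),
    ("reviewer_2", timedelta(days=2)),
    ("reviewer_3", timedelta(minutes=4)),  # far faster than a real review
]

# Assumed threshold: a genuine review takes hours, not minutes
SUSPICIOUSLY_FAST = timedelta(hours=1)

flagged = [(name, delay) for name, delay in reviews if delay < SUSPICIOUSLY_FAST]
print(flagged)  # [('reviewer_3', datetime.timedelta(seconds=240))]
```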

Unless the evaluation of scientists can be wrenched away from bean-counting metrics, history is likely to repeat itself.

If making up people or corrupting peer review seems too onerous, a scientist can simply publish a paper in a journal nobody reads. The emergence of a new business model in scientific publishing—coupled with the insatiable imperative to publish one’s work—has opened the floodgates for such outlets. Traditionally, readers—through their universities—paid subscriptions to support publishing; today, a number of publishers instead charge authors for the privilege, which allows them to make papers freely available to all readers. While most journals using this model are legitimate, a small but growing cohort has lowered its standards in order to publish (and charge). These predatory journals, as University of Colorado Denver librarian Jeffrey Beall has demonstrated, want to appear to be real journals, with rigorous peer review, so that they can collect payments from authors who publish in them, who in turn can point to long publication records. Some scientists are swindled by predatory journals. Others, though, publish in them precisely because they are scarcely read, notes Mario Biagioli, a professor of law and of science and technology studies at the University of California, Davis.

Finally, the softest category of metric gaming is citation stuffing. This practice can be applied even to quality science published by legitimate scientists in top-quality journals. Being able to refer to your papers as “highly cited”—an official designation from indexers such as Clarivate Analytics’ Web of Science, until recently part of Thomson Reuters—is a big CV booster. Many scientists list their “h-index” (the largest number h such that h of their papers have each been cited at least h times) prominently on their CV or homepage. Biagioli warns that such metrics have become a version of Goodhart’s Law: “When a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it.”
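For readers who haven’t run the numbers themselves, the arithmetic behind the h-index fits in a few lines. A minimal sketch, using made-up citation counts rather than any indexer’s actual code:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4
print(h_index([10, 8, 5, 4, 3]))  # 4
```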

Here’s how “citation stuffing” rings work: If I cite you in exchange for citing me, someone—perhaps a reader, perhaps a machine—will pick up on that fairly easily. But if I create a ring of authors, and I agree to cite you, then you agree to cite Professor B, and then she agrees to cite Professor C, and on down the line until Professor Z cites me, it’s much more difficult to detect. And then there are journals that ask authors to cite previous papers from their own pages, to bulk up the journal’s metrics, many of which are based on citations. Every year, Thomson Reuters delists a number of journals—a serious punishment—for excessive self-citation. Biagioli notes in a recent article in Nature that “A curious feature of this kind of misconduct is that the work itself—the science reported in the paper—is usually not in question. Those responsible for this kind of post-production misconduct seek to extract value not from the article itself, but from its citations.”
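To see why the ring is harder to spot, think of citations as a directed graph. In the toy sketch below (made-up authors, not real detection software), a naive check for mutual back-scratching finds nothing, even though the loop is plainly there once you look for cycles.

```python
# Hypothetical "who cites whom" graph: A -> B -> C -> D -> A, with no mutual pair
citations = {
    "A": {"B"},
    "B": {"C"},
    "C": {"D"},
    "D": {"A"},
}

def reciprocal_pairs(graph):
    """Pairs of authors who cite each other directly (the easy case to flag)."""
    return {frozenset((a, b)) for a, cited in graph.items()
            for b in cited if a in graph.get(b, set())}

def has_citation_loop(graph):
    """Depth-first search for any cycle of citations, however long."""
    visiting, done = set(), set()
    def visit(node):
        if node in visiting:
            return True
        if node in done:
            return False
        visiting.add(node)
        if any(visit(nxt) for nxt in graph.get(node, set())):
            return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(visit(node) for node in graph)

print(reciprocal_pairs(citations))   # set(): no reciprocal citing to flag
print(has_citation_loop(citations))  # True: the ring shows up only as a cycle
```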

It’s tempting to laugh off some of these antics, which seem driven by ego and self-interest. But they also underscore a painful truth: Unless the evaluation of scientists—and the all-important doling out of funding—can be wrenched away from bean-counting metrics, history is likely to repeat itself. Tomorrow’s metrics gamers may come up with some other ruse, and spoofers like Morgenstern will invent the next Hoss Cartwright in response. Reading and evaluating a selection of a job applicant’s papers takes far more time than plugging a bunch of numbers into a matrix. But it’s precisely that output, not metrics, that science is supposed to be about. The agencies that fund grants and the committees that hire and promote academic researchers need to get back to doing the hard job of assessing the value and quality of candidates’ scientific work rather than leaning on the crutch of overly simplified publication metrics.

Adam Marcus, the managing editor of Gastroenterology & Endoscopy News, and Ivan Oransky, global editorial director of MedPage Today, are the co-founders of Retraction Watch, generously supported by the John D. and Catherine T. MacArthur Foundation.
