Immanuel Kant famously argued that lying is a parasite on truth-telling. But parasites must be self-limiting: Killing your host is an evolutionary dead-end, so a successful parasite must restrain its own growth.
Lying, then, is often caught in a balance between too little and too much. Fraud and fakery limit themselves, and nothing-but-the-truth is something nobody wants.
Here are five examples of faking that isn’t too big, and isn’t too small, but just right.
Fake purses and saints’ bones
Holy relics were one of the most important trade items during the Middle Ages in Europe—mainly fragments of the bodies of saints and martyrs, but also objects connected with the life of Jesus or the apostles, such as fragments of the True Cross, Mary’s robe, and even one of the baby Jesus’ diapers (which is still to be seen on special occasions in the German city of Aachen). A vast network of traders and thieves supplied the needs of cathedrals, churches, and private collectors, and some historians have suggested that competition for control of the relics trade was a primary motivation for the Crusades.1 The relics market thrived despite the fact, which must have been obvious at the very least on numerical grounds, that many of the goods for sale were fraudulent. At one point 21 different churches claimed to have the Holy Foreskin, while the quantity of nails from the True Cross that Mark Twain encountered on his European travels amounted to “a keg” all told. “And as for bones of St. Denis,” he said, “I feel certain we have seen enough of them to duplicate him if necessary.”
The relics trade has some analogies to our modern trade in luxury goods. These are coveted for their associations and provenance, while being perhaps physically indistinguishable from much cheaper counterfeits. And here, too, we have thriving trade despite a substantial proportion of fraud. Is this because zero counterfeiting is not, in fact, the optimum, even from the perspective of authorized producers?
The availability of counterfeits can allow communities where few can afford the genuine items to remain aspirationally connected to brands that they might be able to purchase in the future.2 Consumer trend forecaster Jaana Jatyri put it this way: “Many luxury brands consider counterfeiting a form of viral marketing.”
A more intriguing theory, put forward in a detailed model by the economist Jen-Te Yao, shows that the “Veblen effect”—the inclination of consumers to value items more highly when the price is higher—allows producers of genuine goods to benefit from an enforcement regime that is sufficiently stringent to impose high costs on the counterfeiters, but not so stringent as to completely eliminate them. The availability of counterfeits increases the willingness of choosy consumers—“snobs” in the professional jargon—to pay high prices for distinctively “genuine” luxury goods.
Studies of consumer opinion have confirmed that the availability of counterfeits raises the price that people consider appropriate for the genuine goods. For example, in one study participants were shown a Louis Vuitton bag and told either that counterfeits were common (“high-counterfeit”) or uncommon (“low-counterfeit”). The ratio between the price the high-counterfeit group considered appropriate for the genuine bag and the price they assigned to the fake was more than twice the corresponding ratio in the low-counterfeit group.
If religious relics operated according to the same principle, then we might surmise that the relevant authorities could have let the occasional fake slip through on purpose. Whatever the case may have been, the punishments were more severe than they are today: Whereas modern authorities sometimes have fake products burned in exemplary bonfires, forgers of medieval relics could find themselves burned along with their wares.
The Ivy League
There is a tier of elite universities, a degree from which is seen as a huge boost to a graduate’s life and career prospects. There are gaps, however, between the credential and its implied ability, which inevitably create openings for fraud: Think of the massive online market for student essays, or the Long Island teenagers who were arrested for taking college admissions exams (SAT and ACT) under the false identities of contemporaries with more money than brains. The motivations here are straightforward—neither the buyers nor the sellers have any personal responsibility for the overall credibility of the system.
Imagine, though, that you are the vice president for admissions and finances at the elite Ivyton University. You are committed to the institution but entirely cynical. Children of wealthy donors and potential donors, some of them alumni, are lining up to beg admission. You could fill your class with them and guarantee a huge financial windfall for the institution. There is strong evidence that the success of elite university graduates is largely a selection effect—Ivyton graduates are successful because they are the sort of people that Ivyton admits. In which case, the value of placing the dim scion of wealth in the Ivyton freshman class is a lifetime of being mistaken for his or her brilliant classmates. (Perhaps augmented by the benefit of years spent bonding over a joint with the nation’s future business and technology leaders.)
This is extremely valuable, worth a lavish contribution, for the first wealthy cuckoo hatched in the Ivyton freshman nest. Each additional “development case” lowers the value of the swindle, since it lowers the overall intellectual prowess of the class they are aiming to be confused with. People get used to asking the question, are you one of those Ivyton graduates? At some point, the marginal cost to reputation (from your perspective as VP admissions) outweighs the cash you might earn. A secret economic analysis may somewhere have advised an admissions office on the precise optimum point, though one might hope that some loftier—or at least longer-term—considerations would have outweighed the cynic’s equilibrium.
Just to illustrate how this could work out, let S be the average salary of Ivyton-calibre graduates of other universities, and S_I the average salary of those with the Ivyton diploma. We might suppose that if an average teenager manages to slip into Ivyton, he or she gets the same salary boost, an effective bonus of S_I − S. If the fraction of phonies is p, this will reduce the perceived quality difference, and hence the bonus, to (1 − p)^a × (S_I − S), where a is a parameter (of the flavor you might see in any real statistical model) that quantifies the extent to which people recognize the presence of phonies at Ivyton and begin to question the quality of the rest of its student body. The bigger a is, the more each phony in the population exposes all the others, and the lower the school’s credibility becomes. If a is bigger than 1, the dilution of the freshman cohort’s quality becomes harder to ignore as the numbers rise, which is what we would expect.
The total bonus—that these otherwise average students may be willing to pay for—is then Np(1 − p)^a × (S_I − S), where N is the size of the class. This is maximized when p = 1/(1 + a). Looking at Harvard, just to pick a real university at random, about one-eighth of Harvard students are legacies—children of Harvard graduates—who are admitted at five times the rate of other applicants.3 If these make up about half of the phony admissions, we can estimate p = 0.25, or about 400 students a year, so a would be about three. Multiplied by the Harvard bonus (S_I − S)—one estimate put this at $13,000 a year—this yields a substantial pile of gratitude to share with alma mater.
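As a sanity check on the algebra, here is a toy calculation in Python. The class size, salary bonus, and exponent are illustrative assumptions, not data; a = 3 is the value at which the optimum p = 1/(1 + a) lands at one quarter.

```python
N = 1600          # assumed class size
BONUS = 13_000    # assumed annual salary premium (S_I - S), in dollars
A = 3.0           # assumed credibility-dilution exponent a

def total_bonus(p: float) -> float:
    """Total perceived bonus N * p * (1-p)**a * (S_I - S) at phony fraction p."""
    return N * p * (1 - p) ** A * BONUS

# Calculus gives the maximizing fraction p* = 1 / (1 + a);
# a coarse grid search over p confirms it numerically.
p_star = 1 / (1 + A)
grid = [i / 1000 for i in range(1001)]
p_best = max(grid, key=total_bonus)

print(f"analytic optimum    p* = {p_star:.3f}")
print(f"grid-search optimum p  = {p_best:.3f}")
print(f"phony students per class: {N * p_star:.0f}")
```

With these made-up numbers the cynic's optimum is a quarter of the class, or 400 students a year.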
Butterflies and bacteria
In 1861, barely two years after the publication of Charles Darwin’s great treatise, naturalist and explorer Henry Walter Bates presented a stunning paper to the Linnaean Society in London. He had resolved an at-first baffling problem that presented itself over his 11 years of minute study of insects in the Amazon. Again and again he had found that his attempts to classify butterflies by their wing patterns would be undermined by the presence of rare individuals that seemed superficially identical, but were revealed by their less conspicuous features to be in fact variants of entirely unrelated species. The more numerous species, he recognized, had some protective chemical property that made it inedible to major predators. The rarer species was then a mimic, protected from predators who were smart enough to have learned to associate the wing pattern with a bad taste or subsequent illness.
Bates and Darwin both recognized this as exactly the kind of example that Darwin’s theory of evolution by natural selection was missing: Clearly the edible species was being selected by the birds, who were gradually plucking out the individuals who didn’t resemble the inedible species. The survivors passed on their protective coloration to their offspring, some of whom would resemble the inedible species even more closely. In the 20th century this “Batesian mimicry” was studied quantitatively, first by the great statistician and evolutionary theorist R.A. Fisher. Fisher saw it as a paradigm of density-dependent selection—evolution of traits whose fitness varies depending on how common they are in the population. After all, he argued, the more common the mimics become, the more chances the predators will have to learn that butterflies with this distinctive pattern don’t taste so bad after all. The mimics will be limited in population, depending largely on how noxious their model is, and how adventurous the birds.
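Fisher's density dependence can be caricatured in a few lines of Python. The functional form below is an invention for illustration, not Fisher's model: predators attack the shared warning pattern more often the more their sampling of it has been rewarded, so mimic survival falls as mimics become a larger fraction of the pattern-bearers.

```python
def mimic_fitness(q: float, noxiousness: float = 3.0) -> float:
    """Relative survival of mimics making up fraction q of all butterflies
    carrying the warning pattern. The more mimics there are, the more often
    a predator sampling the pattern gets a tasty meal rather than a noxious
    one, and the more willing it becomes to attack. `noxiousness` (an
    assumed parameter) scales how strongly the model deters attacks."""
    attack_prob = q / (q + noxiousness * (1 - q))
    return 1 - attack_prob

# Rare mimics free-ride on the model's reputation; common mimics erode it.
for q in (0.05, 0.25, 0.5, 0.9):
    print(f"mimic fraction {q:.2f} -> relative survival {mimic_fitness(q):.2f}")
```

In this toy version survival drops from about 0.98 when mimics are 5 percent of the pattern-bearers to 0.25 when they are 90 percent: the mimics' own success is what limits their numbers.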
A similar dynamic plays out at much smaller scale in the struggle between the human immune system and microbial invaders. This is the process known as molecular mimicry. Numerous infectious agents have been found to express proteins similar enough to human proteins that they may be fooling the immune system into treating them as unthreatening human cells; others produce mimics of specific immune-suppressant proteins. Given that human-mimic proteins are so obviously useful to any ambitious human parasite, why don’t they all have them? One of the pioneers of molecular mimicry in the 1960s, Raymond Damian, proposed that parasites with human proteins, like the butterflies disguised to look like poisonous species, risk teaching the immune system that what look like human cells are actually tasty parasites. The consequence is an auto-immune disease that disables or kills the human, and so limits the parasite’s growth potential. Indeed, infection by molecular-mimic-bearing pathogens is now known to be an important mechanism triggering auto-immunity.
Evasive answers
Vietnam-era United States defense secretary Robert McNamara is credited with the interview advice “Never answer the question that is asked of you. Answer the question that you wish had been asked of you.” Interviewers put up with this evasion because it seems like the best way to get an elusive subject to say anything at all. It turns out that the same principle can be valuable even for getting honest responses to yes/no questions: Where privacy and potential embarrassment are concerns, the best way to elicit the truth can be to let the interviewee fake some responses by answering an alternative question.
Evasive answer bias is a key problem in social research. If you ask people “have you ever cheated on your spouse?” a substantial fraction of those who should answer yes will say no out of embarrassment, no matter how earnestly you promise anonymity. Half a century ago, the economist Stanley L. Warner proposed a simple solution: Give subjects a spinner with part of the field blue and the other part green. Tell them to spin it secretly, and to answer the question “Have you ever cheated on your spouse?” if they get green, and “Have you never cheated on your spouse?” if they get blue.
Assuming people are generally inclined to tell the truth, this procedure protects them from the embarrassment of answering yes: There is literally no way for anyone to know which question any particular person is answering. Overall, though, researchers can still get an accurate estimate of the fraction of subjects who have cheated on their spouse: If the fraction answering yes is Q and the green fraction of the spinner field is P, then a bit of algebra gives the estimated fraction of cheaters as (Q + P − 1)/(2P − 1). (This estimated fraction could come out negative, which is inconvenient, but it will be right on average.)
The question then becomes how much of the spinner field should be green. Effectively, a fraction 2(1 − P) of the answers are wasted, so as P gets closer to 1/2, the precision of your estimate drops, or the number of people you need to ask grows. But the closer P gets to 1, the more people feel like they are being asked point-blank whether they’ve cheated, and the more motivated they are to lie. The optimal tradeoff is somewhere in the middle, and it depends on exactly how many subjects are available and how much protection they need.
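Warner's scheme and its estimator are easy to check by simulation. The numbers here are pure assumptions for illustration: a true cheater fraction of 20 percent, a spinner that is 70 percent green, and 100,000 subjects who all answer their assigned question honestly.

```python
import random

random.seed(42)

TRUE_FRACTION = 0.20   # assumed fraction who have actually cheated
P_GREEN = 0.70         # assumed green share of the spinner
N_SUBJECTS = 100_000

def answer(has_cheated: bool) -> bool:
    """One subject's yes/no reply under the spinner protocol."""
    if random.random() < P_GREEN:
        return has_cheated      # green: "Have you ever cheated?"
    return not has_cheated      # blue:  "Have you never cheated?"

yes = sum(answer(random.random() < TRUE_FRACTION) for _ in range(N_SUBJECTS))
q = yes / N_SUBJECTS                                # observed yes fraction Q
estimate = (q + P_GREEN - 1) / (2 * P_GREEN - 1)    # Warner's estimator

print(f"observed yes fraction Q    = {q:.3f}")
print(f"estimated cheater fraction = {estimate:.3f}")
```

Nobody's individual answer reveals anything, yet the estimate comes out close to the true 20 percent; shrinking P toward 1/2 inflates the factor 1/(2P − 1) and with it the noise in the estimate.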
Privacy and false data
Fake yes/no questions are all well and good, but they don’t cover all of the privacy problems of modern data-rich social research, not to mention the terabytes of personal data that governments and private companies amass daily. Researchers need a way to guarantee subjects’ privacy when data are shared.
To this end, elaborate schemes have been developed for creating versions of complicated databases that are fake in all their details, but genuine in a wide range of large-scale properties that researchers may wish to investigate, including properties that the creators of the synthetic data never thought of. The “fake-ness” of the data in these synthetic databases is measured by a property called differential privacy, which demands, roughly, that no question asked of the whole database yields a substantially different answer with your line of data included than without it.
Of course, privacy is best served by a completely fake database. The idea of differential privacy is to define the extent to which a synthetic database derived from sensitive data—and still containing the required population-level information—may approximate the privacy ideal of complete falsity.
One established technique for generating a synthetic database, first proposed by Donald Rubin in 1993, goes by the name multiply-imputed synthetic micro-data. We start with a frame of individuals who might be sampled—for example, census housing units—for whom some data, like addresses, are already public. A small sample of these individuals is surveyed, yielding some private data (like age, sex, income, and health status).
We want to make it possible for researchers to interrogate the data with regard to the relationship between the private and public data, without actually revealing any private data. What the statistical authority can do is to “impute”—basically, guess, according to a statistical model based on the values that were actually observed—the private values for all the individuals who were not sampled. Then the observed private values from the survey are discarded, and multiple new samples are produced by randomly selecting from the imputed individuals. These samples are released. Researchers can make a wide range of inferences from these synthetic samples and obtain essentially the same results that they would have obtained from the original data.
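The pipeline just described can be sketched end to end in Python. Everything below is invented for illustration: the public variable is a region code known for the whole frame, the private one an income observed only in a survey, and the "model" is nothing fancier than a per-region mean and spread.

```python
import random
import statistics

random.seed(0)

FRAME_SIZE = 10_000
# Public data, known for everyone in the frame (think: addresses).
regions = [random.choice("ABC") for _ in range(FRAME_SIZE)]
# Private data, which in reality we only learn by surveying.
true_income = [{"A": 30, "B": 50, "C": 80}[r] + random.gauss(0, 5)
               for r in regions]

# Survey step: observe private values for a small sample only.
surveyed = set(random.sample(range(FRAME_SIZE), 500))
survey = [(regions[i], true_income[i]) for i in surveyed]

# Model step: fit a crude statistical model (per-region mean and
# standard deviation) to the observed private values.
model = {}
for r in "ABC":
    observed = [inc for reg, inc in survey if reg == r]
    model[r] = (statistics.mean(observed), statistics.stdev(observed))

# Imputation step: guess a private value for every NON-sampled individual,
# then discard the real survey responses entirely.
synthetic = [(regions[i], random.gauss(*model[regions[i]]))
             for i in range(FRAME_SIZE) if i not in surveyed]

# Release step: multiple samples drawn from the imputed records; none of
# them contains any real private value.
releases = [random.sample(synthetic, 500) for _ in range(3)]

# A researcher querying a released sample recovers the population pattern.
sample = releases[0]
means = {r: statistics.mean(inc for reg, inc in sample if reg == r)
         for r in "ABC"}
for r in "ABC":
    print(f"region {r}: synthetic mean income {means[r]:.1f}")
```

The released records are fake in every detail, yet a query like "mean income by region" returns essentially what the true data would have given.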
David Steinsaltz is an associate professor of statistics at the University of Oxford. He blogs at Common Infirmities.
1. Anderson, G.M., Ekelund R.B. Jr., Hebert, R.F., & Tollison, R.D. An economic interpretation of the medieval crusades. The Journal of European Economic History 21, 339-363 (1992).
2. Bekir, I., El Harbi, S., & Grolleau, G. How a luxury monopolist might benefit from the aspirational utility effect of counterfeiting? European Journal of Law and Economics 36, 169-182 (2013).
3. Worland, J.C. Legacy admit rate at 30 percent. The Harvard Crimson www.thecrimson.com (2011).