During World War II, a radar operator tracks an airplane over Hamburg, guiding searchlights and anti-aircraft guns in relation to a phosphor dot whose position is updated with each sweep of the antenna. Abruptly, dots that seem to represent airplanes begin to multiply, quickly swamping the display. The actual plane is in there somewhere, impossible to locate owing to the presence of “false echoes.”1

The plane has released chaff—strips of black paper backed with aluminum foil and cut to half the target radar’s wavelength. Thrown out by the pound and then floating down through the air, they fill the radar screen with signals. The chaff has exactly met the conditions of data the radar is configured to look for, and has given it more “planes,” scattered all across the sky, than it can handle.
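
The cutting rule lends itself to a quick arithmetic check: a strip resonates, and so reflects most strongly, when it is half as long as the radar's wavelength, and the wavelength follows from the radar's frequency. A minimal sketch, using an illustrative frequency (the German Würzburg fire-control radar operated near 560 MHz; the figure is ours, not the text's):

```python
# Chaff is cut to half the target radar's wavelength, the length at which
# a strip resonates and reflects most strongly. Wavelength = c / frequency.

C = 299_792_458  # speed of light, m/s

def chaff_strip_length_m(radar_freq_hz: float) -> float:
    """Half-wavelength strip length for a radar of the given frequency."""
    return C / (2 * radar_freq_hz)

# Illustrative: a fire-control radar operating around 560 MHz calls for
# strips roughly 27 cm long.
print(f"{chaff_strip_length_m(560e6) * 100:.1f} cm")  # -> 26.8 cm
```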

This may well be the purest, simplest example of the obfuscation approach. Because discovery of an actual airplane was inevitable (there was, at the time, no way to make a plane invisible to radar), chaff overwhelmed the limited time and bandwidth of the detection system by creating too many potential targets. That the chaff worked only briefly as it fluttered to the ground, and was not a permanent solution, was irrelevant under the circumstances: it only had to work well enough, for long enough, for the plane to pass beyond the radar’s range.

Many forms of obfuscation work best as time-buying “throw-away” moves. They can get you only a few minutes, but sometimes a few minutes is all the time you need.

The example of chaff also helps us to distinguish, at the most basic level, between approaches to obfuscation. Chaff relies on producing echoes—imitations of the real thing—that exploit the limited scope of the observer. (Fred Cohen terms this the “decoy strategy.”2) As we will see, some forms of obfuscation generate genuine but misleading signals—much as you would protect the contents of one vehicle by sending it out accompanied by several other identical vehicles, or defend a particular plane by filling the sky with other planes—whereas other forms shuffle genuine signals, mixing data in an effort to make the extraction of patterns more difficult. Because those who scatter chaff have exact knowledge of their adversary, chaff doesn’t have to do either of these things.

TrackMeNot: blending genuine and artificial search queries

TrackMeNot, developed in 2006 by Daniel Howe, Helen Nissenbaum, and Vincent Toubiana, exemplifies a software strategy for concealing activity with imitative signals.3 The purpose of TrackMeNot is to foil the profiling of users through their searches. It was designed in response to the U.S. Department of Justice’s request for Google’s search logs4 and to the surprising discovery by reporters from The New York Times that some identities and profiles could be inferred even from anonymized search logs published by AOL Inc.5

Our search queries end up acting as lists of locations, names, interests, and problems. Whether or not our full IP addresses are included, our identities can be inferred from these lists, and patterns in our interests can be discerned. Responding to calls for accountability, search companies have offered ways to address people’s concerns about the collection and storage of search queries, though they continue to collect and analyze logs of such queries.6 Preventing any stream of queries from being inappropriately revealing of a particular person’s interests and activities remains a challenge.7

The solution TrackMeNot offers is not to hide users’ queries from search engines (an impractical method, in view of the need for query satisfaction), but to obfuscate by automatically generating queries from a “seed list” of terms. Initially culled from RSS feeds, these terms evolve so that different users develop different seed lists. The precision of the imitation is continually refined by repopulating the seed list with new terms generated from returns to search queries. TrackMeNot submits queries in a manner that tries to mimic real users’ search behaviors. For example, a user who has searched for “good wifi-cafe Chelsea” may also have searched for “Savannah kennels,” “freshly pressed juice Miami,” “Asian property firm,” “exercise delays dementia,” and “telescoping halogen light.” The activities of individuals are masked by those of many ghosts, making the pattern harder to discern so that it becomes much more difficult to say of any query that it was a product of human intention rather than an automatic output of TrackMeNot. In this way, TrackMeNot extends the role of obfuscation, in some situations, to include plausible deniability.
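
TrackMeNot itself is a browser extension; the Python sketch below is only a schematic of the loop just described, not its actual code. The search endpoint is a placeholder, and the term-harvesting rule is invented for illustration: draw a few seed terms, issue a decoy query, and fold words from the results back into the seed list so that each user's list drifts in its own direction.

```python
import random
import re
import requests  # any HTTP client would do

SEARCH_URL = "https://search.example.test/q"  # placeholder, not a real engine

# TrackMeNot initializes its seed list from RSS feeds; we hard-code one.
seed_terms = ["wifi cafe chelsea", "savannah kennels", "pressed juice",
              "property firm", "exercise dementia", "halogen light"]

def decoy_query(seeds):
    """Compose a plausible-looking query from one to three seed terms."""
    return " ".join(random.sample(seeds, k=random.randint(1, 3)))

def evolve(seeds, result_text, cap=200):
    """Fold words harvested from search results back into the seed list,
    so that each user's list drifts in its own direction over time."""
    words = re.findall(r"[a-z]{5,}", result_text.lower())
    seeds = seeds + random.sample(words, k=min(3, len(words)))
    return seeds[-cap:]

for _ in range(5):
    try:
        resp = requests.get(SEARCH_URL, params={"q": decoy_query(seed_terms)},
                            timeout=10)
        seed_terms = evolve(seed_terms, resp.text)
    except requests.RequestException:
        pass  # the placeholder endpoint won't resolve; a real one would
```

A real implementation also randomizes the timing and phrasing of queries to mimic human search behavior, which this sketch omits.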

Twitter bots: filling a channel with noise

During protests over problems that had arisen in the 2011 Russian parliamentary elections, much of the conversation about ballot-box stuffing and other irregularities initially took place on LiveJournal, a blogging platform that originated in the United States but attained its greatest popularity in Russia—more than half of its user base is Russian.8 Though LiveJournal is quite popular, its user base is very small relative to those of Facebook and Google; it has fewer than 2 million active accounts. Thus, LiveJournal is comparatively easy for attackers to shut down by means of a distributed denial-of-service (DDoS) attack—that is, by using computers scattered around the world to issue requests for the site in such volume that the servers making the site available are overwhelmed and legitimate users can’t access it. Such an attack on LiveJournal, in conjunction with the arrests of activist bloggers at a protest in Moscow, was a straightforward approach to censorship.9 When and why, then, did obfuscation become necessary?

The conversation about the Russian protests migrated to Twitter, and the powers interested in disrupting it then faced a new challenge. Twitter has an enormous user base, with infrastructure and security expertise to match; it could not be taken down as easily as LiveJournal. Based in the United States, Twitter was also in a much better position to resist political manipulation than LiveJournal’s parent company. (Although LiveJournal service is provided by a company set up in the U.S. for that purpose, the company that owns it, SUP Media, is based in Moscow.10) To block Twitter outright would require direct government intervention; the LiveJournal attack had been carried out independently, by nationalist hackers who may or may not have had the approval and assistance of the Putin/Medvedev administration.11 Parties interested in halting the political conversation on Twitter therefore faced a challenge that will become familiar as we explore obfuscation’s uses: time was tight, and traditional mechanisms for action weren’t available. A direct technical approach—blocking Twitter within the country or launching a worldwide denial-of-service attack—wasn’t possible, and political and legal angles of attack couldn’t be used. Rather than stop the Twitter conversation, then, the attackers set out to overload it with noise.

Tweetstorm: To combat the organization of parliamentary election protests like this one, Russian authorities flooded protest-relevant Twitter hashtags with meaningless tweets. YURI KADOBNO / Getty Images

During the Russian protests, the obfuscation took the form of thousands of Twitter accounts suddenly piping up, posting tweets that used the same hashtags as the protesters.12 Hashtags are a mechanism for grouping tweets together; for example, if I add #obfuscation to a tweet, the symbol # turns the word into an active link, and clicking it will bring up all other tweets tagged with #obfuscation. Hashtags are useful for organizing the flood of tweets into coherent conversations on specific topics, and #триумфальная (referring to Triumfalnaya, the location of a protest) became one of several tags people could use to vent their anger, express their opinions, and organize further actions. (Hashtags also play a role in how Twitter determines “trending” and significant topics on the site, which can then draw further attention to what is being discussed under a tag—the site’s Trending Topics list often draws news coverage.13)
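
The grouping mechanism itself is simple enough to show in a few lines of Python: extract the tags and index the tweets by them. The tweets here are invented:

```python
import re
from collections import defaultdict

tweets = [
    "Meeting at seven, bring signs #триумфальная",
    "Photos from the square #триумфальная",
    "Reading about #obfuscation today",
]

by_tag = defaultdict(list)
for tweet in tweets:
    for tag in re.findall(r"#\w+", tweet):  # \w matches Cyrillic in Python 3
        by_tag[tag.lower()].append(tweet)

print(by_tag["#триумфальная"])  # the two protest tweets, grouped
```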

If you were following #триумфальная, you would have seen tweet after tweet from Russian activists spreading links to news and making plans. But those tweets began to be interspersed with tweets about Russian greatness, or tweets that seemed to consist of noise, gibberish, or random words and phrases. Eventually those tweets dominated the stream for #триумфальная, and those for other topics related to the protests, to such a degree that tweets relevant to the topic were, essentially, lost in the noise, unable to get any attention or to start a coherent exchange with other users. That flood of new tweets came from accounts that had been inactive for much of their existence. Although they had posted very little from the time of their creation until the time of the protests, now each of them was posting dozens of times an hour. Some of the accounts’ purported users had mellifluous names, such as imelixyvyq, wyqufahij, and hihexiq; others had more conventional-seeming names, all built on a firstname_lastname model—for example, latifah_xander.

Obviously, these Twitter accounts were “Twitter bots”—programs purporting to be people and generating automatic, targeted messages. Many of the accounts had been created around the same time. In numbers and in frequency, such messages can easily dominate a discussion, effectively ruining the platform for a specific audience through overuse—that is, obfuscating through the production of false, meaningless signals.
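
The telltale signals described above (batch creation, a long-dormant history, a sudden burst of posts) lend themselves to a simple heuristic. The sketch below assumes a hypothetical account record with those fields; real bot detection is, of course, far more involved:

```python
from datetime import datetime

def looks_like_flood_bot(account, now):
    """Flag an account that sat nearly silent since creation and is
    suddenly posting dozens of times an hour. `account` is a made-up
    record, not a real Twitter API object."""
    age_days = max((now - account["created_at"]).days, 1)
    dormant = account["lifetime_posts"] / age_days < 0.1  # near-silent history
    bursting = account["posts_last_hour"] >= 24           # dozens per hour now
    return dormant and bursting

account = {"created_at": datetime(2011, 7, 1),
           "lifetime_posts": 3,
           "posts_last_hour": 40}
print(looks_like_flood_bot(account, now=datetime(2011, 12, 6)))  # True
```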

French decoy radar emplacements: defeating radar detectors

Obfuscation plays a part in the French government’s strategy against radar detectors.14 These fairly common devices warn drivers when police are using speed-detecting radar nearby. Some radar detectors can indicate the position of a radar gun relative to a user’s vehicle, and thus are even more effective in helping drivers to avoid speeding tickets.

In theory, tickets are a disincentive to excessively fast and dangerous driving; in practice, they serve as a revenue source for local police departments and governments. For both reasons, police are highly motivated to defeat radar detectors.

The option of regulating or even banning radar detectors is unrealistic: an estimated 6 million French drivers own them, and turning that many ordinary citizens into criminals seems impolitic. Unable to stop drivers from detecting radar guns, the French government has turned to obfuscation to render that detection less useful in high-traffic zones, deploying arrays of devices that trigger radar detectors’ warning signals without actually measuring speed. These devices mirror the chaff strategy: the warning chirps multiply and multiply again. One of them may indeed indicate actual speed-detecting radar, but which one? The meaningful signal is drowned in a mass of other plausible signals. Either drivers risk speeding tickets or they slow down in response to the deluge of radar pings, and the civic goal is accomplished. No matter how one feels about traffic cops or speeding drivers, the case is instructive: obfuscation here promotes an end not by destroying the adversary’s devices outright but by rendering them functionally irrelevant.

AdNauseam: clicking all the ads

In a strategy resembling that of the French radar-gun decoys, AdNauseam, a browser plug-in, resists the online surveillance behind behavioral advertising by clicking all the banner ads on all the Web pages its users visit. Working in conjunction with Adblock Plus, AdNauseam functions in the background, quietly clicking all blocked ads while recording, for the user’s interest, details about the ads that have been served and blocked.

The idea for AdNauseam emerged out of a sense of helplessness: it isn’t possible to stop the ubiquitous tracking performed by ad networks, or even to comprehend the intricate institutional and technical complexities of its socio-technical back end, which includes Web cookies and beacons, browser fingerprinting (identifying visitors by the combination and configuration of technologies they use), ad networks, and analytics companies. Efforts to find middle ground through a Do Not Track technical standard have been frustrated by powerful actors in the political economy of targeted advertising. In this climate of no compromise, AdNauseam was born. Its design was inspired by a slender insight into the prevailing business model, which charges prospective advertisers a premium for delivering viewers with proven interest in their products. What more telling evidence is there of interest than clicks on particular ads? Clicks also sometimes constitute the basis of payment to an ad network and to the ad-hosting website, and clicks on ads, in combination with other data streams, build up the profiles of tracked users. Like the French radar decoy systems, AdNauseam isn’t aiming to destroy the ability to track clicks; instead, it diminishes the value of those clicks by burying the real ones among clicks it generates automatically.
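
AdNauseam is a browser extension, and the following is not its implementation, only a minimal Python sketch of the underlying move under one assumption: an ad blocker has already handed us the click-through URLs it intercepted. Fetching each one in the background registers interest the user never had. The URLs are illustrative stand-ins:

```python
import random
import time
import requests

# Click-through URLs an ad blocker intercepted on the current page.
# Illustrative stand-ins; AdNauseam gets the real ones from Adblock Plus.
blocked_ad_urls = [
    "https://ads.example.test/click?id=101",
    "https://ads.example.test/click?id=102",
]

def click_quietly(url):
    """Fetch the ad's click-through URL in the background: the network
    registers a click, while the user never sees the ad."""
    try:
        requests.get(url, timeout=10)
    except requests.RequestException:
        pass  # a failed decoy click costs nothing

for url in blocked_ad_urls:
    click_quietly(url)
    time.sleep(random.uniform(1, 5))  # pacing, so clicks look less mechanical
```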

Personal disinformation: strategies for individuals

Disappearance specialists have much to teach would-be obfuscators. Many of them are private detectives or “skip tracers”—professionals in the business of finding fugitives and debtors—who reverse engineer their own process to help clients stay lost. Many of their techniques have nothing to do with obfuscation and are merely evasive or concealing—for instance, creating a corporation that can lease your new apartment and pay your bills so that your name is not connected with those common and publicly searchable activities. In response to the proliferation of social networking and online presence, however, disappearance specialists advocate a strategy of disinformation, a variety of obfuscation. “Bogus individuals,” to quote the disappearance consultant Frank Ahearn, can be produced in numbers and detail that will “bury” pre-existing personal information that might crop up in a list of Web search results.15 This entails creating a few dozen fictitious people with the same name and the same basic characteristics, some with personal websites, some with accounts on social networks, and all of them intermittently active. For clients fleeing stalkers or abusive spouses, Ahearn recommends simultaneously producing numerous false leads that an investigator would be likely to follow: a credit check for a lease on an apartment in one city (a lease that was never actually signed); applications for utilities, employment addresses, and phone numbers scattered across the country or the world; a checking account holding a fixed sum, its debit card given to a traveler so that expenses accrue in remote locations. Strategies suggested by disappearance specialists are built on what is known about the adversary: the goal is not to make someone “vanish completely” but to put one far enough out of sight for practical purposes, exhausting the seeker’s budget and resources.

Apple’s “cloning service” patent: polluting electronic profiling

In 2012, as part of a larger portfolio purchase from Novell, Apple acquired U.S. Patent 8,205,265, “Techniques to Pollute Electronic Profiling.” An approach to managing data surveillance without sacrificing services, it parallels several systems of technological obfuscation we have described already. This “cloning service” would automate and augment the process of producing misleading personal information, targeting online data collectors rather than private investigators.

A “cloning service” observes an individual’s activities and assembles a plausible picture of his or her rhythms and interests. At the user’s request, it spins off a cloned identity, equipped with identifiers that authenticate it (to social networks, if not to more demanding observers) as a real person. These identifiers might include small amounts of actual confidential data (a few details of a life, such as hair color or marital status) mixed in with a considerable amount of deliberately inaccurate information. Starting from its initial data set, the cloned identity acquires an email address from which it will send and receive messages, a phone number (many online calling services make phone numbers available for a small fee), and voicemail service. It may have an independent source of funds (perhaps a gift card or a debit card connected with a fixed account that gets refilled from time to time) that enables it to make small transactions. It may even have a mailing address or an Amazon locker—two more signals that suggest personhood. To these signals may be added interests formally specified by the user and fleshed out with existing data made accessible by the scraping of social-network sites and by similar means. If a user setting up a clone were to select from drop-down menus that the clone is American and is interested in photography and camping, the system would infer that the clone should be interested in the work of Ansel Adams. The clone can conduct searches (in the manner of TrackMeNot), follow links, browse pages, and even make purchases and establish accounts with services (subscribing to a mailing list devoted to deals on wilderness excursions, say, or following National Geographic’s Twitter account). These interests may draw on the user’s actual interests, as inferred from the user’s browsing history and the like, but may diverge from them in a gradual, incremental way. (One could also salt the clone’s profile with demographically appropriate activities, automatically chosen, building on the basics of one’s actual data by selecting interests and behaviors so typical that they even out the telling idiosyncrasies of selfhood.)
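
The inference step, from “American, photography, camping” to Ansel Adams, can be sketched with a hand-built association map standing in for whatever knowledge source a real service would mine. Everything in this Python fragment is illustrative:

```python
# A toy association map; a real service might mine this from scraped
# social-network data, as the patent suggests.
RELATED = {
    frozenset({"american", "photography", "camping"}): ["Ansel Adams"],
    frozenset({"photography"}): ["camera reviews", "darkroom technique"],
    frozenset({"camping"}): ["national parks", "trail maps"],
}

def expand_interests(declared):
    """Derive plausible secondary interests for a clone from the
    interests its owner picked from drop-down menus."""
    declared = {d.lower() for d in declared}
    derived = []
    for trigger, topics in RELATED.items():
        if trigger <= declared:  # every trigger term was declared
            derived.extend(topics)
    return derived

print(expand_interests(["American", "photography", "camping"]))
# ['Ansel Adams', 'camera reviews', 'darkroom technique',
#  'national parks', 'trail maps']
```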

After performing some straightforward analysis, a clone can also take on a person’s rhythms and habits. If you are someone who is generally offline on weekends, evenings, and holidays, your clone will do likewise. It won’t run continuously, and you can call it off if you are about to catch a flight, so an adversary will not be able to infer easily which activities are not yours. The clones will resume when you do. (For an explanation of why we now are talking about multiple clones, see below.) Of course, you can also select classes of activities in which your clones will not engage, lest the actors feigning to be you pirate some media content, begin to search for instructions on how to manufacture bombs, or look at pornography, unless they must do so to maintain plausibility—making all one’s clones clean-living, serious-minded network users interested only in history, charitable giving, and recipes might raise suspicions. (The reason we have switched from talking about a singular clone to speaking about multiple clones is that once one clone is up and running there will be many others. Indeed, imagine a Borgesian joke in which sufficiently sophisticated clones, having learned from your history, demography, and habits, create clones of their own—copies of copies.) It is in your interest to expand this population of possible selves, leading lives that could be yours, day after day. This fulfills the fundamental goal outlined by the patent: Your clones don’t dodge or refuse data gathering, but in complying they pollute the data collected and reduce the value of profiles created from those data.
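
The rhythm-mirroring step is also easy to sketch, assuming the service can see timestamps of the user's real activity: learn the hours in which the user is routinely online, and let clones act only inside them, with a manual pause for occasions like catching a flight. All names and data here are invented:

```python
from collections import Counter
from datetime import datetime

def active_hours(event_times, threshold=0.05):
    """Hours of the day in which the user is routinely online, estimated
    from timestamps of real activity."""
    counts = Counter(t.hour for t in event_times)
    total = sum(counts.values())
    return {h for h, c in counts.items() if c / total >= threshold}

def clone_may_act(now, hours, paused=False):
    """A clone acts only when its owner plausibly could be online, and
    never while the owner has paused it (say, before catching a flight)."""
    return not paused and now.weekday() < 5 and now.hour in hours

# Illustrative history: a user active mornings and early afternoons on workdays.
history = [datetime(2012, 5, d, h) for d in range(1, 20) for h in (9, 14)]
hours = active_hours(history)
print(clone_may_act(datetime(2012, 5, 21, 9), hours))  # True: Monday, 9 a.m.
print(clone_may_act(datetime(2012, 5, 26, 9), hours))  # False: Saturday
```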

“Bayesian flooding” and “unselling” the value of online identity

In 2012, Kevin Ludlow, a developer and an entrepreneur, addressed a familiar obfuscation problem: What is the best way to hide data from Facebook?16 The short answer is that there is no good way to remove data, and wholesale withdrawal from social networks isn’t a realistic possibility for many users. Ludlow’s answer is by now a familiar one.

“Rather than trying to hide information from Facebook,” Ludlow wrote, “it may be possible simply to overwhelm it with too much information.” Ludlow’s experiment (which he called “Bayesian flooding,” after a form of statistical analysis) entailed entering hundreds of life events into his Facebook Timeline over the course of months—events that added up to a life worthy of a three-volume novel. He got married and divorced, fought cancer (twice), broke numerous bones, fathered children, lived all over the world, explored a dozen religions, and fought for a slew of foreign militaries. Ludlow didn’t expect anyone to fall for these stories; rather, he aimed to produce a less targeted personal experience of Facebook through the inaccurate guesses to which the advertising now responds, and as an act of protest against the manipulation and “coercive psychological tricks” embedded both in the advertising itself and in the site mechanisms that provoke or sway users to enter more information than they may intend to enter. In fact, the sheer implausibility of Ludlow’s Timeline life as a globe-trotting, caddish mystic-mercenary with incredibly bad luck acts as a kind of filter: No human reader, and certainly no friend or acquaintance of Ludlow’s, would assume that all of it was true, but the analysis that drives the advertising has no way of making such distinctions.
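
A minimal sketch of the flooding itself, with invented templates in the spirit of Ludlow's Timeline; no Facebook API is involved, and how the fabricated events get entered is left open:

```python
import random
from datetime import date, timedelta

# Invented templates, echoing the marriages, illnesses, relocations,
# and military stints of Ludlow's fictional Timeline.
TEMPLATES = [
    "Got married in {place}",
    "Recovered from {illness}",
    "Moved to {place}",
    "Enlisted in the {army} army",
    "Broke my {bone}",
]
FILLERS = {
    "place": ["Reykjavik", "Lagos", "Osaka", "La Paz"],
    "illness": ["cancer", "malaria", "a mystery fever"],
    "army": ["French", "Brazilian", "Mongolian"],
    "bone": ["wrist", "collarbone", "ankle"],
}

def fake_life_event():
    template = random.choice(TEMPLATES)
    return template.format(**{k: random.choice(v) for k, v in FILLERS.items()})

def flood(n=300, start=date(2005, 1, 1), span_days=365 * 8):
    """Yield n (date, event) pairs scattered across the years."""
    for _ in range(n):
        yield start + timedelta(days=random.randint(0, span_days)), fake_life_event()

for day, event in flood(3):
    print(day, "-", event)
```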

Ludlow hypothesizes that, if his approach were to be adopted more widely, it wouldn’t be difficult to identify wild geographic, professional, or demographic outliers—people whose Timelines were much too crowded with incidents—and then wash their results out of a larger analysis. The particular understanding of victory that Ludlow envisions is a limited one. His Bayesian flooding isn’t meant to counteract and corrupt the vast scope of data collection and analysis; rather, its purpose is to keep data about oneself both within the system and inaccessible. Max Cho describes a less extreme version: “The trick is to populate your Facebook with just enough lies as to destroy the value and compromise Facebook’s ability to sell you”17—that is, to make your online activity harder to commoditize, as an act of conviction and protest.
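
The washing-out Ludlow anticipates is equally easy to sketch. Because a single flooder drags the mean and standard deviation toward itself, a robust statistic such as the median absolute deviation does the job better; the counts and cutoff here are illustrative:

```python
from statistics import median

def wash_outliers(event_counts, cut=5.0):
    """Drop users whose timelines are implausibly crowded. The median
    absolute deviation (MAD) is used because, unlike the mean and standard
    deviation, a single extreme flooder cannot drag it upward."""
    values = list(event_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1
    return {u: c for u, c in event_counts.items() if abs(c - med) / mad <= cut}

counts = {"alice": 40, "bob": 55, "carol": 38, "dan": 47, "flooder": 900}
print(wash_outliers(counts))  # the flooder is washed out of the analysis
```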

Manufacturing conflicting evidence: confounding investigation

The Art of Political Murder: Who Killed the Bishop?—Francisco Goldman’s account of the investigation into the death of Bishop Juan José Gerardi Conedera—reveals the use of obfuscation to muddy the waters of evidence collection.18 Bishop Gerardi, who played an enormously important part in defending human rights during Guatemala’s civil war of 1960–1996, was murdered in 1998.

As Goldman documented the long and dangerous process of bringing at least a few of those responsible within the Guatemalan military to justice for this murder, he observed that those threatened by the investigation didn’t merely plant evidence to conceal their role. Framing someone else would be an obvious tactic, and the planted evidence would be assumed to be false. Rather, they produced too much conflicting evidence, too many witnesses and testimonials, too many possible stories. The goal was not to construct an airtight lie, but rather to multiply the possible hypotheses so prolifically that observers would despair of ever arriving at the truth. The circumstances of the bishop’s murder produced what Goldman terms an “endlessly exploitable situation,” full of leads that led nowhere and mountains of seized evidence, each factual element calling the others into question. “So much could be made and so much would be made to seem to connect,” Goldman writes, his italics emphasizing the power of the ambiguity.

The thugs in the Guatemalan military and intelligence services had plenty of ways to manage the situation: access to internal political power, to money, and, of course, to violence and the threat of violence. In view of how opaque the situation remains, we do not want to speculate about exact decisions, but the fundamental goal seems reasonably clear. The most immediately significant adversaries—investigators, judges, journalists—could be killed, menaced, bought, or otherwise influenced. The obfuscating evidence and other materials were addressed to the larger community of observers, a proliferation of false leads throwing enough time-wasting doubt over every aspect of the investigation that it could call the ongoing work, and any conclusions, into question.

Finn Brunton is an assistant professor of media, culture, and communication at New York University and the author of Spam: A Shadow History of the Internet.

Helen Nissenbaum is a professor of media, culture, and communication and computer science at New York University. She is also the author of Privacy in Context and one of the developers of the TrackMeNot software.

References

1. Finkel, M. On Flexibility: Recovery from Technological and Doctrinal Surprise on the Battlefield. Stanford University Press, Palo Alto, CA (2011).

2. Cohen, F. The use of deception techniques: Honeypots and decoys. In Bidgoli, H. (Ed.), Handbook of Information Security. Wiley, Hoboken, NJ (2005).

3. Howe, D. & Nissenbaum, H. TrackMeNot: Resisting surveillance in web search. In Kerr, I., Luckock, C. & Steeves, V. (Eds.), Lessons From the Identity Trail: Anonymity, Privacy and Identity in a Networked Society. Oxford University Press, New York, NY (2009).

4. Gonzales v. Google, Inc., Case (Subpoena) CV 06-8006MISC JW (N.D. Cal.).

5. Barbaro, M. & Zeller Jr., T. A Face Is Exposed for AOL Searcher No. 4417749. The New York Times (2006).

6. Singhal, A. Search, Plus Your World. Google official blog; googleblog.blogspot.com (2012).

7. Toubiana, V. & Nissenbaum, H. An analysis of Google logs retention policies. Journal of Privacy and Confidentiality 3, 3–26 (2011).

8. Maslinsky, K., Koltcov, S., & Koltslova, O. Changes in the topical structure of Russian-language LiveJournal: The impact of elections 2011. Higher School of Economics Research Paper No. WP BPR 14/SOC/2013 (2013). DOI: 10.2139/ssrn.2209802

9. Shuster, S. Why Have Hackers Hit Russia’s Most Popular Blogging Service? Time.com (2011). The number of Russian accounts cited in the article appears to be the total number of accounts rather than the number of active accounts. We believe activity to be a more meaningful measure.

10. Parkhomenko, Y. & Tait, A. BlogTalk. Index on Censorship 37, 174–178 (2008).

11. Russia: Control From the Top Down. Enemies of the Internet (2014).

12. Krebs, B. Twitter Bots Drown Out Anti-Kremlin Tweets. KrebsonSecurity.com (2011).

13. Friedman, A. Hashtag Journalism. Columbia Journalism Review (2014).

14. Le Gouvernement Veut Rendre les Avertisseurs de Radars Inefficaces [The Government Wants to Make Radar Detectors Ineffective]. Le Monde (2011).

15. Goodchild, J. How to Disappear Completely. CSOonline.com (2011).

16. Ludlow, K. Bayesian Flooding and Facebook Manipulation. KevinLudlow.com (2012).

17. Cho, M. Unsell Yourself—A Protest Model Against Facebook. Yale Law & Technology; yalelawtech.org (2011).

18. Goldman, F. The Art of Political Murder: Who Killed the Bishop? Grove Press, New York, NY (2008).

Reprinted with permission from Obfuscation: A User’s Guide for Privacy and Protest by Finn Brunton and Helen Nissenbaum, published by the MIT Press.
