Over the weekend, more details emerged about the U.S. federal government’s no-longer-secret digital-surveillance program code-named PRISM. The project gave the National Security Agency (NSA) and other agencies unprecedented access to data, like emails and chats, passing through popular services owned by Google, Yahoo, Microsoft, and other Internet giants. As this additional information about PRISM seeped out, it began to look as though ignorance was a key part of the project’s design: the Internet companies involved were kept in the dark about a program they were directly assisting, or not directly assisting, depending on how you read the phrase.
One of the many questions left by last week’s revelations about PRISM was whether Internet companies were actively helping the government get at users’ personal data by providing a “back door” into their servers, or whether agencies pulled the data on their own, possibly by grabbing it “upstream” from the companies, taking advantage of the government’s access to the deep structure of the Internet. The PowerPoint presentation leaked to the Guardian and the Washington Post last week claimed that PRISM included data “Collection directly from the servers of these U.S. Service Providers.” But in the same articles, the companies firmly denied any participation. “We do not provide any government organization with direct access to Facebook servers,” said the chief security officer for Facebook. “We have never heard of PRISM,” said a spokesman for Apple. “We do not provide any government agency with direct access to our servers.” So which is it: Can the feds directly look at data going through web servers, or not?
A later Washington Post article, in which anonymous executives at some of the companies confirmed PRISM’s existence, suggests the answer is more complicated than yes or no:
According to a more precise description contained in a classified NSA inspector general’s report, also obtained by The Post, PRISM allows “collection managers [to send] content tasking instructions directly to equipment installed at company-controlled locations,” rather than directly to company servers.
So PRISM doesn’t get its info directly from company servers; it gets it from equipment hooked directly to those servers, which, based on the little we know, may be a distinction without a difference. And once that equipment is there, how involved are the corporations?
The companies cannot see the queries that are sent from the NSA to the systems installed on their premises, according to sources familiar with the PRISM process. … From their workstations anywhere in the world, government employees cleared for PRISM access may “task” the system and receive results from an Internet company without further interaction with the company’s staff. …
“The server is controlled by the FBI,” an official with one of the companies said. “We do not offer a download feature from our server.”
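To make that distinction concrete, here is a purely hypothetical sketch in Python of the kind of arrangement the Post’s description implies. Everything in it, the class names, the replication step, the “task” method, is invented for illustration and claims no resemblance to PRISM’s actual internals; the point is only that an appliance colocated with a company’s servers can answer queries the company never sees, which makes a statement like “we do not provide direct access to our servers” literally true.

```python
# Purely hypothetical sketch; no claim about how PRISM actually works.

class CompanyServer:
    """The provider's own machine, running code the company controls."""

    def __init__(self):
        self._records = []  # user data the service stores

    def store(self, user, text):
        self._records.append({"user": user, "text": text})

    def replicate_to(self, appliance):
        # The company's only visible role: copy records to the on-site
        # appliance, as (hypothetically) compelled by a court order.
        for record in self._records:
            appliance.receive(record)


class OnSiteAppliance:
    """Government-controlled equipment installed at the company's site.
    The company cannot see the queries ("tasking") sent to it."""

    def __init__(self):
        self._mirror = []

    def receive(self, record):
        self._mirror.append(record)

    def task(self, selector):
        # An analyst anywhere in the world sends a selector; company
        # staff are not involved and never see it.
        return [r for r in self._mirror if r["user"] == selector]


server = CompanyServer()
appliance = OnSiteAppliance()
server.store("alice", "meet at noon")
server.store("bob", "running late")
server.replicate_to(appliance)

# "Collection" happens here, one hop away from the company's servers.
print(appliance.task("alice"))  # [{'user': 'alice', 'text': 'meet at noon'}]
```

In this toy setup, the company’s code never handles a query, and the government’s code never touches the company’s server, yet every record still flows from one to the other.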
And how did this particular approach emerge? Was it dictated by the government or the companies, or based on technical requirements, or decided on a whim?
These executives said PRISM was created after much negotiation with federal authorities, who had pressed for easier access to data they were entitled to under previous orders granted by the secret FISA court.
Assuming the report is accurate, it suggests that the system may have been set up to give the companies plausible deniability: withholding potentially troublesome information so that people, or in this case companies, don’t get pulled into any controversy over that info. The chief security officer at Facebook and the spokesman at Apple said the companies didn’t provide “direct access” to their servers, which, read literally, was true, even if potentially misleading. And Apple’s spokesman may have been telling the truth when he said he’d never heard of the program: “Because the program is so highly classified, only a few people at most at each company would legally be allowed to know about PRISM, let alone the details of its operations,” according to the Post.
This kind of intentional uncertainty about the data streaming through servers is deeply embedded in the Internet. Much of the power and freedom (to whatever extent it exists) of the network rests on the idea that each message gets cut up and sent, packet by packet, through a global web of machines that pay no attention to what the message says, only to getting it to its intended destination, where it is reunited with its siblings and reassembled into the original message, whether it’s a 32-byte ping, a 4-gigabyte HD movie, or an unremarkable-looking, world-changing PowerPoint leaked from the NSA. The Internet’s bit-blind nature was reinforced in 1998 by the “safe harbor” provisions of the Digital Millennium Copyright Act, which protect web services and Internet service providers from liability for data that passes through their servers (albeit with some significant limitations). More recently, open-Internet advocates have called for more people to open up their Wi-Fi networks for everyone to use, arguing that helpful people who do so won’t be held liable for bad things done by other people on their networks.
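As a minimal illustration of that bit-blindness, here is a toy Python sketch of packetization and reassembly. The fixed packet size and the (sequence number, chunk) format are invented for the demo; real IP fragmentation and TCP reassembly are far more involved. The point is that the code moving the packets never inspects what the bytes say.

```python
# Toy illustration of packet-switching: split a message into fixed-size
# packets, deliver them in any order, and reassemble. The "network" code
# never looks at what the bytes mean -- only at sequence numbers.
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for the demo)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Cut a message into (sequence_number, chunk) packets."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Sort packets by sequence number and rebuild the original message."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"an unremarkable-looking, world-changing deck"
packets = packetize(message)
random.shuffle(packets)  # packets may arrive in any order
assert reassemble(packets) == message
```

Whether the payload is a ping, a movie, or a leaked slide deck, the routing layer treats it identically, which is precisely why questions about who gets to look inside those packets matter so much.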
There is still much we don’t know about PRISM, and it’s hard to say whether and when we will find out the rest of the story. One thing that is clear is that, regardless of what you think of the program, it pokes at one of the central tensions inherent in the Internet: to what degree it’s an open space, free from government oversight, for everyone’s use, versus a place rife with evidence too useful to be ignored by agencies trying to protect the nation. That tension is likely to persist as long as we have an Internet anything like the one we have now, leaving us web users always somewhat uncertain of exactly who might be accessing the personal information we now know is not, ultimately, private.
Amos Zeeberg is the digital editor of Nautilus. Find him on Twitter (@settostun), and feel free to email him (amos[at]nautil[dot]us) feedback about this post, the blog, or Nautilus as a whole.