The VENONA project represents one of the most successful counter-intelligence attacks of the Cold War. It revolved around an encryption system, called the “one-time pad scheme,” that was completely unbreakable, but required generating a new, random encryption key for every message. This was hugely inconvenient, and prone to human error. And error is exactly what happened. Someone on the Soviet side (it is still not known who) began to reuse keys, allowing the West to decrypt about 3,000 top-secret messages.
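For readers who like to see the mechanics, here is a toy Python sketch (not the actual VENONA ciphers, just an illustration of the principle) showing both why a one-time pad is unbreakable when used correctly and why reusing the key is fatal: XORing two ciphertexts that share a key cancels the key out and exposes the relationship between the two messages.

    import os

    def otp_encrypt(message: bytes, key: bytes) -> bytes:
        """One-time pad: XOR each message byte with a fresh, random key byte."""
        assert len(key) >= len(message)
        return bytes(m ^ k for m, k in zip(message, key))

    key = os.urandom(32)                      # fresh random key, as long as the message
    m1 = b"ATTACK AT DAWN".ljust(32)
    m2 = b"RETREAT AT ONCE".ljust(32)

    c1 = otp_encrypt(m1, key)                 # perfectly secure on its own
    c2 = otp_encrypt(m2, key)                 # fatal mistake: the key is reused

    # The key cancels out: c1 XOR c2 equals m1 XOR m2, leaking the relationship
    # between the two plaintexts -- the kind of foothold VENONA analysts exploited.
    leak = bytes(a ^ b for a, b in zip(c1, c2))
    assert leak == bytes(a ^ b for a, b in zip(m1, m2))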

The story teaches us that security requires not just strong codes, but an easy-to-use and secure implementation. To this day, attacks rarely break the encryption algorithm itself (cryptographers have done their job very well!); instead they usually find ways to sidestep it entirely, by discovering unexpected weaknesses in the implementation. The modern computer security age has therefore been as much about reducing “cryptographic overhead” (cutting fat from the system and eliminating avenues for unexpected attacks) as about the codes themselves. This has meant inventing new systems, as well as improving existing ones.

In a seminal 1949 paper, Claude Shannon defined the notion of perfect encryption: It is achieved when an encrypted message, or ciphertext, is entirely independent of the original message. He also proved its cost: The encryption key needs to be at least as long as the message. Encrypting a streaming movie in real time would require a movie-length key. This suggested that cryptography had a limited future, at least outside of espionage.
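In modern notation, Shannon's requirement and its price are usually written roughly as follows (a standard textbook formulation, not a quotation from his paper):

    % Perfect secrecy: seeing the ciphertext tells an eavesdropper nothing about the message.
    \Pr[\,M = m \mid C = c\,] \;=\; \Pr[\,M = m\,] \qquad \text{for every message } m \text{ and ciphertext } c
    % Shannon's accompanying bound: the key must carry at least as much uncertainty
    % (entropy) as the message itself, hence a key "as long as the message."
    H(K) \;\ge\; H(M)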

Fortunately, cryptographers eventually realized that the overhead requirements could be drastically reduced, in exchange for what was, in effect, only a theoretical reduction in security. The savings came from two reasonable assumptions: that an attacker cannot carry out an arbitrarily large number of computations in a given time, and that such an attacker may be allowed a tiny, but non-zero, chance of actually breaking the encryption.
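In today's textbook language, those two assumptions are folded into a single, hedged guarantee, roughly of the following form (the 128-bit figure below is just an illustrative choice of key size):

    % Computational security: an attacker A restricted to running time t succeeds
    % with at most a tiny probability, rather than exactly zero as with the one-time pad.
    \Pr\bigl[\,A \text{ breaks the scheme within time } t\,\bigr] \;\le\; \varepsilon(t),
    \qquad \text{e.g. } \varepsilon(t) \approx \frac{t}{2^{128}} \ \text{for exhaustive search of a 128-bit key}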

Subtle as these assumptions are, they set the stage for a cryptography revolution. The amount of key material demanded by the one-time pad scheme turned out to be a waste of resources—encryption that was almost as strong could be achieved with short, reusable keys. Today, a set of encryption schemes standardized in 2001 (built around the Advanced Encryption Standard, or AES) allows movies to be encrypted and streamed in real time, even on relatively small devices, such as tablets and smartphones, at almost no computing cost.
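To get a feel for how little ceremony this now takes, here is a minimal sketch using one widely deployed AES mode (AES-GCM) through the third-party Python cryptography package; it is an illustration of the idea, not the exact code any particular streaming service runs:

    # Minimal symmetric encryption with AES-GCM, via the third-party
    # "cryptography" package (pip install cryptography). One short,
    # reusable 256-bit key protects any amount of data at very low cost.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 32 bytes, reusable across messages
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # fresh nonce per message (never reuse with the same key)
    plaintext = b"a chunk of a streaming movie"
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext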

With the easing of one type of overhead, however, came another. The AES-based schemes are all symmetric encryption schemes, meaning that the key for encoding a message is the same as the one for decoding it. The difficulty is that the sender and receiver need to agree on the key ahead of time, which is feasible if there are only a few people involved. However, modern-day communication happens among large, dispersed groups of parties. If each sender needed to arrange a unique key with each receiver, nothing like the quantity of information now exchanged on the Internet would be possible.
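A little arithmetic shows why: with a separate symmetric key for every pair of parties, the number of keys to negotiate and protect grows quadratically.

    % n parties, one shared key per pair:
    \binom{n}{2} \;=\; \frac{n(n-1)}{2},
    \qquad \text{e.g. } n = 10^{6} \ \Rightarrow\ \approx 5 \times 10^{11} \text{ pre-arranged keys}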

The solution to the problem lies in so-called public-key or asymmetric-key cryptography, first suggested in the seminal work of Whitfield Diffie and Martin Hellman in 1976. Metaphorically, in an asymmetric scheme, Alice sends Bob an encrypted message by first asking him to send her an open padlock. This padlock can be intercepted and even “copied” by an attacker, Eve. Alice then puts her message into a box, closes it with Bob’s padlock, and sends it back to him. Eve can observe the locked box, but can’t open it because she doesn’t have Bob’s key. Bob then opens it with his private key, and reads the message. Alice and Bob did not have to secretly exchange keys, or even know who they were communicating with in advance. In addition, since Bob’s padlock is not specific to Alice, it can be reused by other parties wishing to communicate with Bob. The process effectively requires two keys: The decryption key is known only by Bob, but the encryption key (represented by the padlock) is public.
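In code, the padlock metaphor is only a few lines. The sketch below uses RSA with OAEP padding via the same Python cryptography package; it is a simplified illustration of the idea rather than a production protocol:

    # The padlock idea with RSA-OAEP. Bob publishes public_key (the open padlock)
    # and keeps private_key (the key that opens it) to himself.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # Bob's secret
    public_key = private_key.public_key()                                         # Bob's padlock, safe to publish

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"Meet me at noon. Alice", oaep)               # Alice locks the box
    assert private_key.decrypt(ciphertext, oaep) == b"Meet me at noon. Alice"      # only Bob can open it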

This solves the problem of distributing keys among many dispersed parties. But it introduces another problem: For short messages, public-key cryptosystems are as much as 100 times slower than symmetric schemes. That's because it is easier to communicate in private when Alice and Bob share a secret that Eve doesn't know than it is when Eve knows everything that Alice knows, and only Bob holds the secret. But here, too, researchers have found an answer. For longer messages, we can first encrypt a short, new key with the public-key approach, and then use that key to encrypt the message under a much faster symmetric (private-key) scheme. Similarly, for many consecutive short messages, the same freshly exchanged key can be used for the entire duration of the communication, and then erased when the communication is over. This is called the “ephemeral key” approach.
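Continuing the previous sketch, the hybrid “ephemeral key” idea looks roughly like this (again an illustration, reusing the public_key, private_key, and oaep objects defined above):

    # Hybrid sketch: use the slow public-key step once, to ship a short fresh
    # AES key, then use fast symmetric encryption for the bulk of the traffic.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    session_key = AESGCM.generate_key(bit_length=256)          # short ephemeral key
    wrapped_key = public_key.encrypt(session_key, oaep)        # the only slow, public-key operation

    aesgcm = AESGCM(session_key)
    nonce = os.urandom(12)
    bulk_ciphertext = aesgcm.encrypt(nonce, b"...many megabytes of data...", None)

    # Bob recovers the session key once, then decrypts everything quickly.
    receiver = AESGCM(private_key.decrypt(wrapped_key, oaep))
    assert receiver.decrypt(nonce, bulk_ciphertext, None) == b"...many megabytes of data..."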

So public-key systems solve a lot of problems. Nevertheless, their use is limited by yet another key distribution problem: making sure that the right public keys are used. If an attacker can convince someone to use a rogue public key, it is possible for him to gain access to privileged information. Currently, the authentication of public keys is performed by Certificate Authorities (CAs), which verify the identity of the public-key owner and issue a certificate saying so. These authorities need to ensure that they do not issue certificates to the wrong users, or compromise the secrecy of their own signing keys.
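You can peek at this machinery yourself. The short sketch below (with example.com standing in for any site you like) fetches a web server's certificate and prints who it names and which CA vouched for it, using Python's standard ssl module together with the cryptography package:

    # Fetch a server's (leaf) certificate and inspect what the CA has certified.
    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("example.com", 443))   # leaf certificate only
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("Subject:", cert.subject.rfc4514_string())   # whose public key this is
    print("Issuer: ", cert.issuer.rfc4514_string())    # the CA that signed (vouched for) it
    print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)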

Unfortunately, this doesn't always happen. The major browsers (Firefox, Chrome, Internet Explorer, and Safari) include a “hardwired” list of trusted CAs, called “root CAs.” Root CAs are allowed to certify smaller CAs. According to the SSL Observatory project run by the Electronic Frontier Foundation, in early 2013, Mozilla, Microsoft, and Google had certified about 150 root CAs, which in turn certified about 1,500 other CAs as entities that average users should unconditionally trust. Needless to say, most of these entities are completely unknown to average users, and the odds that all of them will avoid security compromises appear alarmingly low.

In fact, in the last few years several high-profile scandals have surfaced around various CAs. In 2011, a fraudulent Google certificate was signed under the key of a Dutch CA called DigiNotar, a certificate that was then used (by still-unknown parties) for serious “man-in-the-middle” attacks on Google users in Iran. Around the same time, in August 2011, the Turkish CA TURKTRUST “accidentally certified” (by its own account) two intermediate CAs, a mistake it discovered only later; one of them was used to sign fake Google certificates.

In principle, public-key systems with proper CAs provide extremely strong protection. But the architecture around that protection is itself vulnerable. The Edward Snowden leaks drove this point home. They showed us that the most successful large-scale attacks against cryptography, including the bulk of the mass surveillance program allegedly run by the NSA against American citizens, access privileged information in cleartext form: either before encryption, or after decryption, or simply sitting unencrypted in places that were assumed to be secure. Other attacks have exploited imperfectly random key generation.

A physical parallel to these software attacks is the class of so-called “side channel” attacks, in which the intruder gets access to some (seemingly benign) physical information surrounding a normal run of a given cryptographic system, such as timing information, power consumption, acoustic data, or cache misses. Quite remarkably, by carefully comparing this information with the specifications of the cryptographic algorithm being used, the attacker can often completely recover the secret key. These attacks were pioneered by Paul Kocher in 1996. He exploited the fact that the running time of the popular public-key cryptosystem, RSA, is highly correlated with the contents of its secret key (specifically, the number of 1s in its binary representation). By manipulating public inputs to the RSA algorithm, he used repeated executions with the same key to perform a statistical correlation analysis, and recovered the key completely.
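A toy experiment makes the leak tangible. The sketch below is not Kocher's actual attack, just a demonstration that a naive square-and-multiply exponentiation, the core operation in RSA, runs measurably longer when the secret exponent contains more 1 bits:

    # Toy illustration of the timing leak: naive square-and-multiply does an extra
    # multiplication for every 1 bit of the secret exponent, so running time
    # reveals how many 1s the key contains.
    import time

    def naive_modexp(base, exponent, modulus):
        result = 1
        while exponent:
            if exponent & 1:                 # extra work only when the key bit is 1
                result = (result * base) % modulus
            base = (base * base) % modulus
            exponent >>= 1
        return result

    modulus = (1 << 2048) - 159              # stand-in modulus, not a real RSA key
    sparse_key = 1 << 2047                   # exponent with a single 1 bit
    dense_key = (1 << 2048) - 1              # exponent with all 1 bits

    for name, key in [("sparse", sparse_key), ("dense", dense_key)]:
        start = time.perf_counter()
        naive_modexp(12345, key, modulus)
        print(name, f"{time.perf_counter() - start:.4f} s")   # the dense key takes noticeably longer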

The cryptographic research community, recognizing that security relies on more than code strength alone, is broadening modern-day systems to protect against non-cryptographic attacks. New “leakage-resilient” cryptosystems are able to resist large classes of side channel attacks. Cryptographers have also developed tools to meaningfully compute on encrypted data, without requiring any decryption steps at all (until the very last step, when the final result is decrypted). In a scheme called “fully homomorphic encryption,” it is possible for a user to send an encrypted query to Google and receive an encrypted reply, without Google having any idea about the actual search that was carried out (these algorithms are too slow to be feasible at the moment, but are being worked on). Researchers are also studying how to secure, control, and compute on data stored in the cloud, and starting entirely new initiatives, like anonymous credentials and privacy-preserving data mining.
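To get a small taste of computing on encrypted data, note that even textbook (unpadded) RSA has a homomorphic property: multiplying two ciphertexts multiplies the hidden plaintexts. The toy sketch below illustrates only that principle; it is emphatically not secure, and real fully homomorphic schemes, which also support addition and arbitrary computations, are far more elaborate:

    # Toy multiplicative homomorphism of textbook RSA (NOT secure; illustration only).
    p, q = 61, 53                      # toy primes, far too small for real use
    n, e, d = p * q, 17, 413           # 17 * 413 = 7021, which is 1 mod lcm(60, 52) = 780

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    c1, c2 = enc(7), enc(6)
    product_cipher = (c1 * c2) % n     # the "computation" happens on ciphertexts alone
    assert dec(product_cipher) == 42   # decrypts to 7 * 6, which was never seen in the clear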

These challenges represent a significant broadening and expansion of the mission of cryptography research—often with the goal of extending capabilities well beyond traditional encryption and authentication, and reducing the overhead involved in implementation. We have come a long way since the VENONA project and single-use keys, and continue to live the lesson that security should not just be strong, but also seamless.

Yevgeniy Dodis is a professor of computer science at New York University, has co-authored more than 100 scientific publications, and is the recipient of a National Science Foundation CAREER Award, Faculty Awards from IBM, Google, and VMware, and the Best Paper Award at the 2005 Public Key Cryptography Conference.
