Reprinted with permission from Quanta Magazine’s Abstractions blog.
In the 1950s, four mathematically minded U.S. Army soldiers used primitive electronic calculators to work out the optimal strategy for playing blackjack. Their results, later published in the Journal of the American Statistical Association, detailed the best decision a player could make for every situation encountered in the game.
Yet that strategy—which would evolve into what gamblers call “the book”—did not guarantee a player would win. Blackjack, along with solitaire, checkers and any number of other games, has a ceiling on the percentage of games in which players can expect to triumph, even if they play the absolute best that the game can be played.
But for a particularly strange variant of games, it’s impossible to compute this maximum-win probability. Instead, mathematicians and computer scientists are trying to determine whether it’s possible even to approximate maximum-win probabilities for these games. And whether that possibility exists hinges on the compatibility of two very different ways of thinking about physics.
These “nonlocal” games were conceived in the 1960s by the physicist John Stewart Bell as a way to understand the bizarre quantum phenomenon called entanglement. While quantum entanglement is complicated, nonlocal games are not. You have two players, each of whom is asked a simple question. They win the game if their answers are coordinated in a certain way. Unfortunately they can’t communicate with each other, so each has to guess how the other is going to answer. Bell proved that if the players were able to share pairs of entangled quantum particles, they could enhance the correlations between their answers and win the games at higher than expected rates.
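The article doesn't name a particular game, but the canonical example is the CHSH game (after Clauser, Horne, Shimony and Holt): a referee sends each player a random bit, and the players win if the XOR of their answer bits equals the AND of their question bits. A minimal Python sketch shows why unentangled players top out at a 75 percent win rate: it simply enumerates every deterministic strategy, and shared randomness can't do better than the best deterministic choice.

```python
from itertools import product

def classical_chsh_max():
    """Best win rate in the CHSH game for players with no entanglement.

    Without entanglement, each player's answer can depend only on their
    own question bit, so a strategy is just a lookup table: Alice's
    answers (a0, a1) for questions x = 0, 1 and Bob's (b0, b1) for
    y = 0, 1. There are only 2**4 = 16 such tables to check.
    """
    best = 0.0
    for a0, a1, b0, b1 in product([0, 1], repeat=4):
        # Average over the four equally likely question pairs (x, y);
        # the players win when a XOR b equals x AND y.
        wins = sum(
            ((a0 if x == 0 else a1) ^ (b0 if y == 0 else b1)) == (x & y)
            for x, y in product([0, 1], repeat=2)
        )
        best = max(best, wins / 4)
    return best

print(classical_chsh_max())  # 0.75
```

Sharing a pair of entangled particles lifts this ceiling to cos²(π/8), roughly 85.4 percent: exactly the kind of better-than-expected correlation Bell's result describes.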
Over the past few years, researchers have elaborated on Bell’s setup, as I wrote in the recent article “The Universe’s Ultimate Complexity Revealed by Simple Quantum Games.” A 2016 paper by William Slofstra and a 2018 paper by Andrea Coladangelo and Jalex Stark proved that for some nonlocal games, the more pairs of entangled quantum particles the players share, the better they can play. This relationship holds indefinitely, meaning that players need infinite pairs of entangled particles (or entangled pairs with an infinite number of independent properties) to play nonlocal games the very best they can be played.
One consequence of these results is that it’s impossible to compute the maximum-win probability for some nonlocal games. Computers can’t work with infinite quantities, so if the perfect algorithmic strategy requires an infinite number of entangled particles, then the computer can’t calculate how often that strategy pays off.
“There is no general algorithm that, if you just put in a description of a game, will output the maximum success probability,” said Henry Yuen, a theoretical computer scientist at the University of Toronto.
But if we can’t know the maximum-win probability exactly, can we at least compute it within, say, a few percentage points?
Mathematicians have been hard at work on the question. Strangely, their approach depends on the compatibility of two very different ways of thinking about physics.
Recall that the two players in a nonlocal game need to be kept from coordinating their answers. There are two ways to ensure this. The first is to physically isolate the players from each other—to place them in their own separate rooms or on opposite ends of the universe. This spatial isolation provides a guarantee that they can’t communicate. Researchers analyze this situation using what’s called the “tensor product” model (referring to mathematical objects called tensors).
But there’s another way to ensure the players can’t conspire on their answers. Instead of separating them, you impose a different requirement: The order in which the two players measure their entangled particles and give their answers can’t affect the answers they give. “If the order in which they do their measurements doesn’t matter, then they clearly can’t be communicating,” Yuen said.
In mathematics, when the order in which things are done doesn’t affect the final answer, you say that the operations commute: a × b = b × a. This way of thinking about nonlocal games—based on order independence instead of spatial separation—is called the “commuting operator” model.
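Matrices make this concrete (an illustration, not something from the article): quantum measurements are represented by matrices, and matrix multiplication need not commute. A small NumPy sketch contrasts two measurements on the same qubit, whose order matters, with one measurement per player, whose order cannot matter:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # Pauli X: measure a qubit along one axis
Z = np.array([[1, 0], [0, -1]])  # Pauli Z: measure along a different axis
I = np.eye(2)

# Applied to the same qubit, the two measurements do not commute:
print(np.array_equal(X @ Z, Z @ X))  # False

# But if Alice measures her qubit and Bob measures his, the two joint
# operations commute -- the order can't matter, which is exactly the
# independence condition the commuting operator model imposes.
A = np.kron(X, I)  # Alice acts on the first qubit of the shared pair
B = np.kron(I, Z)  # Bob acts on the second qubit
print(np.array_equal(A @ B, B @ A))  # True
```

In the tensor product model the commuting property comes for free, because each player's operator acts on a separate factor of the shared system; the commuting operator model keeps only the commuting requirement and drops the separate factors.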
The tensor product and commuting operator models are used in physics, particularly in the study of interactions between subatomic particles in an area of research called quantum field theory. The two models are different ways of thinking about what it means for physical events to be causally independent of each other. And while the tensor product model is more intuitive—our mind’s eye tends to picture causal independence in terms of physical separation—the commuting operator model provides a more coherent mathematical framework. This is because “spatial independence” is a kind of fuzzy idea, while a commuting relationship can be pinned down exactly.
“For people who study quantum field theory, this notion of having spatially separate things is not a natural notion,” Yuen said. “At a mathematical level it’s not a given that you can really put two independent things in two separate locations in the universe.”
Here’s what this all has to do with nonlocal games.
Computer scientists can use the tensor product model to calculate a floor for the maximum-win probability of nonlocal games. The algorithm they use guarantees that the maximum-win probability is above a certain threshold. Similarly, researchers can use the commuting operator model to establish a ceiling: that algorithm guarantees that the maximum-win probability lies below some threshold.
With these tools in hand, researchers want to squeeze these limits as close together as they can, like two pistons. They know they can’t make these two limits touch to produce a single exact maximum-win probability—recent work by Slofstra, Coladangelo, and Stark proved that the exact maximum-win probability is incalculable—but the closer they can bring them together, the more precisely they can approximate the maximum-win probability.
And indeed, the longer these algorithms run, the more the two pistons appear to come together, producing finer and finer approximations around an ineffable middle value that they’ll never actually reach. Yet it’s unclear whether this observed convergence continues indefinitely. “These algorithms are completely mysterious. It’s not a gradual, smooth improvement on the numbers. We just don’t understand how fast they converge,” Yuen said.
This piston strategy is premised on the two models being equivalent. It assumes that the ceiling and the floor squeeze a value in the middle. If the two models are in fact equivalent, then the two pistons really are on track to get arbitrarily closer together. (And by implication, if you can prove the pistons are on track to get arbitrarily closer together, you’ve also proven that the two models are equivalent.)
But it’s possible the two models are not different ways of representing the same thing. It’s possible that they’re different, incommensurate, and that the ceiling will always remain strictly above the floor, leaving a gap that no amount of computation can close. In this case, computer scientists would lose their best strategy for approximating maximum-win probabilities. Unfortunately, no one knows for sure.
Over the last couple of years, the biggest progress has come in the form of two proofs that establish just how hard the problem is to solve.
In 2018 Thomas Vidick and Anand Natarajan proved that approximating maximum-win probabilities for nonlocal games is at least as hard as solving other notoriously difficult puzzles such as the traveling salesman problem. Also in 2018, Yuen, Vidick, Joseph Fitzsimons, and Zhengfeng Ji proved that as the pistons close in on each other, the computational resources required to push them closer together grow exponentially.
In yet another twist to the story, the question of whether the two models are equivalent is a direct analogue of an important and difficult open problem in pure mathematics called the Connes embedding conjecture. This puts mathematicians and computer scientists in a three-birds-with-one-stone kind of situation: By proving that the tensor product and commuting operator models are equivalent, they’d simultaneously generate an algorithm for computing approximate maximum-win probabilities and also establish the truth of the Connes embedding conjecture. The achievement would win supreme plaudits across all the related fields.
Which is to say, fittingly, all the questions are deeply entangled.
Kevin Hartnett is a senior writer at Quanta Magazine covering mathematics and computer science. His work has been collected in the “Best Writing on Mathematics” series in 2013, 2016 and 2017. From 2013 to 2016 he wrote “Brainiac,” a weekly column for the Boston Globe’s Ideas section.