Scott Aaronson, theoretical computer scientist and professor at the Massachusetts Institute of Technology (MIT), runs a popular blog called “Shtetl-Optimized,” a curious title given the blog’s focus on computational complexity. When I asked Aaronson about the connection, he replied that he saw himself as someone designed for a different era—like the 19th-century Jewish village, or shtetl, from which he is descended, and where studying was, for many, the central activity of life.

Completing his undergraduate studies at age 18 and earning tenure at MIT at age 31, Aaronson has certainly made study a central part of his own life. But it’s not just computer science that draws his interest. His book, Quantum Computing Since Democritus, touches on consciousness, free will, and time travel. A recent discussion on his blog about gender roles in science has drawn 609 comments as of this writing. And he does not shy away from public debate, having become one of the most persistent critics of claims by the startup D-Wave Systems that it is selling operational quantum computers. Why not just turn a blind eye and let those claims slide? “This is just not who I am,” says Aaronson.

In person, Aaronson is animated, self-deprecating, and thoughtful to a tee—all in the finest shtetl tradition.



You’ve suggested a new way to teach quantum mechanics. How?

How has quantum mechanics changed our understanding of information?

What is the P versus NP problem, and why is it important?

How many attempted proofs for P=NP have you seen?

What would the effect be of a proof showing that P=NP?

This sounds like good fodder for a sci-fi story.

How did the computer drive us to change how we understand information?

Is information letting some measure of teleology back into physics?

Why do you get involved in so many debates?

You’ve been one of the most vocal critics of D-Wave, a company that claims it has built a quantum computer. Why?

Why do you keep calling yourself a pessimist and a curmudgeon?

How did you get involved in a controversy surrounding a laser printer TV ad?

Did your father, who was a science writer, get you into science?

You have a very widely read blog. What motivates you to keep it going?

Why is your blog called Shtetl-Optimized?

What would you be if you weren’t a scientist?

Interview Transcript

You’ve suggested a new way to teach quantum mechanics. How?

If you’re trying to teach quantum mechanics, let’s say to undergraduates—especially those who are good at math but who aren’t physicists, who aren’t immersed in physics—you have to tell a story about how all of quantum mechanics can be thought of as flowing from just a few simple principles. That’s the only way to teach it to people; that’s the only way to get it to make sense. And the truth is that in the case of quantum mechanics, there isn’t this whole diverse collection of facts that you have to learn.

Suppose I told you to define something that’s like probability theory, but based on the 2-norm rather than the 1-norm: I want the basic objects to be not vectors of probabilities (vectors of non-negative real numbers that add up to one), but vectors of amplitudes (complex numbers where the sum of the squares of their absolute values adds up to one). I’m asking for something that’s like probability theory, but based on complex numbers instead of non-negative reals. In asking that, I have basically forced you to invent quantum mechanics. You have very little choice after that point; if you just make all the obvious choices from there forward, you will wind up with quantum mechanics.

The point is that this is just one leap that you have to make. This is the one thing you have to accept, from which everything else will follow. Whereas, when you do it the historical way, you basically get this whole collection of different phenomena; and this is still the way that most popularizations [of quantum mechanics] tend to treat it. It’s something that I always found unsatisfactory. You can’t measure position and momentum at the same time. The electron is in some kind of smear of probability wave around the nucleus, which is somehow not just a fancy way of saying that you don’t know where it is but [actually] something else. Far away particles can be entangled with each other; and particles can tunnel through walls. Schrödinger’s cat can be in a superposition of alive and dead. The electron can have quantized energy levels, so it just jumps up and down … And you just learn this collection; I could rattle off 20 more things like that. And so you just say, “Oh, well, quantum mechanics is just some crazy land where everything violates our intuition.” And just when you think you’ve understood something, there’s yet another crazy thing that still doesn’t make sense.

Actually, all of these things just follow … The state of the world is this vector of amplitudes, and the amplitudes are complex numbers, and they behave differently than the probabilities we’re used to. Everything follows from that one change to the rules of probability. All of these phenomena are just different manifestations of it.
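To make the “one change to the rules of probability” concrete, here is a minimal Python sketch (an illustration, not from the interview, assuming standard NumPy): classical probability vectors are normalized in the 1-norm, quantum amplitude vectors in the 2-norm, and the Born rule turns amplitudes back into ordinary probabilities.

```python
import numpy as np

# Classical bit: non-negative reals that sum to 1 (normalized in the 1-norm).
p = np.array([0.3, 0.7])
assert np.isclose(p.sum(), 1.0) and np.all(p >= 0)

# Qubit: complex amplitudes whose squared magnitudes sum to 1 (the 2-norm).
psi = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])
assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)

# Born rule: measuring in the {0, 1} basis gives outcome k with probability
# |psi[k]|^2, which recovers an ordinary probability vector.
print(np.abs(psi) ** 2)                        # approximately [0.5, 0.5]

# The allowed transformations differ too: stochastic matrices preserve the
# 1-norm, unitary matrices preserve the 2-norm -- and amplitudes can cancel.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)
plus = np.array([1, 1]) / np.sqrt(2)
print(np.abs(H @ plus) ** 2)                   # approximately [1, 0]: destructive interference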

How has quantum mechanics changed our understanding of information?

I would just define information to be a measure of how surprised you are on learning something. It’s a measure of how many different configurations something could be in that you want to treat as distinct. So [for] a single bit of information that could be in two different configurations, when you learn which configuration it is, then you’ve gained one bit of information. Or you’ve reduced your amount of ignorance, or increased your amount of surprise, by one bit.
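A quick numerical sketch (an illustration of the standard definition, not a quote from the interview): learning an outcome that had probability p conveys log2(1/p) bits of information, so a fair coin flip is worth exactly one bit.

```python
import math

def surprise_bits(p: float) -> float:
    """Bits of information gained on learning an outcome of probability p."""
    return math.log2(1.0 / p)

print(surprise_bits(0.5))   # 1.0 -- one bit: which of two equally likely configurations
print(surprise_bits(0.25))  # 2.0 -- two bits: one of four equally likely configurations
print(surprise_bits(1.0))   # 0.0 -- a certainty carries no information
```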

Now classical information has certain properties, like it can always be copied. You can make as many copies of a bit as you want; you can read a bit without changing it. These are so obvious that they’re not usually even stated explicitly. But quantum information does not have those properties.

So quantum information is different. You have a quantum state, which is, let’s say, a superposition of zero and one, and you can measure that state. If you measure it, you’re going to get either a zero or a one. So measurement of a quantum bit is just going to give you a classical bit. But which classical bit you see will be probabilistic, right? You can measure the same state in the same way, and yet sometimes you’ll see a zero and sometimes you’ll see a one. You’ll see each one with a certain probability. The probability of each outcome depends both on the state and on how you choose to measure it. Did you choose to measure the position? Or the momentum? Or, for this qubit, which basis did you choose—did you ask whether it was a zero or a one? Or did you ask whether it was zero plus one or zero minus one, which is also a question that you’re allowed to ask? So quantum information has the property that you get to choose one question to ask. That question has a classical answer that you then get. And once you’ve gotten that answer, the quantum state itself physically changes in order to be consistent with the answer that you just got. So if I measured a qubit that was in a superposition of zero and one and I got a one out, let’s say, then the qubit is now one. So I can measure it a second time and it’s going to say, “Hey, I’m just one, okay.”

I’ve compared it to, you know, there’s a monster under your bed, but every time you look, it goes away. There’s no monster, right. Except that in order to explain the probabilities that you do see, you have to postulate that the monster was there when you weren’t looking. So that’s one property that quantum information has—that you get one chance to measure it and you get one answer and then the rest of the quantum state just disappears. It’s a use-once resource in that sense. But a second closely related property is that there’s no general way to copy quantum information. In fact that’s a theorem. It’s called the no-cloning theorem. There’s no machine that will take a general quantum state and output, for example, two copies of that state.
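A toy simulation (a sketch of the standard quantum rules, not code from the interview) of the behavior described above: you pick one question to ask, that is, one measurement basis; you get one classical bit back with probabilities set by the state; and the state then collapses so that asking the same question again returns the same answer.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, basis):
    """Measure `state` in the orthonormal `basis` (rows are the basis vectors).
    Returns a classical outcome plus the collapsed post-measurement state."""
    probs = np.abs(basis.conj() @ state) ** 2          # Born rule
    outcome = rng.choice(len(probs), p=probs)
    return outcome, basis[outcome]                     # the state "snaps" to the answer

plus = np.array([1, 1]) / np.sqrt(2)                   # (|0> + |1>) / sqrt(2)

z_basis = np.eye(2)                                    # asking "is it 0 or 1?"
x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # asking "is it 0+1 or 0-1?"

outcome, collapsed = measure(plus, z_basis)
print("first  Z measurement:", outcome)                          # 0 or 1, each with prob 1/2
print("second Z measurement:", measure(collapsed, z_basis)[0])   # always repeats the first
print("X measurement of |+>:", measure(plus, x_basis)[0])        # always 0, i.e. "0 + 1"
```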

People like to talk about how information wants to be free. It’s always going to be counterproductive to have copy-protected software or copy-protected movies or things like that because people will just always make copies of information, if it’s out there. But quantum information is not like that. Quantum information is more like traditional economic commodities, like gold or oil or something. 

What is the P versus NP problem, and why is it important?

The P versus NP question is basically the question of: If you can program your computer to efficiently recognize some pattern, or the solution to some problem, say, then can you also program your computer to efficiently find the solution to that problem? Think, for example, of a Sudoku puzzle—or imagine a thousand-by-thousand Sudoku puzzle, a really enormous one—you start filling it in and there’s just this astronomical number of possibilities to try. It’s not that a computer could never be programmed to do it, because at worst you could just program your computer to try every possible assignment of numbers to all million of these squares, but this would take much, much longer than the age of the universe. The whole earth will have disintegrated before your computer would have made a dent in the problem. On the other hand, if someone says, “Here is the solution. I’ve solved it. Here it is,” then you can program your computer to very, very quickly check whether the solution is correct or not. You just check each square and make sure that it’s okay.

So very often, when we’re dealing with puzzles, there’s this difference between finding and verifying. Finding a solution takes a number of steps that grows exponentially with the size of the problem: with each new square you add, the number of possibilities you would have to check doubles, or worse. [In contrast], checking a solution only takes a number of steps that increases, as we say, polynomially with the size of the problem. It increases like the size of the problem raised to some fixed power, like one or two, and this is sort of our rough-and-ready criterion for “efficient” in computer science: that the amount of time you need increases polynomially with the problem size. So basically, P is the class of all the problems that a standard digital computer can solve in a polynomial amount of time. P stands for polynomial time and NP stands for nondeterministic polynomial time. Don’t worry too much [about] what that means, but it’s basically the class of all the problems where, if someone tells you the solution, you can [verify] it in a polynomial amount of time. But finding the solution might take you exponential time.
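To make the finding-versus-verifying gap concrete, here is a toy example (not from the interview) using subset sum, a classic NP problem, as a stand-in for the giant Sudoku: checking a proposed solution takes a handful of steps, while the naive search tries exponentially many possibilities.

```python
from itertools import combinations

def verify(numbers, target, chosen):
    """Polynomial-time check: do the chosen numbers really hit the target?"""
    return sum(chosen) == target and all(c in numbers for c in chosen)

def brute_force_find(numbers, target):
    """Exponential-time search: try every one of the 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(brute_force_find(nums, 9))   # (4, 5), found after trying many subsets
print(verify(nums, 9, (4, 5)))     # True, checked almost instantly
# At n = 6 the brute force is instant anyway; at n = 1,000 its 2^n subsets
# would outlast the universe, while verifying a claimed solution stays easy.
```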

The question of whether P=NP is just the question of: Can every efficiently checkable problem also be efficiently solved? Once people understand what the question is asking, I think most people’s intuition would be: of course not! Like, what are you even asking? But you could say that for the purposes of everyday life it’s perfectly fine to just assume the answer is “no.”

In fact, this has practical importance for cryptography. Almost all the cryptography that we use on the Internet … Any time you order something from Amazon [for example], your credit card number is protected by a cryptographic code that depends, among other things, on P not equaling NP. In fact, it depends on much stronger things than that. For example, it depends on the belief that multiplying two enormous numbers—enormous prime numbers of thousands of digits—is a much, much easier task than factoring the resulting composite number back into primes, which accords with our experience, and as far as mathematicians know, it’s true; but no one has proved it. And if you could find some fast algorithm to factor numbers into primes, then you could break almost all the cryptography that we use on the Internet.
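The asymmetry is easy to see directly. A small sketch (not from the interview; real systems use far larger primes, and the best known factoring algorithms beat trial division, though all of them are still super-polynomial):

```python
import time

p, q = 1_000_003, 1_000_033        # two smallish primes
n = p * q                          # multiplying them: effectively instantaneous

def smallest_factor(n):
    """Naive trial division: the work grows like sqrt(n), i.e. it multiplies
    by ten for every two extra digits in n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

start = time.perf_counter()
f = smallest_factor(n)
print(f, n // f, f"recovered in {time.perf_counter() - start:.2f} s")
# Even this 13-digit product takes measurable work to factor; the numbers
# protecting your credit card have hundreds of digits.
```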

Now, if you could prove P=NP, then that would give you that fast factoring algorithm, and it would also give you 10,000 other things of enormous importance for all sorts of domains besides just cryptography. Like industrial optimization: designing an airplane wing that optimizes the airflow; designing drugs that behave in a way that you want; doing machine learning, training a neural network to recognize patterns. These are all things that could be dramatically sped up, you know, if P=NP. Now, almost all of us in computer science believe that, probably, P is not equal to NP. I like to say that if we were physicists, we would have just declared that to be a law of nature and been done with it. But because we’re mathematicians, we have to admit that, well, no one actually has a proof of this yet. We hope eventually to be able to prove such things. The difficulty is actually rather easy to understand: it’s very hard to prove a negative. It’s very hard to rule out that there could be any fast algorithm for solving all these NP problems that anyone could ever come up with in the future. But this is what’s asked for.

How many attempted proofs for P=NP have you seen?

I do get a proof of P=NP in my inbox about every week, roughly. Those are actually easier to deal with than the proofs of P not equal to NP, because if someone says P=NP, then you can always say, “Well, that’s great. Here’s a 2,000-digit number. Why don’t you send me back the factors of it and then we’ll talk some more.” And then, you know, you don’t hear back from that person. Or they say, “Well, I need you to fund me in order to develop my algorithm further,” or something. Sorry, man. Right?

But if someone says they have a proof that P is not equal to NP then if you really want to refute it, you have to go into it. It’s actually not as hard as you’d imagine because there are just a few errors that tend to recur over and over again. But often, the error is just obfuscated in some mess of notation and it can be annoying to do it, so I actually came up with a protocol where if the person is willing to pay $100 or so, then I’ll have a grad student at MIT go over it and look for the error and correspond with them. And we’ve done that several times.

If this is something that happens to you every week, then the next person claiming they have a proof of P=NP doesn’t even raise your blood pressure. It’s like, “Okay, yeah. That’s great, buddy.” I just had someone call me on the phone the other day and say, “Well, I’ve got a proof of P not equal to NP.” But the normal task is to prove that NP is bigger than P, right? Of course P is contained in NP, because if you can solve a problem, then you can also verify the solution by simply solving it. And he says, “Oh, but I can prove that P is not even contained in NP.” And so I’m like, all right, I don’t think you understand the question.

What would the effect be of a proof showing that P=NP?

So let me try to explain to you what some of the consequences would be if P=NP. And I have to be careful. I have to say, “If P=NP, and furthermore the algorithm were really efficient in practice,” because there’s always this caveat that people will raise: well, it could be that P=NP, but the algorithm for solving an arbitrary NP problem would take like n to the thousandth power or something, and [though] that would be theoretically polynomial, it wouldn’t be efficient in reality. So we have to say: look, when I talk about the consequences of P=NP, I really mean the consequences if P=NP and, moreover, the algorithm were really efficient enough to implement, okay?

I think maybe the most exciting consequence would not be that you could break all the cryptography that we use on the Internet, or anything like that; it wouldn’t even be speeding up the solution of optimization problems. It would really be that, for any set of data, you could find the smallest model that explains that data, and do so efficiently. For example, you could just take all the historical stock market data and then say: What is the set of weights for a neural network that will optimize that network’s ability to predict all the stock market prices that there have been in the past? And then, once you trained that neural network, you could use it to try to predict future stock market prices. Or you could even say: What is the shortest program that, in a small amount of time, will output the complete works of Shakespeare? You could try to reverse engineer intelligence or creativity in these sorts of ways, by just feeding your computer a massive amount of data and [then] giving it the problem: come up with a small model that is able to reproduce this data from scratch. That’s an NP problem. That’s a problem where, if the computer came up with such a model, then it would be a simple matter to verify that it is correct. So if P=NP, then you could go and find the model.

It’s true that there would still be a difficulty, right? Just because you found the best possible compression of Shakespeare’s 37 plays doesn’t [mean] that you could then use the resulting program to write the 38th play, for example. Maybe, or maybe not. But where it gets really cool is that you might say certain types of machine learning techniques would enable you to generalize from what you found, and certain types would not. But [even that very] problem—of coming up with good machine-learning algorithms [that allow you to] generalize from what you found—could be sped up by using the fact that P=NP. So you could say: What is the best design for a neural network where I will be able to learn things, [where] I will be able to feed in lots of data and just create programs that will learn for themselves from that data? Among all the short, efficient programs, which one does the best on this task set? If P=NP, then you can find these things and you can get some self-improvement. You can actually apply P=NP to the problem of finding better ways to exploit the fact that P=NP.
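As a toy version of “find a small model that reproduces the data” (an illustration, not Aaronson’s or Fortnow’s), here is a brute-force search over short arithmetic expressions. Verifying a candidate model against the data is cheap; finding one by enumeration blows up exponentially with the allowed length, which is exactly the kind of search a fast algorithm for NP problems would tame.

```python
from itertools import product

data = [(0, 1), (1, 3), (2, 5), (3, 7)]          # observations of some unknown rule

def fits(expr):
    """Cheap verification: does this candidate reproduce every data point?"""
    try:
        return all(eval(expr, {"n": n}) == y for n, y in data)
    except Exception:
        return False                              # nonsense expressions just fail

TOKENS = ["n", "1", "2", "3", "+", "*"]

def candidates(max_tokens):
    """Enumerate expressions by length -- exponentially many of them."""
    for length in range(1, max_tokens + 1):
        for combo in product(TOKENS, repeat=length):
            yield " ".join(combo)

model = next(expr for expr in candidates(5) if fits(expr))
print(model)                                      # "n + n + 1" -- a shortest model that fits
```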

This sounds like good fodder for a sci-fi story.

Lance Fortnow already did this. He has this book called The Golden Ticket, in which he does have a whole chapter constructing a science fiction scenario where it’s discovered that P=NP. The problem with this is [that] I’ve read a little bit of hard sci-fi—Vernor Vinge, Greg Egan, people like that—and often they want to write about a greater-than-human intelligence, right? And it’s essentially impossible to write any good science fiction about a greater-than-human intelligence, because as soon as you have one, it starts thinking thoughts that you, the writer, are not able to think, by definition, right? I guess what I’m trying to say is: How do you write fiction about the singularity? People like to speculate about this technological singularity where computers become more intelligent than humans, and then they start optimizing their goal, or optimizing their utility function—whatever it happens to be—and people have come up with whimsical examples. Maybe the AI will have the goal of just converting the entire observable universe into paperclips, and will just devote immense intelligence, of a kind we can’t even conceive of, toward that goal. And there will just be this ball of paperclips expanding outward from the earth at the speed of light. This is Eliezer Yudkowsky’s example.

But the truth is that it’s sort of like a dog trying to write fiction about us, right? In fact, the fiction that you can read about this kind of subject just tends to be totally unconvincing. The same thing with fiction that involves extraterrestrials who are supposed to be much, much smarter than humans; in fact, when you read about them, they’re never actually smarter than the person who wrote the book. So with P=NP, I feel almost similarly. I could try to talk about the first few steps of what would happen. Okay, someone breaks all the cryptography on the Internet. Maybe they tell the NSA about it, or maybe they just keep it to themselves. And maybe they try to extort money. There have been some crummy TV shows and stuff that have revolved around this kind of idea, right? And you can tell stories, you know, of someone designing better drugs to fight cancer—this is what Lance Fortnow does in his book. But then very quickly you get to the stage where efficient algorithms, algorithms that exist because P=NP, are being used to design even better algorithms or better machine learning methods and so on. And what happens after that point, I think, is very hard for us to say.

How did the computer drive us to change how we understand information?

The development of the digital computer is really the thing that caused us to think explicitly about information, much more than we ever had before, and develop explicit theories about information processing. Often, there are people who will say, “Oh, well, you talk about the brain as a computer, but that’s just because computers happen to be the technology of our particular era and before that, people would say that the brain is a clockwork because clocks were the technology of that era and so forth.” People will say this like [it’s a] very, very wise thing to say and it gives you a view of the computer as just this one passing fad.

The problem with thinking about it [in] that way is that the computer is, by definition, essentially the universal machine. It is the machine whose function is to be able to simulate any other machine in the universe. So in that sense, the computer is not just another technology, right? It’s not like a toaster or something, where it just has this one thing that it does. And this is a theoretical point that [Alan] Turing appreciated, that he proved in his great paper of 1936. One of the things he proved there is that you can create a single Turing machine that will emulate any other Turing machine [that] is described to it in its memory.

I like to call that the “existence of the software industry” lemma. This is what says that you can have software. But this is also something practical. We know that we don’t have to buy a different computer for playing games and for word processing and for email and for browsing the web and for all the other things we want to do with a computer. I mean, we can, but we don’t have to. And in fact, we used to have to carry all different sorts of devices around with us—like maybe a map to read, and a phone, and a compass, and [a] notepad; and now you can just take your smartphone. And because of universality, because the computer is a universal device, you can have this one device that does everything for you. Not only everything that we’ve thought of up to this point, but [also] all sorts of future things that haven’t been invented yet. Whatever it is, there will probably be an app for it.
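The universality point can be shown in miniature. Here is a deliberately tiny sketch (a toy instruction set, not a real Turing machine, and not from the interview): one fixed interpreter whose behavior is determined entirely by the “machine” handed to it as data, which is the whole idea behind having software at all.

```python
def universal(program, x):
    """One fixed machine that simulates any machine described to it as data."""
    acc = x
    for op, arg in program:                # the "description in memory"
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        elif op == "sub":
            acc -= arg
    return acc

# Two different "machines", expressed purely as data -- no new hardware needed.
double_then_increment = [("mul", 2), ("add", 1)]
celsius_to_fahrenheit = [("mul", 1.8), ("add", 32)]

print(universal(double_then_increment, 10))     # 21
print(universal(celsius_to_fahrenheit, 100.0))  # 212.0
```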

Sometimes, people will also lament that the amount of technological innovation in society actually seems to have decreased since maybe the 1950s or ’60s. There aren’t people thinking big about things like space travel or massive infrastructure projects or totally new energy sources in the way that they were in the 1950s and ’60s. And they’ll usually say the only place where innovation is still happening is in computers. That’s not really true, but [they’ll say] the only place where you really see this really strong culture of building new things is in software.

What I’m trying to say is that software is like the limiting point of any technology. You take it far enough and then you want to put computers there, because computers are universal machines. So they’re going to be useful just about anywhere. And then the problem becomes: how do you program those computers?

Is information letting some measure of teleology back into physics?

So Steven Pinker, in one of his books, used the example of: How can we explain why one person is going to meet another one? Let’s say they arranged to meet in a hotel lobby at 10:40 a.m., and then the person shows up there. Naively, we would say, well, you expect this person to be there because you talked to them and they said that they would show up and that was their intention. They’re going to take actions in order to carry out their intention—like getting into a taxi and telling the driver the name of the hotel—and so that’s why they’ll be there.

But then you learn a little bit and you could say, “Oh, well, that’s all just mystical talk. All these intentions don’t really exist. What’s really going on is that certain neurons fired in a certain pattern, and there were certain chemical reactions and certain movements in the physical world, and that led to the outcome you’re talking about.” And then you learn more and you say, well, that way of describing it is no truer than the other way, and it’s much less useful. The intentional way of describing it is equally true, and it’s more appropriate for what we’re talking about. You might as well just say that the person showed up because that’s what they wanted to do, because you’re not introducing teleology into the laws of physics by saying that. You’re using a powerful shorthand. You’re using a powerful expressive language for talking about particular physical objects, like brains, that are organized in certain ways so as to have intentions. And if that language is available to you, you might as well use it.

Why do you get involved in so many debates?

Look, there are cases where I can sort of recognize the wisdom of holding back, the wisdom of not saying something, but then I simply don’t have that wisdom. Maybe I would be a better person if I had it. But there are cases where even I can easily recognize that something is true, that I’m definitely right, but that it would still be a really bad idea to say it out loud. Like, if someone says something that’s really stupid in a discussion, you don’t have to say, “You’re a stupid idiot,” even if that’s the case. Because there is no reason to attack people; it doesn’t serve any purpose. But then, when falsehoods actually grow fangs and are being used to do bad in the world, right, then I think the calculus changes.

You’ve been one of the most vocal critics of D-Wave, a company that claims it has built a quantum computer. Why?

Well, in some sense, it wasn’t even really my choice to do this. My first instinct was to just leave them be, leave them to do their own thing, and just wish them success. Hope they come up with something good. But the problem came when D-Wave started making these huge announcements to the press that, in the public mind, came to sort of define what quantum computing was all about, because they were the ones out there saying, “We’ve actually built practical quantum computers and we’re selling them.” And this would get reported, like, unbelievably uncritically. Third-grade kinds of errors would just get reprinted totally uncritically. And then, because I happen to have this blog about quantum computing, and because, again, I just happen to be someone who lacks the wisdom to keep his mouth shut, people would keep emailing me saying, “Well, look, did you see that D-Wave made another announcement? Are you going to respond to this?” And so then it would become like a challenge to me: if I don’t respond, then I’m implicitly accepting that what they’re saying is true. So I have to respond.

I started out with, I think, one little FAQ that I put up about D-Wave, just trying to clarify the situation. But then even the mildest things that I would say would get interpreted as, “A-ha! This elitist ivory-tower academic is attacking the company that’s building real quantum computers,” and blah blah blah. It really surprised me that people would take such a hard ideological line on it, because it seems obvious that if you’re in the business world, you want to know: “What does the thing actually do?”

Why do you keep calling yourself a pessimist and a curmudgeon?

Well, I mean, people just seem to treat me that way. When I talk about D-Wave, for example, I feel like I do it in a fairly even-handed way. Obviously, the burden is on them to provide the evidence for the claims that they’re making, and obviously it’s our role as scientists to be skeptical about it, but then that gets interpreted by other people as my being a curmudgeon. You know, if you get called that enough times, eventually you say, all right, fine, I’ll own it. I’ll wear that label with pride. But it’s not how I feel.

How did you get involved in a controversy surrounding a laser printer TV ad?

Well, there was a printer commercial that aired in Australia eight years ago, I guess. The commercial involved two supermodels talking at a bar, and one of them says, “Well, but if quantum mechanics is not about particles or waves or matter or energy, what is it about?” and the second one says, “Well, from my perspective, it’s about information and probabilities and observables and how they relate to each other.” And then the first one says, “That’s interesting.” And then it shows a Ricoh printer and it says, “A more intelligent model.” Kind of a dumb joke, but then someone who reads my blog and who lives in Australia sent me a link where I could watch the commercial on YouTube, and they asked: Aren’t they just plagiarizing your lecture notes? And, you know, in fact, yeah, they were. The two lines in the commercial that didn’t come from my lecture notes were “That’s interesting” and “A more intelligent model.”

I was amused. I wasn’t really sure how to respond, so I just put up a blog post called “Australian actresses are plagiarizing my quantum mechanics lecture to sell printers.” I said I had tried to think of a witty title but really couldn’t improve on the actual situation, gave the link, and asked: What do people think I should do about this? Should I be flattered? Should I be calling a lawyer? And then this was kind of the first thing I ever did on my blog that blew up in a way that I hadn’t expected. I think the next day it was in The Sydney Morning Herald and various other newspapers, and I was in Latvia at the time, visiting a colleague, but I got calls in my hotel room there from journalists, because—“MIT professor accuses an ad agency of plagiarism!” This is one of the things that I’ve learned, by the way: since coming here, there’s the disadvantage that I can never just be an individual doing something. It’s always “MIT professor does such and such,” right? Which is a type of responsibility that I don’t really want.

Did your father, who was a science writer, get you into science?

Well, I think it was because of him that I was exposed from a very early age to the fact that these sorts of things were out there. When [Arno Allan] Penzias and [Robert Woodrow] Wilson won the Nobel Prize in Physics in the 1970s for discovering the cosmic microwave background radiation, [my father] was a writer for Bell Labs, and it was his job, I guess, to turn that into good PR for Bell Labs. He knew these people; he interviewed Steven Weinberg and John Wheeler. So he had talked to these physicists, and it was very much part of the atmosphere.

Even when I was 3 or 4 or 5, he would be telling me about the speed of light, that it’s 186,000 miles per second—I was very interested in specific numbers at that time—and that you can approach it but you can’t exceed it, and he’d tell me about what the big bang was and how long ago it was. I mean, just very, very basic things. He wasn’t a scientist, but that was certainly enough to make me curious about it.

And I think the other thing that he did is that he helped me a lot with writing. I still feel much, much more comfortable expressing myself in writing than I do in speaking. You can probably hear right now that I’m not the most fluid speaker in the world, and I know that. But I’m more comfortable writing. This is one reason why I like writing a blog. From a very, very early age, he would critique my writing and say: No, this is too verbose. Why are you saying it this way? You already said that before. There’s no reason to put this there. So he would make me think about how I expressed myself in writing, and I think that was also important for me.

You have a very widely read blog. What motivates you to keep it going?

If I’m writing a blog post, it doesn’t necessarily have the deepest insights, and I don’t have to spend months thinking about exactly what I’m going to say the way I would if I were writing a research paper. But, on the other hand, if I’m just a little bit more right than the prevailing discourse that’s out there, which is really not very hard to be, then I can have a very big impact: thousands of people will read it, orders of magnitude more people than will read a research paper, even though the research paper would have taken much, much more time to write. So for that reason there’s always the temptation—like if I’m writing a research paper and it’s really hard and I want to, you know, procrastinate a little—there’s always this temptation that, hey, I can write a blog post and it will just take me a day and I’ll get immediate feedback. People will immediately start tweeting about it, leaving comments, reacting to it. And I have something to say [that] I think is true and that is not even all that hard for me to articulate, and so it’s like instant gratification, an instant small contribution to the world.

Why is your blog called Shtetl-Optimized?

I sort of always thought of myself as someone who was designed for a different era. I would read about my great-grandparents, or my ancestors, who lived in these shtetls in Europe, which were these Jewish villages, and it would just be obvious to everyone that studying was this very high calling, where you could just do it all day long and all the other things in life would just kind of take care of themselves automatically. And I’ve always felt like that, even though that’s not exactly how it is.

What would you be if you weren’t a scientist?

Probably some kind of writer. I’ve tried fiction writing; that’s a very difficult craft, and I don’t know if I could really succeed at it. But I feel like I could be a popular science writer. The other thing that I thought about when I was a teenager was going into the software industry, going to Silicon Valley. I got into computer science because I liked programming, because I wanted to make my own video games. But then, you know, it just took working on a couple of really large software projects to disabuse me of that.
