The Presidential Medal of Freedom, America’s highest civilian honor, is usually associated with famous awardees—people like Bruce Springsteen, Stephen Hawking, and Sandra Day O’Connor. So as a computer scientist, I was thrilled to see one of this year’s awards go to a lesser-known pioneer: one Margaret Hamilton.

You might call Hamilton the founding mother of software engineering. In fact, she coined the very term. From her experience building the Apollo flight software, she concluded that the way forward was rigorously specified design, an approach that still underpins many modern software engineering techniques, such as “design by contract” and “statically typed” programming languages. But not all engineers are on board with her vision. Hamilton’s approach represents just one side of a long-standing tug-of-war over the “right way” to develop software.
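
To make those terms concrete, here is a minimal sketch of “design by contract” in TypeScript. The square-root example and function names are my own illustration, not anything from Hamilton’s work: the contract spells out what callers must guarantee and what the code promises in return, so a violated assumption fails loudly at the boundary instead of silently corrupting whatever runs next.

```typescript
// Minimal design-by-contract sketch: the "contract" is written as
// explicit precondition and postcondition checks around the computation.
function requires(cond: boolean, msg: string): void {
  if (!cond) throw new Error(`Precondition violated: ${msg}`);
}

function ensures(cond: boolean, msg: string): void {
  if (!cond) throw new Error(`Postcondition violated: ${msg}`);
}

// Contract: callers must pass a non-negative number; in return, the
// result squared is (within rounding error) the original input.
function sqrtChecked(x: number): number {
  requires(x >= 0, "sqrtChecked expects x >= 0");
  const root = Math.sqrt(x);
  ensures(Math.abs(root * root - x) < 1e-9 * Math.max(1, x), "root * root should equal x");
  return root;
}

sqrtChecked(2);  // fine
sqrtChecked(-1); // fails fast at the call site, not somewhere downstream
```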

When Hamilton first joined the Apollo team in 1961, after developing radar processing software for the military, she was still considered just a junior programmer: One of her early projects, a mission-abort program called “Forget it,” was seen by her team as peripheral code that would never be used. But she was a brilliant engineer with a knack for system-level thinking, and she quickly proved her mettle. By 1967, she was spearheading the development of the software that guided the Apollo missions to the moon. (Her team’s code, which when printed out formed a stack of paper roughly as tall as Hamilton herself, recently got its own moment in the spotlight in a widely shared photo.)

Margaret Hamilton (Wikicommons)

Along the way, Hamilton shed light on what’s needed to build large, foolproof systems. Her approach stemmed from an obsession with ridding the Apollo code of bugs. She chased down every error to discover why it happened and what lessons it offered about system design. Eventually, Hamilton’s team came to realize that the crux of their problem was their development process: They were essentially crafting a jigsaw puzzle by carving each piece separately. They would first build the individual components, and then check for incompatibilities between them—component compatibility was literally an afterthought. This inevitably led to mistakes in the interactions between modules, such as two different modules expecting to have priority at the same time, leaving the team to hope to high heaven that they caught all the problems in time for launch.
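
As a toy illustration of that failure mode, here are two components that would each pass their own tests in isolation, because each quietly assumes it owns the highest-priority slot. The module names and miniature scheduler are invented for this sketch, not Apollo’s actual code; the point is that the conflict only surfaces when the pieces are combined.

```typescript
// Each module, developed and tested separately, assumes it holds the
// single highest-priority slot. No individual module's tests can catch
// the clash; only integration reveals it.
const HIGHEST_PRIORITY = 0;

const guidanceModule = { name: "guidance", priority: HIGHEST_PRIORITY };
const radarModule = { name: "radar", priority: HIGHEST_PRIORITY };

function schedule(modules: { name: string; priority: number }[]): void {
  const claims = modules.filter((m) => m.priority === HIGHEST_PRIORITY);
  if (claims.length > 1) {
    throw new Error(
      `Priority conflict between: ${claims.map((m) => m.name).join(", ")}`
    );
  }
  // ... dispatch modules in priority order ...
}

schedule([guidanceModule]);              // fine alone
schedule([radarModule]);                 // fine alone
schedule([guidanceModule, radarModule]); // blows up only when combined
```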

Much better, Hamilton thought, would be to start from a full system specification that kept these errors from creeping in in the first place. In the design process Hamilton envisioned, developers would begin with the most ambitious step: agreeing on a formal mathematical description of the entire system, with well-defined couplings between the pieces at every level. Once that description for, say, a rover’s control software was fully fleshed out, it would be automatically translated into code guaranteed to correctly implement all of the interfaces between steering control, power management, speed control, and so on. No more fixing incompatibilities; just prevent them from ever happening.
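
USL’s notation is far more formal than anything in a mainstream language, but the flavor of pinning down couplings before writing internals can be suggested with ordinary typed interfaces. In the hedged sketch below, the rover subsystems are invented for illustration; what matters is that any wiring of the pieces that violates the declared interfaces is rejected before the program ever runs.

```typescript
// Illustrative only: the couplings between subsystems are fixed first,
// as explicit interfaces, before any subsystem's internals are written.
interface SteeringControl {
  setHeading(degrees: number): void;
}

interface PowerManagement {
  requestWatts(subsystem: string, watts: number): boolean; // granted?
}

interface SpeedControl {
  setTargetSpeed(metersPerSecond: number): void;
}

// Any implementation must satisfy every declared coupling; a module
// that mismatches these interfaces is rejected at compile time.
function drive(
  steering: SteeringControl,
  power: PowerManagement,
  speed: SpeedControl
): void {
  if (power.requestWatts("drive", 150)) {
    steering.setHeading(90);
    speed.setTargetSpeed(0.5);
  }
}
```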

These insights came too late for the Apollo Guidance Computer. But based on her experience with Apollo, Hamilton dedicated herself to spreading her “development before the fact” approach to software engineering. Her most recent company’s products, the Universal Systems Language (USL) and the associated 001 Tool Suite, embody her engineering philosophy.

Hamilton argues that USL makes for provably correct programs, a claim that has met with much skepticism from the broader computer science community. As her critics note, all USL proves is that its programs don’t contain internal inconsistencies, a guarantee built into many easier-to-use programming languages (particularly “functional” ones, where a program is stated as a series of questions for the computer to answer rather than commands to execute). “Correct” would mean the program does what it’s supposed to, but USL has no means of even describing what’s expected of the program, never mind checking to confirm it’s doing that. If a self-driving car always floors the gas at stop signs, its software may be perfectly self-consistent, but you still won’t be happy.
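
The distinction is easy to see in miniature. The contrived sketch below type-checks perfectly, so a consistency checker has no complaints; yet it does the opposite of what anyone wants, and nothing in the language can know that.

```typescript
// Internally consistent, behaviorally wrong: every type lines up, so a
// consistency check is satisfied. But the behavior is the opposite of
// what a stop sign demands.
type ThrottlePercent = number; // 0 to 100

function throttleAtStopSign(): ThrottlePercent {
  return 100; // floors the gas: type-correct, catastrophically wrong
}

// "Correctness" would need a statement of intent to check against,
// e.g. "at a stop sign, throttle must be 0." A consistency check
// alone has no way to even express that requirement.
```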

Nevertheless, Hamilton is not alone in favoring provable correctness. Reams of computer science research have been dedicated to “formal verification,” a rigorous way of describing desired properties of a program and ensuring it has them. For example, a traffic light controller should guarantee that no cars get stalled forever, a property that can be mathematically proven for a given algorithm. My father, a software engineer, was encouraged in college to follow the mind-bending practice of first constructing a formal proof that a program was correct (despite not having written the program yet!) and only then writing the code based on that proof.
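
To give a taste of the idea, here is a toy of my own devising, not how industrial verifiers actually work (real tools reason symbolically over enormous state spaces). The state space of a two-way round-robin light is small enough that the liveness property can be established by checking every case outright, rather than by sampling a few runs.

```typescript
// A two-way round-robin traffic light. The property to verify: from
// every state, each direction gets its green within two steps, so no
// car is stalled forever.
type Light = "NS_GREEN" | "EW_GREEN";
const states: Light[] = ["NS_GREEN", "EW_GREEN"];

function next(state: Light): Light {
  return state === "NS_GREEN" ? "EW_GREEN" : "NS_GREEN";
}

// The state space is tiny, so "verification" here is full enumeration.
for (const start of states) {
  for (const wanted of states) {
    let s = start;
    let steps = 0;
    while (s !== wanted) {
      s = next(s);
      if (++steps > 2) throw new Error("Liveness violated: a direction starves");
    }
  }
}
console.log("Property holds in every reachable state");
```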

Much as he likes to talk about it, my father does not write code this way, nor do, as far as I can tell, the vast majority of software engineers. (I have never met a software engineer who codes via formal proofs, and I’ve worked at several software companies, in addition to knowing the work habits of many friends in the software industry.) Most of us just cobble together some code and run a few tests to make sure we didn’t screw up. Often, despite an ever-growing roster of languages that automatically protect against basic consistency errors, we deliberately use ones that leave the couplings between pieces of code unchecked, opening ourselves up to the same kinds of bugs that plagued Apollo. (Even companies like Google struggle to convince their engineers to spend time writing basic tests, in which you manually calculate the correct output for a few different inputs and check that the code does the right thing in those cases.)
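
A “basic test” in that sense is nothing fancy. In the sketch below the median function is invented for illustration; the test is just a few inputs paired with hand-computed expected outputs.

```typescript
import { strict as assert } from "node:assert";

// The function under test (invented for illustration).
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Hand-computed expectations for a few inputs: that's the whole test.
assert.equal(median([3, 1, 2]), 2);
assert.equal(median([4, 1, 2, 3]), 2.5);
assert.equal(median([7]), 7);
```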

This disregard for being demonstrably correct isn’t a failure of willpower or dedication. It represents a radically different philosophy for how to build software, epitomized by Facebook’s famous motto, “Move fast and break things.” These engineers don’t want to invest precious time upfront painstakingly laying out a formal specification that then locks them into a design. What’s paramount to them is not correctness, but flexibility—the ability to throw something together as quickly as possible, and then to alter the code as experimentation demands. Some popular software engineering methodologies, such as “Extreme Programming,” don’t even have a design phase; they jump straight into building the minimal usable product, then add features incrementally.

This same tension between exacting rigor and carefree experimentation echoes through other domains, as well. In the field of artificial intelligence, early research was divided among “neats,” who wanted elegant, logical, provably correct algorithms for intelligent behavior, and “scruffies,” who would throw anything they could at a problem and see what stuck. More broadly, the scientific community struggles to balance blue-sky exploratory work against carefully planned research with pre-assessed projected outcomes. And society as a whole is debating whether innovations like self-driving cars should go straight to market and be regulated if proven harmful, or whether they should be vetted first to make sure they’re safe.

So which approach is best? The answer depends on context. For a web app startup desperately hacking out their first beta, the top priority is being fast and nimble. But that attitude has its limits: “If cars were like software,” an old joke goes, “they’d crash twice a day for no reason, and when you called for service, they’d tell you to reinstall the engine.” When you’re programming for a nuclear reactor or a rocket, being “fast and nimble” simply won’t fly: Something as simple as an unexpectedly large number can cause a space mission to blow up. Stronger guarantees of the program’s safety are a must.
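
That failure mode is easy to reproduce in miniature. The sketch below mimics the general mechanism rather than any particular mission’s code: a value that fits comfortably in a floating-point variable silently wraps around when forced into a 16-bit slot.

```typescript
// Miniature "unexpectedly large number" failure: a reading that is
// perfectly fine as a float exceeds what a 16-bit integer can hold,
// and wraps around silently instead of raising an error.
const sensorReading = 40000.0;    // fine as a floating-point value
const packed = new Int16Array(1); // but the telemetry slot is 16 bits
packed[0] = sensorReading;        // no error raised...
console.log(packed[0]);           // ...prints -25536: garbage in flight
```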

In practice, most engineers draw on elements of both approaches. Formalized mechanisms that make programming less error-prone, like labeling a program malformed if text could show up where a number is expected, are gradually propagating even into more flexible programming languages. (The shift is largely thanks to mechanisms for automatically inferring which type of data is being used where, which relieve programmers of carefully specifying what types their code expects.) Recognizing that parts of their tech have matured beyond haphazard experimentation, Facebook changed its motto a few years back to the less inspiring “Move fast with stable infrastructure.”

And although most coders don’t formalize the properties of their system before coding, it’s not uncommon to borrow from that mindset. Many take a few minutes to clarify to themselves what constraints the program should never violate, and make sure their code adheres to those constraints. For instance, a word processor might want to keep multiple users from opening a shared file at the same time. In that case, the programmer might informally convince herself that even if two users request the file at the same instant, her algorithm will grant a lock to only one of them.
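
That informal invariant can at least be written down directly. In this sketch the names are invented, and a real word processor would lean on operating-system file locks; but the structure makes a second simultaneous grant impossible by construction.

```typescript
// Sketch of the invariant "at most one lock holder at a time."
// (JavaScript's single-threaded execution makes tryAcquire atomic here;
// a real multi-process lock would need OS support.)
class FileLock {
  private holder: string | null = null;

  tryAcquire(user: string): boolean {
    if (this.holder !== null) return false; // someone already holds it
    this.holder = user;
    return true;
  }

  release(user: string): void {
    if (this.holder === user) this.holder = null;
  }
}

const lock = new FileLock();
const results = ["alice", "bob"].map((u) => lock.tryAcquire(u));
console.log(results); // [ true, false ]: exactly one request granted
```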

Meanwhile, even the formal verification folks don’t verify every piece of code; that would be an unbearable burden. Instead, they structure the program with a small core that controls the rest of the system. Once that core is proven correct, they can rest easy knowing nothing too terrible can happen. They also often build ways in which programmers can circumvent the formal specification system to include less hygienic code. For example, the Federal Aviation Administration’s new airborne collision avoidance system depends on some routines that recommend adjustments for a plane’s pitch. Those routines haven’t been formally verified line by line. Still, a team of researchers was able to confirm that as long as they only suggest, say, a steep climb under particular ranges of conditions—an assumption that apparently holds true in practice—they won’t cause a collision.
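
The shape of that argument can be sketched in a few lines. All names and numbers below are invented, not the FAA system’s actual logic: the unverified advisory routine may be as heuristic as it likes, and the single stated assumption the safety argument depends on is enforced at the boundary.

```typescript
// "Verified core, checked boundary" sketch. Suppose the safety proof
// covers climb-rate advisories only within +/-3000 feet per minute.
const MAX_SAFE_CLIMB_FPM = 3000;

// Unverified advisory logic: heuristic, complex, not proven line by line.
function unverifiedAdvisory(intruderAltitudeDeltaFt: number): number {
  return intruderAltitudeDeltaFt > 0 ? -2500 : 2500; // dive or climb
}

// The boundary check: the one place the proof's assumption is enforced.
function advise(intruderAltitudeDeltaFt: number): number {
  const climbRate = unverifiedAdvisory(intruderAltitudeDeltaFt);
  if (Math.abs(climbRate) > MAX_SAFE_CLIMB_FPM) {
    throw new Error("Advisory outside the envelope the proof covers");
  }
  return climbRate; // within the range proven safe
}
```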

There may always be some degree of friction between the Hamiltonians and the Zuckerbergians. But what seems to be winning out is a pragmatic synthesis of the two. And whatever the pros and cons of any particular development style, there’s one fact we can all agree on: Margaret Hamilton led one heck of a software project, and her Medal of Freedom is richly deserved.

Jesse Dunietz, a Ph.D. student in computer science at Carnegie Mellon University, has written for Motherboard and Scientific American Guest Blogs, among others. Follow him on Twitter @jdunietz.
