This is part one of a three-part series about the movie industry’s switch to digital cameras and what is lost, and gained, in the process. Part two runs tomorrow; part three runs on Friday.
Cinema is a blend of art and technology, working together to capture light, one frame at a time, to create the illusion of motion. Sometimes the captured light of cinema amounts to an aesthetic revolution, as with the deep-focus cinematography of Gregg Toland in Orson Welles’ landmark Citizen Kane, the spectacular wide-screen landscapes shot by Freddie Young in Lawrence of Arabia, or the super-slo-mo “bullet time” cinematography of The Matrix. Sometimes it’s the apex of simplicity, a technique so transparent it appears almost artless, as in the black-and-white compositions of Robert Bresson or the free-form streetscapes of Richard Linklater’s Slacker.
For 100 years and more, while the technique and style of cinema evolved and varied immensely, its underlying scientific and technological basis remained virtually unchanged: the seductive grain of the film image, the whir of the projector, the organic flow of light into the camera and onto the screen. But over the course of the last decade, filmmaking has undergone a technical revolution. Most Hollywood movies, and, for that matter, most movies made anywhere in the world, are no longer shot on photographic film, but made with digital cameras and recorded as bytes and pixels, ones and zeroes, through a process that appears transparent but that some filmmakers find distressingly mysterious.
Indeed, most commercial mainstream cinema is now a digital process from beginning to end, from the set to the editing suite to the projection booth at your neighborhood multiplex. What you see on the screen is no longer a light bulb shining through a strip of 35mm film, but the output of a bundle of digital files called a Digital Cinema Package, or DCP, delivered to the theater on a hard drive or securely downloaded from the Internet. If film as a medium is not quite dead, it’s definitely on the endangered list.
It’s not easy to find a historical parallel for this. It’s almost as if the oil-based paints that emerged in the Middle Ages, and have remained the dominant, high-prestige medium in the visual arts ever since, had suddenly been replaced with some entirely different medium. Great artists would still produce great work, of course. But could it still be considered “painting”? Many filmmakers and cinematographers, the hands-on visionaries of light, have embraced the ease, crispness, and immediacy of digital media as a great technological and artistic leap forward. Others believe that a craft that had reached a highly attuned level of precision over the last few decades is being jettisoned in the name of needless novelty.
From the Lumière brothers to the Coen brothers, the technical process of filmmaking had remained essentially the same: Rolls of photographic film—originally this was a base of celluloid, one of the first plastics, coated with an emulsion of silver halide grains suspended in gelatin—were run mechanically through a camera at a set speed, exposing frames to focused light one at a time. Hours or days later, the exposed film would be developed in a lab, yielding (generally speaking) a negative from which a positive print could be struck. Only then could the filmmakers see the results.
Given the expense of mounting professional film productions, and the difficulty or impossibility of correcting mistakes once the film had been shot, the cinematographer assumed an almost priest-like significance. He or she—and it was almost always a he—relied on quasi-objective standards like light-meter readings, as well as on the subjective judgment that comes with expertise, to make all sorts of technical and artistic decisions: focal length, depth of field, and the choice of lenses and filters.
It has become standard practice to refer to a motion picture’s director as its sole author, but everyone in the business understands the importance of the cinematographer, who is much more than a hired hand. Without Toland’s revolutionary in-camera effects—low-angle shots and “pan-focus” technique, which allowed objects as close as 15 inches from the camera and as far away as 200 feet to remain in focus in the same shot—Citizen Kane would not now be understood as the high-water mark of American film. Ingmar Bergman’s status as the great cinematic poet of 1960s angst had everything to do with the extraordinary close-ups of the human face shot by Sven Nykvist.
When digital video (DV) was introduced as a commercial product in 1986, any prospect that it would one day replace film as the principal cinematic medium would have sounded like science fiction. DV gradually became the standard format for television news, and then, as the equipment became less expensive, it displaced the venerable analog formats in consumer camcorders. Inevitably, DV began to leak into the art world and the lower fringes of cinema, and filmmakers began to understand its advantages: There was no film to develop, and the resulting footage—that word now rendered entirely metaphorical—could be viewed immediately. Takes could continue as long as they needed to, and editing software soon followed, most notably the immensely successful Final Cut Pro, which allowed anyone to edit anything from a two-minute music video to a two-hour feature film on an ordinary personal computer.
There was only one problem: It looked like video…
This is part one of a three-part series. Part two, running tomorrow, explains why digital images look different from filmed images and how that changes the nature of digital movies. Part three, running on Friday, looks at how technology and tradition combine to make modern films.
Andrew O’Hehir is a senior writer at Salon.com.