Last year, the United States Customs and Border Protection rolled out a pilot program that uses biometric tools like face and iris scanners to snag “imposters” traveling on fake passports at airports and, in the process, reduce wait times at security checkpoints. But another kind of biometrics, based on the ear, might identify individuals even more conclusively and speed travelers on their way more swiftly.
Scientists have taken note that the curves of the cartilage, the protrusions of the auricle, and the hollow of the concha cava are all, like fingerprints, features distinctive to each person. The way sound bounces within their folds allows the ears to yield a highly accurate identification of who we are, Steve Beeby, a professor of Electronic Systems and Devices at the University of Southampton, told the Telegraph back in 2009. The sounds are “different from person to person,” he said, “which gives us a really nice biometric tool” for a wide range of applications, from computer logins to bank accounts.
After some years of development and refinement, biometric tech able to achieve this was finally revealed by NEC Corporation (known until 1983 as the Nippon Electric Company, Limited), a Japanese tech giant. In March, the company announced that, by 2018, it would commercialize its biometric earbuds, which may present a few advantages over other methods. For one, it’s a passive identification system: ear biometrics don’t require as much from the subject, in terms of correctly positioning an eye or finger, as retina and fingerprint recognition do. Instead, the device measures the reflection of sound waves off the tympanic membrane, the thin, cone-shaped tissue that, in humans, separates the ear canal from the middle ear, forming the eardrum.
“NEC’s method seems to listen to the reflected sound from a sound impulse. The reflected sound depends upon the internal shape of the ear and therefore, they claim, can be used to distinguish between individuals,” Beeby says. “This is subtly different from our research. We used the same hardware to input sound in the ear but were interested in the sound that is automatically made by the ear known as an otoacoustic emission”—sound, inaudible to the ear itself, emitted by the ear’s vibrating hair cells. These emissions differ faintly from person to person, says Beeby. They’re “unique to individuals but subject to change over time.”
To capture sound vibrations, NEC’s earphone is equipped with a built-in microphone that generates and collects sound waves as they echo within the ear cavity. The speaker produces a “few hundred milliseconds of acoustic signals,” ranging from 0 hertz to 22 kilohertz, roughly spanning the range of human hearing, and the earphone receives the signals transmitted within the ear cavity through its microphone. Through a method called “synchronous addition,” say researchers at the company’s lab, the device mitigates the interference of noisy environments and the inherent noise of each signal it generates. By calculating the “average of the waveforms of the multiple signals received,” they say, the device cancels out turbulence produced during the sound’s transmission—things like heartbeats, muscle and joint movements, breathing, vocal sounds—and analyzes the sound’s reverberation in the ear “within approximately one second.”
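The “synchronous addition” the researchers describe is, at heart, coherent averaging: play the same probe signal repeatedly, time-align the recordings, and average them so that uncorrelated noise (breathing, heartbeats, ambient sound) partially cancels while the ear’s repeatable echo pattern is reinforced. Here is a minimal sketch of that idea; the probe waveform, repeat count, and noise model are illustrative assumptions, not NEC’s actual parameters:

```python
import numpy as np

def synchronous_average(recordings):
    """Average time-aligned recordings of the same probe signal.

    Uncorrelated noise shrinks roughly as 1/sqrt(n_repeats) in the mean,
    while the repeated echo waveform is preserved.
    """
    stacked = np.vstack(recordings)  # shape: (n_repeats, n_samples)
    return stacked.mean(axis=0)

# Toy demonstration: a fixed "echo" waveform buried in random noise.
rng = np.random.default_rng(0)
echo = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
noisy = [echo + rng.normal(0, 1.0, 1000) for _ in range(200)]

averaged = synchronous_average(noisy)
residual = np.abs(averaged - echo).mean()  # error after averaging
single = np.abs(noisy[0] - echo).mean()    # error of one raw recording
print(residual < single)
```

The averaged waveform tracks the underlying echo far more closely than any single noisy recording, which is why the technique works even in a noisy airport or office.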
NEC didn’t consider Beeby’s research on otoacoustics while developing the new technology because such emissions are “extremely faint” and “quite difficult to capture by ordinary audio devices,” says Takafumi Koshinaka, a researcher leading NEC’s speech and audio biometrics Research & Development. “Rather, our research originated from a virtual reality technology that makes binaural audio signals”—sound recorded with two microphones, to replicate human hearing—“more realistic, as if sound objects in the signals are coming from various directions in a 3D space.”
Biometric security based on ear- and sound-wave recognition could address concerns about the security and reliability of fingerprints: We leave our fingerprints everywhere—at supermarkets, on the subway, at the gym—and virtually anyone could collect them and potentially use them to unlock whatever those prints protect, from iPhones to office buildings. Fingerprints are also less reliable because they deteriorate with time, and people can alter or purposefully damage their fingers’ line patterns.
Furthermore, ear-scanning devices’ small size, portability, and fast processing make them possibly one of the most efficient and effective recognition techniques on the market. Iris scanners and facial recognition software, by contrast, need to be run by much bulkier authentication machines. What’s more, the algorithms that run traditional biometric security software, based on iris and facial recognition, sometimes fail to identify certain facial traits, particularly among people of African and Asian descent, as well as people from the L.G.B.T. community.
This failure has been partly attributed to the fact that facial features, because of facial hair and hair styling, cosmetics, and facial expression, are more prone than other body parts, like the ear, to change over time. Another factor could be “the inherent bias of facial recognition” software. Computers have been trained, for example, to read and recognize a narrow range of variables—certain types of skin tone, a limited set of eye and nose shapes—as compared to the broader, more comprehensive variety of features that actually exist.
Biometric authentication based on noise and sound waves could resolve this issue. Shigeki Yamagata, general manager of the Information and Media Processing Laboratories at NEC, said in a company statement that the new method enables “a natural way of conducting continuous authentication, even during movement and while performing work, simply by wearing an earphone” as opposed to scanning part of the body.
Sima Taheri, who has a doctorate in computer science from the University of Maryland and is now working at a facial intelligence company that employs 3D facial scanning, concurs. Successful authentication on personal devices, she says, cannot always be guaranteed over time: Once a fingerprint unlocks an iPhone, for instance, the iPhone has no way to verify that its user hasn’t changed. A continuous identification performed through sound signals and recognition could address that, she says.
Lucia De Stefani, a multimedia reporter who lives between New York and Italy, has written for TIME LightBox, Vogue.it, Hyperallergic, and LLNYC. Follow her on Twitter and Instagram.
The lead photograph is courtesy of Naika Lieva via Flickr.