Most of us secure our digital lives with passwords — hopefully different, strong passwords for each service we use. But the inherent weaknesses of passwords are becoming more apparent: this week, Evernote reset 50 million passwords after a breach, and that’s just the latest in a series of high-profile password-related gaffes from Twitter, Sony, Dropbox, Facebook, and others.

Isn’t security a problem that biometrics can solve? Biometric technologies allow access based on unique and immutable characteristics about ourselves — ideally something that can’t be stolen or faked. Although it’s often the stuff of spy movies, big businesses, governments, and even parts of academia have been using biometric authentication like fingerprints, voice recognition, and even facial scans to authenticate users for years.

So why aren’t the rest of us using these tools to guard our devices, apps, and data? It can’t be that hard … can it?

The myth of the fingerprints

For centuries (maybe even millennia), fingerprints have been used to verify identity, and fingerprint scanners have been options in mainstream business computers for about a decade. Typically, users swipe a finger over a narrow one-dimensional scanner, and the system compares the result with enrolled data from an authorized user. The process of scanning and matching is complex, but as the technology has evolved, accuracy has improved and costs have come down.
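As a rough illustration of that matching step, here is a toy sketch in Python: it treats each print as a set of hypothetical minutiae coordinates and accepts a scan when enough template points have a nearby counterpart. Real matchers also compare ridge angles and minutiae types and must handle rotation, so this is a sketch of the idea, not a working matcher.

```python
from math import dist

def match_score(candidate, template, tolerance=8.0):
    """Fraction of template minutiae with a candidate point nearby.

    Each minutia is a made-up (x, y) coordinate; real systems store
    richer data and align the two prints before comparing.
    """
    matched = sum(
        1 for t in template
        if any(dist(t, c) <= tolerance for c in candidate)
    )
    return matched / len(template)

# Toy enrolled template and a slightly shifted rescan of the same finger
template = [(10, 20), (40, 35), (70, 15), (55, 60)]
rescan   = [(12, 21), (39, 37), (71, 14), (56, 58)]

# Accept when at least 75 percent of the template is accounted for
accepted = match_score(rescan, template) >= 0.75
```

The tolerance and threshold are invented for the example; tuning values like these is exactly where the accuracy-versus-lockout tradeoffs in the next paragraph come from.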

But fingerprint readers have downsides. The most obvious are injuries like burns and cuts — imagine being locked out of your phone or computer for a week because a potholder slipped. Stains, ink, sunscreen, moisture, dirt, oils, and even lotion can interfere with fingerprint readers, and some fingerprints just can’t be scanned easily. I’m personally a good example — parts of my fingertips are worn smooth (or blistered) from playing instruments, but lots of people who work with their hands often have thin ridges (or none at all) on their fingers. Trained law-enforcement personnel could tease prints off me if I get hauled down to county, but my luck with fingerprint readers in notebooks is abysmal, and I can’t imagine using one on a phone — outside in the rain, since I live in Seattle.

So far, there’s been only one mainstream smartphone with a fingerprint reader — the Motorola Atrix, which mostly went nowhere along with Motorola’s Webtop technology. But things might be changing: Apple’s acquisition of AuthenTec last year has fueled speculation that future iPhones and iOS devices will offer fingerprint recognition, and long-time fingerprint tech firm Ultra-Scan says it has an ultrasonic reader 100 times more accurate than anything on the market.

“I believe the shift to using fingerprints in consumer electronics is set to happen very fast,” wrote Vance Bjorn, CTO of access management firm Digital Persona. “Passwords are widely regarded as the weakest link of security, and they are becoming even less convenient as consumers need to type them in on smartphones and via touch screens. The use of a fingerprint for authentication solves both problems — security and convenience.”

Your voice is your password

Voice authentication seems well-suited to smartphones: They’re already designed to handle the human voice, and include technologies like noise filtering and signal processing. Just as with fingerprint readers, voice authentication technology has existed for years in high-security facilities, but hasn’t broken into mainstream consumer electronics.

Approaches to speaker identification vary, but they all have to handle variations in speech, background noise, and differences in temperature and air pressure. Some systems compare speakers with pre-recorded phrases from an authorized user, like a spoken PIN. More sophisticated systems perform complex calculations to work out the acoustic characteristics of a speaker’s vocal tract, and even compare speech patterns and style to determine if someone is (literally) who they say they are.
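The gist of the more sophisticated approach can be sketched with a toy comparison. Real systems extract acoustic features (such as cepstral coefficients) from speech and then score how close a new utterance is to an enrolled voiceprint; the feature values below are made up, and only the scoring idea is real.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical enrolled voiceprint and two test utterances
enrolled  = [0.8, 0.1, 0.3, 0.5]
same_user = [0.7, 0.2, 0.3, 0.4]
impostor  = [0.1, 0.9, 0.6, 0.1]

# The genuine speaker should score closer to the enrolled print
genuine_wins = (cosine_similarity(enrolled, same_user) >
                cosine_similarity(enrolled, impostor))
```

In practice the vectors are long, the scoring models are statistical rather than a single dot product, and the decision threshold is tuned to balance false accepts against false rejects.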

The first approach is brittle: it can lock out users who have a cold or cough, and it’s vulnerable to recordings. Reformed hacker Kevin Mitnick has spoken of tricking a financial firm’s CEO into saying the numbers zero through nine, recording the digits, and using them to bypass the firm’s phone-based voice authentication.

More sophisticated approaches can be computationally intensive. These days, the heavy lifting is often done at the receiving end — which means your biometric data gets transferred, often with no protection at all.

“Existing voice technology uses the equivalent of plaintext passwords,” said Manas Pathak, a researcher in privacy-preserving voice authentication who recently completed his Ph.D. at Carnegie Mellon University. “You’re giving part of your identity to the system.”

A voiceprint stored on a remote system can be stolen just like a password file. Moreover, voice data alone can reveal our gender and nationality — even our age and emotional state.

Recent developments aim to work around these problems. A new open standard being developed by the FIDO Alliance would support multiple authentication methods, but biometric data would never leave users’ devices. Pathak and other researchers have developed a sophisticated system that creates a stable representation of a user’s voice (talking about anything), divides it up into similar samples, then cryptographically protects them before performing any comparison. The remote systems never see users’ biometric data, and it can’t be recreated.

“We have about 95 percent accuracy,” said Pathak. “We would need to install it on more phones with wider deployment and testing. That’s outside the realm of academic research, but it’s ready for commercialization.”

Still, it’s hard to discount noise and environmental factors. Traffic noise, conversation, televisions, music, and other sounds can all reduce accuracy. There are times voice recognition is simply not practical: Imagine someone on a bus or subway shouting at their phone to unlock it.

Face the face

Face recognition and iris scans are other common forms of biometric authentication — and could make sense for consumer electronics, since almost all our phones and tablets have high-resolution cameras.

Facial recognition systems work by noting the size, shape, and distances between landmarks on a face, like the eyes, jaw, nose, and cheekbones. Some systems also consider elements like wrinkles and moles, while some high-end gear constructs 3D models — those work even with profile views.
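The landmark-geometry idea is simple enough to sketch: reduce a face to pairwise distances between detected landmarks, normalized so the signature survives changes in image scale. The landmark positions below are hypothetical, and real systems use dozens of points plus texture and (sometimes) 3D shape.

```python
from math import dist

# Hypothetical 2-D landmark positions (in pixels) from a face detector
landmarks = {
    "left_eye":  (120, 95),
    "right_eye": (180, 95),
    "nose_tip":  (150, 140),
    "chin":      (150, 200),
}

def face_signature(pts):
    """Pairwise landmark distances, normalized by interocular distance.

    Normalizing makes the signature independent of how far the face is
    from the camera, so the same face can match at different scales.
    """
    scale = dist(pts["left_eye"], pts["right_eye"])
    names = sorted(pts)
    return [dist(pts[a], pts[b]) / scale
            for i, a in enumerate(names) for b in names[i + 1:]]
```

Comparing two faces then means comparing two such signatures, which is exactly where pose, expression, and lighting changes cause the trouble described below.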

Most of us have seen face recognition technology on Facebook and in applications like Apple’s iPhoto — and the results are pretty uneven. Commercial face recognition systems are more robust, but they struggle with the same things that can make Facebook’s efforts laughable: crappy photos. Bad lighting, glasses, smiles, goofy expressions, hats, and even haircuts can cause problems. Even the best facial recognition systems struggle with angled images, and people’s faces can change radically with age, weight, medical conditions, and injury.

“A few years ago we had a high-end facial recognition system failing a senior engineer almost every time,” said a security coordinator for a Boston-area firm who didn’t want to be identified. “Why? He looks like a mad scientist, complete with thick glasses, crazy hair, and full beard. But I failed the same system myself after eye surgery when I wore a patch for a couple weeks.”

Iris recognition applies pattern matching technologies to the texture (not color) of a user’s iris, usually with a little help from infrared light.

Iris patterns are probably as unique as fingerprints — and they’re generally much more stable. Moreover, matching iris patterns doesn’t require tons of processing power and (in good conditions) has very low false-match rates.
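That low computational cost comes from how Daugman-style systems work: the iris texture is reduced to a binary "iris code," and two codes are compared by fractional Hamming distance. The 12-bit codes below stand in for real codes of roughly 2,048 bits, and the 0.32 acceptance threshold is the commonly cited ballpark.

```python
def fractional_hamming(code_a, code_b):
    """Fraction of disagreeing bits between two iris codes.

    Two scans of the same eye disagree on few bits; two different
    eyes tend to disagree on about half of them.
    """
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

enrolled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
rescan   = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1]  # one noisy bit

match = fractional_hamming(enrolled, rescan) < 0.32
```

Counting bit disagreements is cheap, which is why the comparison step needs so little processing power even at realistic code sizes.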

For years, iris recognition technology was largely locked up by key patents held by Iridian, but those patents expired a few years ago and the field has seen significant new development. However, much of it has been aimed at government-funded identification programs, rather than consumer applications.

Iris recognition also has pitfalls. Users would likely have to hold a device close to their face, with decent lighting and little to no motion. Most eyewear would have to be removed, and some drugs and medications can deform an iris pattern by dilating or constricting the pupils — try passing an iris scan right after an eye exam.

Just as voice authentication can be vulnerable to recordings, iris scanners can be fooled by quality photographs, or even contact lenses printed with fake irises. As a result, right now the technology is mostly used in human-supervised situations — like immigration and passport control — rather than automated systems.

Securing ourselves

There’s no doubt passwords are an increasingly feeble way to secure our digital lives, and biometric authentication technologies can lock things down using something we are rather than just something we know. However, none of these technologies are a magic bullet for digital security: They all fail for some people some of the time, and they all carry risks and vulnerabilities. Moreover, if biometric information is ever compromised, there may be no going back. After all, you can change your password, but good luck changing your thumbs.

Nonetheless, biometric technologies seem poised to move into consumer electronics soon, most likely in multi-factor systems offering a layer of security in addition to passwords.

“Passwords will never go away — ‘what you know’ will remain a critical tool for security,” noted Digital Persona’s Vance Bjorn. “But I do see the day soon where that tool is no longer viewed as sufficient for most services consumers or employees access.”
