Border Patrol expands fingerprinting of migrant children

HOUSTON (AP) — U.S. border authorities say they’ve started to increase the biometric data they take from children 13 years old and younger, including fingerprints, despite privacy concerns and government policy intended to restrict what can be collected from migrant youths.

A Border Patrol official said this week that the agency had begun a pilot program to collect the biometrics of children with the permission of the adults accompanying them, though he did not specify where along the border it had been implemented.

The Border Patrol also has a “rapid DNA pilot program” in the works, said Anthony Porvaznik, the chief patrol agent in Yuma, Arizona, in a video interview published by the Epoch Times newspaper.

Spokesmen for the Border Patrol and the Department of Homeland Security did not return several messages from The Associated Press seeking comment on both programs.

The Border Patrol says that in the last year, it’s stopped roughly 3,100 adults and children fraudulently posing as families so they can be released into the U.S. quickly rather than face detention or rapid deportation.

The Department of Homeland Security has also warned of “child recycling,” cases where they say children allowed into the U.S. were smuggled back into Central America to be paired up again with other adults in fake families — something they say is impossible to catch without fingerprints or other biometric data.

“Those are kids that are being rented, for lack of a better word,” Porvaznik said.

But the Border Patrol has not publicly identified anyone arrested in a “child recycling” scheme or released data on how many such schemes have been uncovered. Advocates say they’re worried that in the name of stopping fraud, agents might take personal information from children that could be used against them later.

“Of course child trafficking exists,” said Karla Vargas, an attorney with the Texas Civil Rights Project. But she warned against implementing “a catch-all” policy that could reduce the rights of people who are legally seeking asylum.

Read More

School’s Plan for Facial Recognition System Raises Concerns

A New York school district has finished installing a facial recognition system intended to spot potentially dangerous intruders, but state officials concerned about privacy say they want to know more before the technology is put into use.

Education Department spokeswoman Emily DeSantis said Monday that department employees plan to meet with Lockport City School officials about the system being tested this week. In the meantime, she said, the district has said it will not use facial recognition software while it checks other components of the system.

The rapidly developing technology has made its way into airports, motor vehicle departments, stores and stadiums, but is so far rare in public schools.

Lockport is preparing to bring its system online as cities elsewhere are considering reining in the technology’s use. San Francisco in May became the first U.S. city to ban its use by police and other city departments and Oakland is among others considering similar legislation.

A bill by Democratic Assembly Member Monica Wallace would create a one-year moratorium on the technology’s use in New York schools to allow lawmakers time to review it and draft regulations. The legislation is pending.

Lockport Superintendent Michelle Bradley, on the district’s website, said the district’s initial implementation of the system this week will include adjusting cameras mounted throughout the buildings and training staff members who will monitor them from a room in the high school. The system is expected to be fully online on Sept. 1.

Read More

New Forensic Technology Examines Fingerprints Once Considered Too Old

CARLISLE, Pa. (WHTM) - New technology allows investigators to examine fingerprints that were once considered too old or compromised to analyze.

A vacuum metal deposition instrument is now in the hands of Cumberland County to better collect fingerprints and DNA. This equipment is only the second of its kind in Pennsylvania and one of 14 in the entire country.

“Gold will deposit on the substrate and then I will run zinc, and zinc doesn’t attach anywhere else except to a different metal and then it will attach to the gold that I’ve placed,” said Carol McCandless, the lead forensic investigator for Cumberland County.

The vacuum sucks all the air and water out of a chamber, then the machine coats the evidence with a very thin metal film under a high vacuum, all done in less than five minutes.

“The metallic substances don’t land on the top of the ridges of anything. It goes in between so that the top of the ridge is touched and that’s where the DNA is,” said Skip Ebert, Cumberland County District Attorney.

This machine locates fingerprints on items from which prints were previously difficult or impossible to extract, such as paper, waxy substances, and clothing.

“What we did in this machine of the actual victim’s face being suffocated and on the opposite side of the pillowcase, the actual hands that were pushing it down on his face. You cannot beat that kind of evidence anywhere,” said Ebert.

And the technology helps with more than just current and future cases.

“I have received several cold cases, one from 1983 and one from 1995,” said McCandless.

The Cumberland County Forensic Lab is expected to be fully accredited in the fall, thanks largely to this new technology, made possible through a grant from the Pennsylvania Commission on Crime and Delinquency.

View Source

US government to give DNA tests at border to check for fraud

WASHINGTON (AP) — U.S. Immigration and Customs Enforcement will begin voluntary DNA testing in cases where officials suspect that adults are fraudulently claiming to be parents of children as they cross the U.S.-Mexico border.

The decision comes as Homeland Security officials are increasingly concerned about instances of child trafficking as a growing number of Central American families cross the border, straining resources to the breaking point. Border authorities also recently started to increase the biometric data they take from children 13 and younger, including fingerprints, despite privacy concerns and government policy that restricts what can be collected.

Officials with the Department of Homeland Security wouldn’t say where the DNA pilot program would begin, but they did say it would start as early as next week and would be very limited.

The DNA testing will happen once officials with U.S. Customs and Border Protection, which manages border enforcement, refer possible instances of fraud to ICE, which usually manages interior enforcement. But teams from ICE’s Homeland Security Investigations were recently deployed to the southern border to help investigate human smuggling efforts.

The rapid DNA test will take about two hours; the sample will be obtained using a cheek swab from both the adult and the child. The parent is to swab the child, officials said.

The test results will be destroyed and won’t be used as evidence in any criminal case, they said.

Generally, government officials define a family unit as a parent with a child. Fraud occurs when a person claiming to be a parent is actually another type of relative, or has no relationship to the child at all. ICE officials said they have identified 101 possible instances of fraudulent families since April 18 and determined one-third were fraudulent.

Since the beginning of the budget year, they say they have uncovered more than 1,000 cases and referred 45 cases for prosecution. The fraud could also include the use of false birth certificates or documents, and adults accused of fraud aren’t necessarily prosecuted for it; some are prosecuted for illegal entry or other crimes.

Homeland Security officials have also warned of “child recycling,” cases where they say children allowed into the U.S. were smuggled back into Central America to be paired up again with other adults in fake families — something they say is impossible to catch without fingerprints or other biometric data.

Read More

Half a Face Enough for Recognition Technology

Facial recognition technology works even when only half a face is visible, researchers from the University of Bradford have found.

Using artificial intelligence techniques, the team achieved 100 percent recognition rates for both three-quarter and half faces. The study, published in Future Generation Computer Systems, is the first to use machine learning to test the recognition rates for different parts of the face.

Lead researcher, Professor Hassan Ugail from the University of Bradford said: “The ability humans have to recognise faces is amazing, but research has shown it starts to falter when we can only see parts of a face. Computers can already perform better than humans in recognising one face from a large number, so we wanted to see if they would be better at partial facial recognition as well.”

The team used a machine learning technique known as a “convolutional neural network,” drawing on a feature extraction model called VGG—one of the most popular and widely used for facial recognition.

They worked with a dataset containing multiple photos—2,800 in total—of 200 students and staff from FEI University in Brazil, with equal numbers of men and women.

For the first experiment, the team trained the model using only full facial images. They then ran an experiment to see how well the computer was able to recognize faces, even when shown only part of them. The computer recognized full faces 100 percent of the time, but the team also had 100 percent success with three-quarter faces and with the top or right half of the face. However, the bottom half of the face was only correctly recognized 60 percent of the time, and the eyes and nose on their own just 40 percent.

They then ran the experiment again, after training the model using partial facial images as well. This time, the scores significantly improved for the bottom half of the face, for eyes and nose on their own and even for faces with no eyes and nose visible, achieving around 90 percent correct identification.

Individual facial parts, such as the nose, cheek, forehead or mouth had low recognition rates in both experiments.
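
The recognition step in experiments like these can be sketched compactly: a network such as VGG maps each face crop, full or partial, to a feature vector, and a probe is assigned to the gallery identity with the most similar vector. The toy three-dimensional embeddings and names below are illustrative stand-ins; real VGG features run to thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery):
    """Assign the probe embedding to the most similar gallery identity."""
    return max(gallery, key=lambda name: cosine_similarity(probe, gallery[name]))

# Toy embeddings standing in for VGG-style feature vectors.
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.2, 0.8, 0.5],
}
probe_half_face = [0.85, 0.15, 0.25]  # embedding of a cropped half face
print(identify(probe_half_face, gallery))  # prints: alice
```

Training on partial crops, as in the second experiment, simply adds more embeddings per identity to the gallery; the matching rule stays the same.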

Read More

Your face is your passport – Facial Recognition

Australia is a bloody long way from the rest of the world. Fly from Los Angeles to Sydney and you’ll be in the air for 13 hours. Tack on five more if you’re starting in New York. And if you’re coming from London, your feet won’t touch the ground for about a day.

The point being, by the time you land in Australia, you’ll be sick of traveling. You’ll want to get out of the airport and to the country’s excellent beaches as quickly as possible.

That’s why Australia’s Department of Home Affairs is at the forefront of smart border control technology. In 2007, the border agency introduced SmartGates, which read your passport, scan your face and verify who you are at the country’s eight major international airports. Built by Portugal’s Vision-Box, the gates get you out of the airport and into Australia with minimum fuss.

Australia wants to make that process even faster.

During May and June 2017, the country tested the world’s first “contactless” immigration technology at Canberra International Airport. The passport-free facial recognition system confirms a traveller’s identity by matching his or her face against stored data. A second trial is set to start in Canberra soon.

Biometrics aren’t just being used at border control. Sydney Airport has announced it’s teaming up with Qantas, Australia’s largest airline, to use facial recognition to simplify the departure process.

Under a new trial, passengers on select Qantas international flights can have their face and passport scanned at a kiosk when they check in. From then on, they won’t need to present their passport to Qantas staff — they’ll be able to simply scan their face at a kiosk when they drop off luggage, enter the lounge and board their flight at the gate. Travellers will still need to go through regular airport security and official immigration processing, but all of their dealings with Qantas can be handled with facial recognition.

Read More

Heartbeats may be the keys of the future

Biometric identifiers, in one form or another, have been a part of the security industry for some time. While most biometric access control solutions use a fingerprint or an iris scan to identify an individual, Toronto-based Bionym is taking a unique approach to the market with a newly launched solution called the Nymi. Unlike other biometric devices that make the user submit to a physical read of their finger or eye, the Nymi is a wearable authentication device that uses a person’s heartbeat to verify their identity.

According to Karl Martin, co-founder and CEO of Bionym, the idea of using someone’s heartbeat as a way to uniquely identify them goes back nearly 40 years. Over the past 10 years, however, he said that research groups around the world have been working to develop automated systems that could use electrocardiograms (ECGs) as a biometric. Researchers at the University of Toronto, including Bionym co-founder and CTO Foteini Agrafioti, recently made a breakthrough by finding an automated way of extracting features that relate to the shape of a heart wave that are unique to each person, explained Martin.

“It was a very robust method that could work in the real world. A lot of the other research in the area, they used methods that involved finding very specific points on the wave and looking at relative measures between those points. It’s very unreliable,” said Martin. “The method at the University of Toronto looked at the overall shape and was not as sensitive to things like noise, which you see in real life. By looking at the overall shape and unique algorithms to extract those features, it was found that you could have a relatively reliable way to recognize people using a real world ECG signal.”

Martin, who along with Agrafioti worked on biometric, security and cryptography technologies as doctoral students at the University of Toronto, said they founded Bionym as a way to commercialize their work.

“We decided there was an opportunity to make a more complete solution with our technology,” he said. “We looked at what was happening with wearable technology and we realized that’s what we had with biometric recognition using the heart. It married very well with wearable technology and we could essentially create this new kind of product that was an authenticator that you wear rather than something embedded in a mobile phone, tablet or computer.”

Although other promising biometric technologies and companies have made a splash in the security industry only to flame out a short time later, Martin believes that the approach his company is taking sets it apart from others.

“We’re really driven by our vision, which is to enable a really seamless user experience in a way that is still very secure. So many of the security products and the biometric technologies out there – it’s almost kind of like a solution looking for a problem,” said Martin. “Somebody comes up with a new method and says, ‘oh, we can use it like this,’ but the question is what really new are you enabling? In many cases, you’re talking about access control – whether it’s physical or logical access control. Fingerprint is still sort of the most common because it’s robust, people know it, they understand it, but the other technologies haven’t really brought anything new to the table. What we’re doing with this technology and bringing something new to the table is it’s not so much in the core technology itself using the ECG, it’s the marriage of that technology in a wearable form factor.”

Because the Nymi is wearable, Martin said that identity can be communicated wirelessly in a simpler, more convenient way than what’s previously been available.

“The person only has to do something when they put the device on, so they put it on, they become authenticated and then they can essentially forget about it,” he added. “We’ve had a somewhat consumer focus because we are very focused on a convenient user experience, but we found that we actually were able to achieve almost that Holy Grail, which is convenience plus security.”

Read More


Most of us secure our digital lives with passwords — hopefully different, strong passwords for each service we use. But the inherent weaknesses of passwords are becoming more apparent: this week, Evernote reset 50 million passwords after a breach, and that’s just the latest in a series of high-profile password-related gaffes from Twitter, Sony, Dropbox, Facebook, and others.

Isn’t security a problem that biometrics can solve? Biometric technologies allow access based on unique and immutable characteristics about ourselves — ideally something that can’t be stolen or faked. Although it’s often the stuff of spy movies, big businesses, governments, and even parts of academia have been using biometric authentication like fingerprints, voice recognition, and even facial scans to authenticate users for years.

So why aren’t the rest of us using these tools to guard our devices, apps, and data? It can’t be that hard … can it?

The myth of the fingerprints

For centuries (maybe even millennia), fingerprints have been used to verify identity, and fingerprint scanners have been options in mainstream business computers for about a decade. Typically, users swipe a finger over a narrow one-dimensional scanner, and the system compares the data to an authorized user. The process of scanning and matching is complex, but as the technology has evolved, accuracy has improved and costs have come down.
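
The "scanning and matching" described above typically comes down to comparing minutiae, the points where fingerprint ridges end or split. A minimal sketch, assuming minutiae have already been extracted as (x, y) pixel coordinates and that a simple distance tolerance stands in for real alignment logic:

```python
import math

def match_score(probe, template, tol=5.0):
    """Fraction of template minutiae that have a probe minutia within tol pixels."""
    matched = sum(
        any(math.dist(p, t) <= tol for p in probe)
        for t in template
    )
    return matched / len(template)

# Hypothetical minutiae: the probe is a slightly shifted scan missing one point.
template = [(10, 10), (30, 40), (55, 20)]
probe = [(11, 11), (29, 41), (80, 80)]
print(match_score(probe, template))  # 2 of 3 template minutiae matched
```

A real matcher also aligns the two prints for rotation and translation and weighs minutiae quality, which is where the complexity the article mentions comes in.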

But fingerprint readers have downsides. The most obvious are injuries like burns and cuts — imagine being locked out of your phone or computer for a week because a potholder slipped. Stains, ink, sunscreen, moisture, dirt, oils, and even lotion can interfere with fingerprint readers, and some fingerprints just can’t be scanned easily. I’m personally a good example — parts of my fingertips are worn smooth (or blistered) from playing instruments, but lots of people who work with their hands often have thin ridges (or none at all) on their fingers. Trained law-enforcement personnel could tease prints off me if I get hauled down to county, but my luck with fingerprint readers in notebooks is abysmal, and I can’t imagine using one on a phone — outside in the rain, since I live in Seattle.

So far, there’s been only one mainstream smartphone with a fingerprint reader — the Motorola Atrix, which mostly went nowhere along with Motorola’s Webtop technology. But things might be changing: for instance, Apple’s acquisition of AuthenTec last year has fueled speculation that future iPhones and iOS devices will offer fingerprint recognition, and long-time fingerprint tech firm Ultra-Scan says it has an ultrasonic reader 100 times more accurate than anything on the market.

“I believe the shift to using fingerprints in consumer electronics is set to happen very fast,” wrote Vance Bjorn, CTO of access management firm Digital Persona. “Passwords are widely regarded as the weakest link of security, and they are becoming even less convenient as consumers need to type them in on smartphones and via touch screens. The use of a fingerprint for authentication solves both problems — security and convenience.”

Your voice is your password

Voice authentication seems well-suited to smartphones: They’re already designed to handle the human voice, and include technologies like noise filtering and signal processing. Just as with fingerprint readers, voice authentication technology has existed for years in high-security facilities, but hasn’t broken into mainstream consumer electronics.

Approaches to speaker identification vary, but they all have to handle variations in speech, background noise, and differences in temperature and air pressure. Some systems compare speakers with pre-recorded phrases from an authorized user, like a PIN. More sophisticated systems perform complex calculations to work out the acoustic characteristics of a speaker’s vocal tract, and even compare speech patterns and style to determine if someone is (literally) who they say they are.
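
The simpler, template-based approach can be sketched as follows. The short feature vectors stand in for acoustic features such as MFCCs, and the enrollment samples and threshold are illustrative values rather than anything drawn from a production system:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def enroll(samples):
    """Average several feature vectors (e.g. MFCCs) into one voiceprint."""
    n = len(samples)
    return [sum(vec[i] for vec in samples) / n for i in range(len(samples[0]))]

def verify(voiceprint, probe, threshold=1.0):
    """Accept the speaker if the probe is close enough to the enrolled voiceprint."""
    return euclidean(voiceprint, probe) <= threshold

template = enroll([[1.0, 2.0, 0.5], [1.2, 1.8, 0.7]])
print(verify(template, [1.0, 2.0, 0.5]))  # genuine attempt: True
print(verify(template, [3.0, 0.2, 2.5]))  # impostor attempt: False
```

The fixed threshold is exactly what makes this approach brittle: a cold or a noisy room pushes a genuine speaker's features past it.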

The first approach is brittle, and can lock out users who have a cold or cough. Such systems are also vulnerable to recordings. Reformed hacker Kevin Mitnick has spoken of tricking a financial firm’s CEO into saying the numbers zero through nine, recording the numbers, and using them to bypass the bank’s phone-based voice authentication.

More sophisticated approaches can be computationally intensive. These days, the heavy lifting is often done at the receiving end — which means your biometric data gets transferred, often with no protection at all.

“Existing voice technology uses the equivalent of plaintext passwords,” said Manas Pathak, a researcher in privacy-preserving voice authentication who recently completed his Ph.D. at Carnegie Mellon University. “You’re giving part of your identity to the system.”

A voiceprint stored on a remote system can be stolen just like a password file. Moreover, voice data alone can reveal our gender and nationality — even our age and emotional state.

Recent developments aim to work around these problems. A new open standard being developed by the FIDO Alliance would support multiple authentication methods, but biometric data would never leave users’ devices. Pathak and other researchers have developed a sophisticated system that creates a stable representation of a user’s voice (talking about anything), divides it up into similar samples, then cryptographically protects them before performing any comparison. The remote systems never see users’ biometric data, and it can’t be recreated.

“We have about 95 percent accuracy,” said Pathak. “We would need to install it on more phones with wider deployment and testing. That’s outside the realm of academic research, but it’s ready for commercialization.”

Still, it’s hard to discount noise and environmental factors. Traffic noise, conversation, televisions, music, and other sounds can all reduce accuracy. There are times when voice recognition is simply not practical: Imagine someone on a bus or subway shouting at their phone to unlock it.

Face the face

Face recognition and iris scans are other common forms of biometric authentication — and could make sense for consumer electronics, since almost all our phones and tablets have high-resolution cameras.

Facial recognition systems work by noting the size, shape, and distances between landmarks on a face, like the eyes, jaw, nose, and cheekbones. Some systems also consider elements like wrinkles and moles, while some high-end gear constructs 3D models — those work even with profile views.
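
The landmark-distance idea is easy to make concrete. In this sketch, hypothetical 2-D landmark coordinates are turned into a feature vector of pairwise distances, normalized by the inter-eye distance so the same face photographed at different scales yields the same features:

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def landmark_features(points):
    """Pairwise landmark distances, normalized by inter-eye distance."""
    eye_dist = dist(points["left_eye"], points["right_eye"])
    names = sorted(points)
    return [
        dist(points[a], points[b]) / eye_dist
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

# Hypothetical pixel coordinates for four landmarks on one face,
# plus the same face photographed at twice the size.
face = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60), "jaw": (50, 90)}
zoomed = {k: (2 * x, 2 * y) for k, (x, y) in face.items()}

# Normalization makes the features scale-invariant: both versions match.
print(landmark_features(face) == landmark_features(zoomed))  # prints: True
```

Scale invariance is the easy part; the pose, lighting, and aging problems described below are what real systems spend their effort on.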

Most of us have seen face recognition technology on Facebook and in applications like Apple’s iPhoto — and the results are pretty uneven. Commercial face recognition systems are more robust, but they struggle with the same things that can make Facebook’s efforts laughable: crappy photos. Bad lighting, glasses, smiles, goofy expressions, hats, and even haircuts can cause problems. Even the best facial recognition systems struggle with angled images, and people’s faces can change radically with age, weight, medical conditions, and injury.

“A few years ago we had a high-end facial recognition system failing a senior engineer almost every time,” said a security coordinator for a Boston-area firm who didn’t want to be identified. “Why? He looks like a mad scientist, complete with thick glasses, crazy hair, and full beard. But I failed the same system myself after eye surgery when I wore a patch for a couple weeks.”

Iris recognition applies pattern matching technologies to the texture (not color) of a user’s iris, usually with a little help from infrared light.

Iris patterns are probably as unique as fingerprints — and they’re generally much more stable. Moreover, matching iris patterns doesn’t require tons of processing power and (in good conditions) has very low false-match rates.
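
That computational cheapness is easy to see: an iris code is a bit string, and matching is a fractional Hamming distance followed by a threshold test. The 10-bit codes below are illustrative toys (real iris codes run to thousands of bits), and while 0.32 is in the range commonly cited for Daugman-style systems, treat it as an assumption here:

```python
def hamming_distance(code_a, code_b):
    """Fraction of bit positions where two iris codes disagree."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def same_iris(code_a, code_b, threshold=0.32):
    """Below-threshold distance means the codes likely came from the same iris."""
    return hamming_distance(code_a, code_b) < threshold

enrolled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
probe_ok = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # 1 of 10 bits differs
probe_bad = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]  # every bit differs
print(same_iris(enrolled, probe_ok), same_iris(enrolled, probe_bad))  # True False
```

Because the comparison is just bit counting, it runs fast even against very large enrolled databases, which is one reason iris matching has been attractive for government identification programs.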

For years, iris recognition technology was largely locked up by key patents held by Iridian, but those patents expired a few years ago and the field has seen significant new development. However, much of it has been aimed at government-funded identification programs, rather than consumer applications.

Iris recognition also has pitfalls. Users would likely have to hold a device close to their face, with decent lighting and little to no motion. Most eyewear would have to be removed, and some drugs and medications can deform an iris pattern by dilating or constricting pupils — try passing an iris scan following an eye exam.

Just as voice authentication can be vulnerable to recordings, iris scanners can be fooled by quality photographs, or even contact lenses printed with fake irises. As a result, right now the technology is mostly used in human-supervised situations — like immigration and passport control — rather than automated systems.

Securing ourselves

There’s no doubt passwords are an increasingly feeble way to secure our digital lives, and biometric authentication technologies can lock things down using something we are rather than just something we know. However, none of these technologies are a magic bullet for digital security: They all fail for some people some of the time, and they all carry risks and vulnerabilities. Moreover, if biometric information is ever compromised, there may be no going back. After all, you can change your password, but good luck changing your thumbs.

Nonetheless, biometric technologies seem poised to move into consumer electronics soon, most likely in multi-factor systems offering a layer of security in addition to passwords.

“Passwords will never go away — ‘what you know’ will remain a critical tool for security,” noted Digital Persona’s Vance Bjorn. “But I do see the day soon where that tool is no longer viewed as sufficient for most services consumers or employees access.”

View Source

Disruptions: Smart Guns Can’t Kill in the Wrong Hands

Gun owners and advocates are fond of saying, “Guns don’t kill people, people kill people.”

This might be a more useful aphorism: Smart-guns don’t kill the wrong people.

Technology exists, or could exist, that would make guns safer. The idea of a safe gun might seem to be the ultimate oxymoron: guns are designed to kill. But something missing from the gun-control debate that has followed the killing of 20 children and six adults at an elementary school in Newtown, Conn., is the role of technology in preventing or at least limiting gun deaths.

Biometrics and grip pattern detection can sense the registered owner of a gun and allow only that person to fire it. For example, the iGun, made by Mossberg Group, cannot be fired unless its owner is wearing a ring with a chip that activates the gun.

But you would be hard pressed to find this technology on many weapons sold in stores. “The gun industry has no interest in making smart-guns. There is no incentive for them,” said Robert J. Spitzer, a professor of political science at SUNY Cortland and the author of four books on gun policy. “There is also no appetite by the government to press ahead with any kind of regulation requiring smart-guns.”

Why can we open our front doors with our iPhones and have cars that drive themselves, but we can’t make a gun that doesn’t fire unless its registered owner is using it?

“We can,” Dr. Spitzer said. “These safety options exist today. This is not Buck Rogers type of stuff.” But gun advocates are staunchly against these technologies, partly because so many guns are bought not in gun shops, but in private sales. “Many guns are bought and sold on the secondary market without background checks, and that kind of sale would be inhibited with fingerprinting-safety technologies in guns,” he said.

I called several major gun makers and the National Rifle Association. No one thinks a smart-gun will stop a determined killer. But I thought Smith & Wesson and Remington, for instance, would want to discuss how technology might help reduce accidental shootings, which killed 600 people and injured more than 14,000 in the United States in 2010. The gunmakers did not respond, and neither did the N.R.A.

A Wired magazine article from 2002 gives a glimpse of the N.R.A.’s thinking. “Mere mention of ‘smart-gun’ technology elicited sneers and snickers faster than a speeding bullet,” the magazine wrote. It quoted the N.R.A.’s executive vice president, Wayne LaPierre, as saying, “Tragic victims couldn’t have been saved by trigger locks or magazine bans or ‘smart-gun’ technology, or some new government commission running our firearms companies.”

After the massacre at Sandy Hook Elementary School in Newtown in December, Mr. LaPierre created a new aphorism: “The only thing that stops a bad guy with a gun is a good guy with a gun.” He said violent video games and movies were part of the problem, but he didn’t mention smart-guns as a solution.

TriggerSmart, an Irish company, has patented a childproof smart-gun. One feature is a “safe zone” that can be installed in schools and acts as a force field, disabling any TriggerSmart gun that enters a designated area. Robert McNamara, the company’s founder, has been trying to persuade gun makers to adopt the technology. He isn’t having much luck. “One gun manufacturer told us if we put this technology in one particular gun and some kid gets shot with another gun, then they will have to put them in all guns,” he said.

“We believe we could have helped prevent the Newtown massacre.”

View Source

Bay Street law firm uses fingerprint technology to monitor employees’ comings and goings

The days of sneaking out for three-hour lunch breaks will soon be over at a Bay Street law firm after it decided to install fingerprint-scanning technology to monitor its employees’ whereabouts.

Last month, McCague Borlack LLP announced plans for a revamped security system that will require staff (except lawyers who spend much of their time with clients) to clock in and out of the office with a finger swipe, keeping track of morning late-comers or those who try to jump-start their weekends by slipping out early on a Friday.

“Some people were abusing the system,” said founding partner Howard Borlack, 58. “We had people taking two to three hours for lunch and we had no way of knowing. . . . Some people were complaining.”

Other Toronto firms use security passes and honour systems to keep track of time worked. McCague Borlack, which focuses mostly on insurance law and employs about 200 people, has gone a step further with a system that not only provides office access via fingerprint, but also records employees as they enter and leave.

Come mid-November, when the system is expected to go live, the office will be equipped with finger-scanning machines supplied by Utah-based Qqest, Inc. that will keep a rolling record of the time spent in the office.

Read More