The Transportation Security Administration will conduct a short-term proof of concept at Las Vegas’ McCarran International Airport to examine how effectively facial recognition technology could automate travelers’ identity verification, according to a recent publication from the Homeland Security Department.
For passengers who opt in, the agency will assess the technology’s capability to verify travelers’ live facial images taken at security checkpoints against the images on their identity documents.
“TSA expects that facial recognition may permit TSA personnel to focus on other critical tasks and expediting security processes—resulting in shorter lines and reduced wait times,” officials said in a privacy impact assessment regarding the proof of concept. “Biometric matching is also expected to increase TSA’s security effectiveness by improving the ability to detect impostors.”
The agency plans to use biometrics to identify 97% of travelers flying out of the country by 2022. Last year, TSA performed an initial proof of concept at Los Angeles International Airport, comparing real-time facial images captured at biometric-enabled automated electronic security gates against the photos on passengers’ e-Passports.
Instead of using automated security gates in this pilot, TSA will use a Credential Authentication Technology device with a camera, or a CAT-C device, to authenticate passengers’ identity documents. The device also will collect the image and biographic information from those documents and capture live images of passengers’ faces. The ultimate goal is to ensure that biometrics work for verifying passengers.
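For readers curious what this kind of “1:1” verification amounts to in practice, here is a minimal, purely illustrative sketch. It assumes, as such systems commonly do, that each face photo is reduced to a numeric embedding and that two embeddings are compared against a similarity threshold; the vectors, threshold, and function names here are invented for illustration and do not describe TSA’s actual system.

```python
# Illustrative 1:1 face verification: compare an embedding of the live
# checkpoint photo against an embedding of the ID-document photo and
# accept only if their similarity clears a threshold.
import math

def cosine_similarity(a, b):
    # Cosine similarity of two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(live_embedding, document_embedding, threshold=0.8):
    """Return True when the two face embeddings are similar enough to be
    treated as the same person (the threshold is a tunable assumption)."""
    return cosine_similarity(live_embedding, document_embedding) >= threshold

# Toy embeddings; in a real system these would come from a neural network.
live = [0.9, 0.1, 0.4]
doc_match = [0.88, 0.12, 0.41]
doc_other = [0.1, 0.9, 0.2]

print(verify(live, doc_match))   # similar vectors -> True
print(verify(live, doc_other))   # dissimilar vectors -> False
```

The threshold choice is the policy knob: raising it reduces the chance of accepting an impostor but increases the chance of rejecting the document’s rightful holder.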
“To participate, passengers will voluntarily choose to enter a lane dedicated to the proof of concept,” TSA said.
Ultimately, the agency plans to collect: live photos of passengers’ faces, photos from traveler documents, identification document issuance and expiration dates, travel dates, the types of identification documents presented, the organizations that issued those documents, passengers’ birth years, and the gender or sex listed on the identification documents.
Fabian Rogers was none too pleased when the landlord of his rent-stabilized Brooklyn high-rise announced plans to swap out key fobs for a facial recognition system.
He had so many questions: What happened if he didn’t comply? Would he be evicted? And as a young black man, he worried that his biometric data would end up in a police lineup without him ever being arrested. Most of the building’s tenants are people of color, he said, and they already are concerned about overpolicing in their New York neighborhood.
“There’s a lot of scariness that comes with this,” said Rogers, 24, who along with other tenants is trying to legally block his management company from installing the technology.
“You feel like a guinea pig,” Rogers said. “A test subject for this technology.”
Amid privacy concerns and recent research showing racial disparities in the accuracy of facial recognition technology, some city and state officials are proposing to limit its use.
Law enforcement officials say facial recognition software can be an effective crime-fighting tool, and some landlords say it could enhance security in their buildings. But civil liberties activists worry that vulnerable populations such as residents of public housing or rent-stabilized apartments are at risk for law enforcement overreach.
“This is a very dangerous technology,” said Reema Singh Guliani, senior legislative counsel for the American Civil Liberties Union. “Facial recognition is different from other technologies. You can identify someone from afar. They may never know. And you can do it on a massive scale.”
The earliest forms of facial recognition technology originated in the 1990s, and local law enforcement began using it in 2009. Today, its use has expanded to companies such as Facebook and Apple.
Such software uses biometrics to read the geometry of faces found in a photograph or video and compare the images to a database of other facial images to find a match. It’s used to verify personal identity — the FBI, for example, has access to 412 million facial images.
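The “compare against a database” step described above is a 1:N search: a probe image is matched against every enrolled face and the closest candidate is returned, if any is close enough. The following sketch is hypothetical (the gallery names, vectors, and distance cutoff are invented), but it shows the basic shape of such a search over face embeddings.

```python
# Illustrative 1:N identification: scan a gallery of enrolled face
# embeddings and report the closest match, or None when nothing in the
# gallery is close enough to the probe.
import math

def distance(a, b):
    # Euclidean distance between two equal-length embeddings.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, max_distance=0.5):
    """Return the name of the nearest gallery entry within max_distance,
    or None if no entry is confidently close."""
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        d = distance(probe, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= max_distance else None

# Invented example gallery of enrolled embeddings.
gallery = {
    "person_a": [0.1, 0.9, 0.3],
    "person_b": [0.8, 0.2, 0.5],
}

print(identify([0.82, 0.18, 0.52], gallery))  # close to person_b
print(identify([0.5, 0.5, 0.9], gallery))     # no confident match -> None
```

Note the scale problem implicit in the FBI figure above: a linear scan like this over hundreds of millions of images is why real deployments lean on specialized indexing, and why false matches at scale worry civil liberties groups.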
“Our industry certainly needs to do a better job of helping educate the public how the technology works and how it’s used,” said Jake Parker, senior director of government relations for the Security Industry Association, a trade association based in Silver Spring, Maryland.
“Any technology has the potential to be misused,” Parker said. “But in the United States, we have a number of constitutional protections that limit what the government can do.”
Technology and connected devices touch nearly every facet of modern life, and they often hold key evidence in criminal investigations. “Every single case now involves some sort of digital evidence,” said FBI Supervisory Special Agent Steven Newman, director of the New Jersey Regional Computer Forensics Laboratory (NJRCFL).
Digital evidence can reside on any device and can follow subjects almost anywhere they go in the cyber world. As such, digital evidence is key in Internet-enabled crimes, but it is also critical in cases that range from terrorism to fraud.
In May 2018, three New Jersey men were sentenced to prison for conspiring to provide material support to the Islamic State of Iraq and al-Sham (ISIS), which is designated by the United States as a foreign terrorist organization. The FBI became aware of the men’s activities through a tip from an informant, according to Special Agent Suzanne Walsh with the FBI’s Newark Field Office.
Once that tip was deemed credible, digital evidence became key to investigating the men’s motives. The digital evidence left on the suspects’ computers and phones was crucial to showing criminal intent in the actions the men took—from planning to travel overseas to viewing ISIS propaganda online.
Alaa Saadeh, who was 24 at the time of sentencing, was given 15 years for his crimes. The evidence investigators uncovered showed he was actively planning to join ISIS and had supported his brother’s travel, both financially and logistically, to pursue that same goal.
Alaa’s brother, Nader Saadeh, 23 at sentencing, was given 10 years; and a third man, Samuel Rahamin Topaz, 24 at sentencing, was given eight years. The evidence showed all three had viewed ISIS materials, maps, and videos, including videos that depicted executions. Their communications also contained evidence that showed their desire to join ISIS and revealed some of the efforts they took to conceal their activities.
Depending on how you feel about having your privacy violated and getting scammed, you’re not going to like this latest information about Google.
Google Maps, which so many of us use to find locations and shop for services, is corrupted with false businesses, some of them scams, according to a lengthy Wall Street Journal investigation.
And Google Chrome, the Internet browser many of us switched to because it was faster and easier to use than Internet Explorer, is so cookie-friendly that the Washington Post calls it “surveillance software.”
There are ways around this.
Google Maps
First, let’s look at Google Maps.
Let’s say you need an emergency locksmith or a garage door repair company and you search Google. A map comes up as part of the search with virtual pins.
Only some of those pins aren’t for real businesses. They’re fronts for companies that ship leads to other companies, or, worse, they’re scam companies.
If you follow The Watchdog closely, this is not news to you. Two years ago, I shared the story of Shareen Grayson of Preston Hollow who unknowingly invited a convicted thief in to fix her freezer.
She found him on Google. A leads company had hijacked the phone number of a legitimate appliance business and passed it on to the thief.
Sad to say that two years later, Google hasn’t shut this scam down.
“The scams are profitable for nearly everyone involved,” the Wall Street Journal reports. “Google included. Consumers and legitimate businesses end up the losers.”
WSJ calls this “chronic deceit.”
Hundreds of thousands of false listings are posted to Google Maps and accompanying ads each month, the newspaper found.
Police and security forces around the world are testing automated facial recognition systems as a way of identifying criminals and terrorists. But how accurate is the technology, and how easily could it – and the artificial intelligence (AI) it is powered by – become tools of oppression?
Imagine a suspected terrorist setting off on a suicide mission in a densely populated city centre. If he sets off the bomb, hundreds could die or be critically injured.
CCTV scanning faces in the crowd picks him up and automatically compares his features to photos on a database of known terrorists or “persons of interest” to the security services.
The system raises an alarm and rapid deployment anti-terrorist forces are despatched to the scene where they “neutralise” the suspect before he can trigger the explosives. Hundreds of lives are saved. Technology saves the day.
But what if the facial recognition (FR) tech was wrong? It wasn’t a terrorist, just someone unlucky enough to look similar. An innocent life would have been summarily snuffed out because we put too much faith in a fallible system.
What if that innocent person had been you?
This is just one of the ethical dilemmas posed by FR and the artificial intelligence underpinning it.
Training machines to “see” – to recognise and differentiate between objects and faces – is notoriously difficult. Not so long ago, computer vision, as it is sometimes called, was struggling to tell the difference between a muffin and a chihuahua – a litmus test of this technology.
Agents with the Federal Bureau of Investigation and Immigration and Customs Enforcement have turned state driver’s license databases into a facial-recognition gold mine, scanning through millions of Americans’ photos without their knowledge or consent, newly released documents show.
Thousands of facial-recognition requests, internal documents and emails over the past five years, obtained through public-records requests by researchers with Georgetown Law’s Center on Privacy and Technology and provided to The Washington Post, reveal that federal investigators have turned state departments of motor vehicles databases into the bedrock of an unprecedented surveillance infrastructure.
Police have long had access to fingerprints, DNA and other “biometric data” taken from criminal suspects. But the DMV records contain the photos of a vast majority of a state’s residents, most of whom have never been charged with a crime.
Neither Congress nor state legislatures have authorized the development of such a system, and growing numbers of Democratic and Republican lawmakers are criticizing the technology as a dangerous, pervasive and error-prone surveillance tool.
“Law enforcement’s access of state databases,” particularly DMV databases, is “often done in the shadows with no consent,” House Oversight Committee Chairman Elijah E. Cummings (D-Md.) said in a statement to The Post.
The tech entrepreneur Ross McNutt wants to spend three years recording outdoor human movements in a major U.S. city, KMOX news radio reports.
If that sounds too dystopian to be real, you’re behind the times. McNutt, who runs Persistent Surveillance Systems, was inspired by his stint in the Air Force tracking Iraqi insurgents. He tested mass-surveillance technology over Compton, California, in 2012. In 2016, the company flew over Baltimore, feeding information to police for months (without telling city leaders or residents) while demonstrating how the technology works to the FBI and Secret Service.
The goal is noble: to reduce violent crime.
There’s really no telling whether surveillance of this sort has already been conducted over your community as private and government entities experiment with it. If I could afford the hardware, I could legally surveil all of Los Angeles just for kicks.
And now a billionaire donor wants to help Persistent Surveillance Systems monitor the residents of an entire high-crime municipality for an extended period of time. McNutt told KMOX that it may be Baltimore, St. Louis, or Chicago.
McNutt’s technology is straightforward: A fixed-wing plane outfitted with high-resolution video cameras circles for hours on end, recording everything in large swaths of a city. One can later “rewind” the footage, zoom in anywhere, and see exactly where a person came from before or went after perpetrating a robbery or drive-by shooting … or visiting an AA meeting, a psychiatrist’s office, a gun store, an abortion provider, a battered-women’s shelter, or an HIV clinic. On the day of a protest, participants could be tracked back to their homes.
In the timely new book Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All, the author Arthur Holland Michel talks with people working on this category of technology and concludes, “Someday, most major developed cities in the world will live under the unblinking gaze of some form of wide-area surveillance.”
At first, he says, the sheer amount of data will make it impossible for humans in any city to examine everything that is captured on video. But efforts are under way to use machine learning and artificial intelligence to “understand” more. “If a camera that watches a whole city is smart enough to track and understand every target simultaneously,” he writes, “it really can be said to be all-seeing.”
The “arms race” of mobile forensics – ever-tougher encryption and the breakneck operations to crack it – has become more of a public tug-of-war than ever before.
Cellebrite, the largest player in the mobile-forensics industry, unveiled its UFED Premium last Friday. Along with the announcement came the bombshell: it can now get into any Apple iOS device and many high-end Android devices.
“An exclusive solution for law enforcement to unlock and extract data from all iOS and Android devices,” the company said in a tweet.
Those devices have historically been the toughest to crack – and Cellebrite’s newfound ability to perform a full-file system extraction on any iOS device in particular would allow law enforcement “to get much more data than what is possible through logical extractions and other conventional means.”
“Our certified forensic experts can also help you gain access to sensitive mobile evidence from several locked, encrypted or damaged iOS and Android devices using advanced in-lab only techniques,” the company added in its Friday announcement.
The latest tool works on Apple devices running anything from iOS 7 to iOS 12.3, according to the company. Among the Android devices covered are the Samsung S6, S7, S8, and S9. Also supported are the most popular models from Motorola, Huawei, LG and Xiaomi.
The announcement follows the highly publicized breakthrough of the GrayKey devices made by Grayshift more than a year ago. The GrayKey tool had exploited a low-power loophole in some iOS systems, one expert explained to Forensic Magazine. But Apple put in a fix late last year to stop the access, requiring the iOS device to reconnect with a home device. Since then, GrayKey has made some inroads on some Apple devices – but not all of them, according to experts.
Nearly half a million Alabama cell phone numbers received identical text messages in 2015 telling them to click a link to “verify” their bank account information. The link took recipients to a realistic-looking bank website where they typed in their personal financial information.
But the link was not the actual bank’s website—it was part of a phishing scam. Just like phishing messages sent over email, the text message-based scam was easy to fall for. The web address was only one character off from the bank’s actual web address.
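A one-character-off web address is exactly the kind of lookalike that a simple edit-distance check can flag. The sketch below is illustrative only – the domain names are made up, and real anti-phishing filters weigh many more signals – but it shows how “one character off from the real address” can be detected mechanically.

```python
# Flag domains within edit distance 1 of a trusted domain – the kind of
# one-character-off lookalike used in the Alabama phishing scheme.
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def looks_like_spoof(domain, trusted="examplebank.com"):
    """True when `domain` is not the trusted domain but differs from it
    by at most one character (an invented heuristic for illustration)."""
    return domain != trusted and edit_distance(domain, trusted) <= 1

print(looks_like_spoof("examplebenk.com"))  # one substitution -> True
print(looks_like_spoof("examplebank.com"))  # the real domain -> False
```

The same idea underlies the takeaway for consumers: a link that looks right at a glance can still be a single keystroke away from a scam site, so typing the bank’s address directly is safer than clicking.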
While most recipients appeared to ignore the message, around 50 people clicked on the link and provided their personal information. The website asked for account numbers, names, and ZIP codes, along with their associated debit card numbers, security codes, and PINs. Within an hour, the fraudster had made himself debit cards with the victims’ account information. He then began to withdraw money from various ATMs, stealing whatever the daily ATM maximum was from each account.
“It was a fairly legitimate-looking website, other than the information it was asking for,” said Special Agent Jake Frith of the Alabama Attorney General’s Office, who worked the case along with investigators from the FBI’s Mobile Field Office.
The fraudster, Iosif Florea, stole about $18,000 (including ATM fees), with losses from each individual account ranging from $20 to $800. (Banks typically reimburse customers who are victims of fraud.)
Investigators believe Florea bought a large list of cell phone numbers from a marketing company, and he only needed a few victims out of thousands of phone numbers for the scheme to be successful.
The damage was minimized, however, because of the bank’s quick response. As soon as customers reported the fraud, the bank reached out to federal authorities as well as the local media to alert the community to the fraudulent messages.
A New York school district has finished installing a facial recognition system intended to spot potentially dangerous intruders, but state officials concerned about privacy say they want to know more before the technology is put into use.
Education Department spokeswoman Emily DeSantis said Monday that department employees plan to meet with Lockport City School officials about the system being tested this week. In the meantime, she said, the district has said it will not use facial recognition software while it checks other components of the system.
The rapidly developing technology has made its way into airports, motor vehicle departments, stores and stadiums, but is so far rare in public schools.
Lockport is preparing to bring its system online as cities elsewhere are considering reining in the technology’s use. San Francisco in May became the first U.S. city to ban its use by police and other city departments and Oakland is among others considering similar legislation.
A bill by Democratic Assembly Member Monica Wallace would create a one-year moratorium on the technology’s use in New York schools to allow lawmakers time to review it and draft regulations. The legislation is pending.
Lockport Superintendent Michelle Bradley, on the district’s website, said the district’s initial implementation of the system this week will include adjusting cameras mounted throughout the buildings and training staff members who will monitor them from a room in the high school. The system is expected to be fully online on Sept. 1.