The Transportation Security Administration will conduct a short-term proof of concept at Las Vegas’ McCarran International Airport to examine how effective facial recognition technology could be at automating travelers’ identity verification, according to a recent publication from the Homeland Security Department.
For passengers who opt in, the agency will assess the technology’s capability to verify travelers’ live facial images taken at security checkpoints against the images on their identity documents.
“TSA expects that facial recognition may permit TSA personnel to focus on other critical tasks and expediting security processes—resulting in shorter lines and reduced wait times,” officials said in a privacy impact assessment regarding the proof of concept. “Biometric matching is also expected to increase TSA’s security effectiveness by improving the ability to detect impostors.”
The agency plans to use biometrics to identify 97% of travelers flying out of the country by 2022. Last year, TSA conducted an initial proof of concept at Los Angeles International Airport, comparing real-time facial images captured at biometric-enabled automated electronic security gates against the photos on passengers’ e-Passports.
Instead of using automated security gates in this pilot, TSA will use a Credential Authentication Technology device with a camera, or a CAT-C device, to authenticate passengers’ identity documents. The device also will collect the image and biographic information from those documents and capture live images of passengers’ faces. The ultimate goal is to ensure that biometrics work for verifying passengers.
“To participate, passengers will voluntarily choose to enter a lane dedicated to the proof of concept,” TSA said.
Ultimately, the agency plans to collect live photos of passengers’ faces, photos from travelers’ identity documents, identification document issuance and expiration dates, travel dates, the types of identification documents presented, the organizations that issued those documents, passengers’ years of birth, and the gender or sex listed on the identification documents.
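At the checkpoint, that comparison is a 1:1 match: the live capture scored against the photo read off the identity document. The sketch below is a minimal illustration of that verification step, not TSA’s or any vendor’s actual pipeline; it assumes both photos have already been converted into numeric face templates by some embedding model, and the 0.75 similarity threshold is an arbitrary example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face templates (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_template: np.ndarray, document_template: np.ndarray,
           threshold: float = 0.75) -> bool:
    """1:1 check: does the live capture match the photo on the ID document?

    Both templates are assumed to come from a face-embedding model (not shown
    here). The threshold trades false accepts (impostors waved through)
    against false rejects (genuine travelers flagged for manual review).
    """
    return cosine_similarity(live_template, document_template) >= threshold

# Made-up example templates: a near-duplicate verifies, an unrelated one does not.
rng = np.random.default_rng(0)
document = rng.normal(size=128)
same_person = document + rng.normal(scale=0.05, size=128)
different_person = rng.normal(size=128)
print(verify(same_person, document))       # True
print(verify(different_person, document))  # False
```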
Fabian Rogers was none too pleased when the landlord of his rent-stabilized Brooklyn high-rise announced plans to swap out key fobs for a facial recognition system.
He had so many questions: What happened if he didn’t comply? Would he be evicted? And as a young black man, he worried that his biometric data would end up in a police lineup without him ever being arrested. Most of the building’s tenants are people of color, he said, and they already are concerned about overpolicing in their New York neighborhood.
“There’s a lot of scariness that comes with this,” said Rogers, 24, who along with other tenants is trying to legally block his management company from installing the technology.
“You feel like a guinea pig,” Rogers said. “A test subject for this technology.”
Amid privacy concerns and recent research showing racial disparities in the accuracy of facial recognition technology, some city and state officials are proposing to limit its use.
Law enforcement officials say facial recognition software can be an effective crime-fighting tool, and some landlords say it could enhance security in their buildings. But civil liberties activists worry that vulnerable populations such as residents of public housing or rent-stabilized apartments are at risk for law enforcement overreach.
“This is a very dangerous technology,” said Reema Singh Guliani, senior legislative counsel for the American Civil Liberties Union. “Facial recognition is different from other technologies. You can identify someone from afar. They may never know. And you can do it on a massive scale.”
The earliest forms of facial recognition technology originated in the 1990s, and local law enforcement began using it in 2009. Today, its use has expanded to companies such as Facebook and Apple.
Such software uses biometrics to read the geometry of faces found in a photograph or video and compare the images to a database of other facial images to find a match. It’s used to verify personal identity — the FBI, for example, has access to 412 million facial images.
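That database search is a 1:N comparison: one probe image scored against every enrolled image, with the best match returned only if it clears a threshold. Here is a minimal Python sketch of the idea, assuming photos have already been reduced to numeric templates; the gallery structure and threshold are illustrative, not how any agency’s system actually works.

```python
import numpy as np

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    """1:N search: score a probe template against every enrolled template and
    return the best-scoring identity, or None if nothing clears the threshold.
    Gallery keys are identity labels; values are templates produced from
    enrolled photos by some face-embedding model (not shown)."""
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = float(np.dot(probe, template) /
                      (np.linalg.norm(probe) * np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```

At the scale of hundreds of millions of images, real systems replace this linear scan with approximate nearest-neighbor indexes, and the choice of threshold directly controls how often someone who is not in the database is nonetheless returned as a “match.”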
“Our industry certainly needs to do a better job of helping educate the public how the technology works and how it’s used,” said Jake Parker, senior director of government relations for the Security Industry Association, a trade association based in Silver Spring, Maryland.
“Any technology has the potential to be misused,” Parker said. “But in the United States, we have a number of constitutional protections that limit what the government can do.”
Technology and connected devices touch nearly every facet of modern life, and they often hold key evidence in criminal investigations. “Every single case now involves some sort of digital evidence,” said FBI Supervisory Special Agent Steven Newman, director of the New Jersey Regional Computer Forensics Laboratory (NJRCFL).
Digital evidence can be on any device and can follow subjects almost anywhere they traverse in the cyber world. As such, digital evidence is key in Internet-enabled crimes, but it is also critical in cases that range from terrorism to fraud.
In May 2018, three New Jersey men were sentenced to prison for conspiring to provide material support to the Islamic State of Iraq and al-Sham (ISIS), which is designated by the United States as a foreign terrorist organization. The FBI became aware of the men’s activities through a tip from an informant, according to Special Agent Suzanne Walsh with the FBI’s Newark Field Office.
Once that tip was deemed credible, digital evidence became key to investigating the men’s motives. The digital evidence left on the suspects’ computers and phones was crucial to showing criminal intent in the actions the men took—from planning to travel overseas to viewing ISIS propaganda online.
Alaa Saadeh, who was 24 at the time of sentencing, was given 15 years for his crimes. The evidence investigators uncovered showed he was actively planning to join ISIS and had supported his brother’s travel, both financially and logistically, to pursue that same goal.
Alaa’s brother, Nader Saadeh, 23 at sentencing, was given 10 years; and a third man, Samuel Rahamin Topaz, 24 at sentencing, was given eight years. The evidence showed all three had viewed ISIS materials, maps, and videos, including videos that depicted executions. Their communications also contained evidence that showed their desire to join ISIS and revealed some of the efforts they took to conceal their activities.
Depending on how you feel about having your privacy violated and getting scammed, you’re not going to like this latest information about Google.
Google Maps, which so many of us use to find locations and shop for services, is corrupted with false businesses, some of them scams, according to a lengthy Wall Street Journal investigation.
And Google Chrome, the Internet browser many of us switched to because it was faster and easier to use than Internet Explorer, is so cookie-friendly that the Washington Post calls it “surveillance software.”
There are ways around this.
Google Maps
First, let’s look at Google Maps.
Let’s say you need an emergency locksmith or a garage door repair company and you search Google. A map comes up as part of the search with virtual pins.
Only some of those pins aren’t for real businesses. They’re fronts for companies that ship leads to other companies, or, worse, they’re scam companies.
If you follow The Watchdog closely, this is not news to you. Two years ago, I shared the story of Shareen Grayson of Preston Hollow who unknowingly invited a convicted thief in to fix her freezer.
She found him on Google. A leads company had hijacked the phone number of a legitimate appliance business and passed it on to the thief.
Sad to say that two years later, Google hasn’t shut this scam down.
“The scams are profitable for nearly everyone involved,” the Wall Street Journal reports. “Google included. Consumers and legitimate businesses end up the losers.”
WSJ calls this “chronic deceit.”
Hundreds of thousands of false listings are posted to Google Maps and accompanying ads each month, the newspaper found.
Police and security forces around the world are testing out automated facial recognition systems as a way of identifying criminals and terrorists. But how accurate is the technology, and how easily could it – and the artificial intelligence (AI) it is powered by – become tools of oppression?
Imagine a suspected terrorist setting off on a suicide mission in a densely populated city centre. If he sets off the bomb, hundreds could die or be critically injured.
CCTV scanning faces in the crowd picks him up and automatically compares his features to photos on a database of known terrorists or “persons of interest” to the security services.
The system raises an alarm and rapid deployment anti-terrorist forces are despatched to the scene where they “neutralise” the suspect before he can trigger the explosives. Hundreds of lives are saved. Technology saves the day.
But what if the facial recognition (FR) tech was wrong? It wasn’t a terrorist, just someone unlucky enough to look similar. An innocent life would have been summarily snuffed out because we put too much faith in a fallible system.
What if that innocent person had been you?
This is just one of the ethical dilemmas posed by FR and the artificial intelligence underpinning it.
Training machines to “see” – to recognise and differentiate between objects and faces – is notoriously difficult. Computer vision, as it is sometimes called, was until recently struggling to tell the difference between a muffin and a chihuahua – a litmus test of this technology.
Agents with the Federal Bureau of Investigation and Immigration and Customs Enforcement have turned state driver’s license databases into a facial-recognition gold mine, scanning through millions of Americans’ photos without their knowledge or consent, newly released documents show.
Thousands of facial-recognition requests, internal documents and emails over the past five years, obtained through public-records requests by researchers with Georgetown Law’s Center on Privacy and Technology and provided to The Washington Post, reveal that federal investigators have turned state departments of motor vehicles databases into the bedrock of an unprecedented surveillance infrastructure.
Police have long had access to fingerprints, DNA and other “biometric data” taken from criminal suspects. But the DMV records contain the photos of a vast majority of a state’s residents, most of whom have never been charged with a crime.
Neither Congress nor state legislatures have authorized the development of such a system, and growing numbers of Democratic and Republican lawmakers are criticizing the technology as a dangerous, pervasive and error-prone surveillance tool.
“Law enforcement’s access of state databases,” particularly DMV databases, is “often done in the shadows with no consent,” House Oversight Committee Chairman Elijah E. Cummings (D-Md.) said in a statement to The Post.
A New York school district has finished installing a facial recognition system intended to spot potentially dangerous intruders, but state officials concerned about privacy say they want to know more before the technology is put into use.
Education Department spokeswoman Emily DeSantis said Monday that department employees plan to meet with Lockport City School officials about the system being tested this week. In the meantime, she said, the district has said it will not use facial recognition software while it checks other components of the system.
The rapidly developing technology has made its way into airports, motor vehicle departments, stores and stadiums, but is so far rare in public schools.
Lockport is preparing to bring its system online as cities elsewhere are considering reining in the technology’s use. San Francisco in May became the first U.S. city to ban its use by police and other city departments and Oakland is among others considering similar legislation.
A bill by Democratic Assembly Member Monica Wallace would create a one-year moratorium on the technology’s use in New York schools to allow lawmakers time to review it and draft regulations. The legislation is pending.
Lockport Superintendent Michelle Bradley, on the district’s website, said the district’s initial implementation of the system this week will include adjusting cameras mounted throughout the buildings and training staff members who will monitor them from a room in the high school. The system is expected to be fully online on Sept. 1.
It’s 3 a.m. Do you know what your iPhone is doing?
Mine has been alarmingly busy. Even though the screen is off and I’m snoring, apps are beaming out lots of information about me to companies I’ve never heard of. Your iPhone probably is doing the same — and Apple could be doing more to stop it.
On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.
And all night long, there was some startling behavior by a household name: Yelp. It was receiving a message that included my IP address, once every five minutes.
Our data has a secret life in many of the devices we use every day, from talking Alexa speakers to smart TVs. But we’ve got a giant blind spot when it comes to the data companies probing our phones.
You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, “What happens on your iPhone stays on your iPhone.” My investigation suggests otherwise.
iPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.
And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic. According to privacy firm Disconnect, which helped test my iPhone, those unwanted trackers would have spewed out 1.5 gigabytes of data over the span of a month. That’s half of an entire basic wireless service plan from AT&T.
“This is your data. Why should it even leave your phone? Why should it be collected by someone when you don’t know what they’re going to do with it?” says Patrick Jackson, a former National Security Agency researcher who is chief technology officer for Disconnect. He hooked my iPhone into special software so we could examine the traffic. “I know the value of data, and I don’t want mine in any hands where it doesn’t need to be,” he told me.
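Disconnect’s analysis boils down to capturing the phone’s outbound traffic and tallying where it goes and how much it sends. The Python sketch below shows only that tallying step, assuming the capture has already been exported to a CSV; the file name and column names are hypothetical, not Disconnect’s format.

```python
import csv
from collections import Counter

# Hypothetical export of captured outbound requests; the file name and column
# names are invented for illustration, not any particular tool's format.
requests_by_domain = Counter()
bytes_by_domain = Counter()

with open("iphone_traffic.csv", newline="") as f:
    for row in csv.DictReader(f):
        domain = row["destination_domain"]
        requests_by_domain[domain] += 1
        bytes_by_domain[domain] += int(row["bytes_sent"])

# Rank destinations by how often the phone phoned home overnight.
for domain, count in requests_by_domain.most_common(10):
    mb = bytes_by_domain[domain] / 1_000_000
    print(f"{domain:40s} {count:6d} requests  {mb:8.2f} MB sent")
```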
When it was revealed last month that a team of Amazon workers was tasked with listening to and reviewing Echo customers’ recordings—including those that customers never intended to record—the news sparked a flurry of criticism and concern regarding what this meant for the average consumer’s privacy.
At the same time, many were left unsurprised. Previous incidents, such as when an Amazon customer in Germany accidentally received someone else’s private Alexa recordings last year, have shown not only that the devices can record when least expected (such as when the user is in the shower, or having a private conversation) but also that these recordings can end up in unexpected hands.
This reality can leave users feeling that the device that helps them control their schedule, their music and even their home appliances isn’t completely within their control. In fact, the Echo can even be used against its owner—and may have the potential to send some users to prison.
As explained by Oxygen Forensics COO Lee Reiber in an interview with Forensic Magazine, when you live with an Alexa device, “it’s almost like your room is bugged.” Of course the “almost” is that Alexa isn’t necessarily always recording, but that doesn’t mean it only records when it’s supposed to either.
“We have a sample Alexa (…) that I utilize to do research on, and there is a lot of information on there. And I found several (recordings) that are specifically marked by Amazon as an error,” said Reiber, who has firsthand experience using Oxygen’s digital forensic tools to extract data from Echo devices. “I’m sitting there in my kitchen and I am talking to my wife, and it’s recording that information.”
Echo devices are meant to record what the user says to them after using a “wake word”—either “Echo,” “Amazon,” “computer” or the classic “Alexa,” depending on what the user prefers. The catch is that Alexa, which always has its microphone on, listening for that word, has a habit of mishearing other words or sounds as its wake word, causing it to activate and record the voices or noises that follow.
I’ve noticed this with my own Echo Dot device, which sometimes lights up blue on its own, or startles me with a robotic “I’m sorry, I didn’t catch that” when I never said anything to begin with. Reiber also said those kitchen conversations with his wife were recorded without permission from a wake word, and plenty of other users have reported similar experiences with accidentally waking up their all-hearing assistant.
As Reiber explained, Amazon typically marks unintentional recordings as an error, and in forensic tools like Oxygen’s extractor, they show up marked as discarded items, similar to files someone has deleted from their phone or computer but are still there in the device’s memory. And like these unseen “deleted” files that any skilled digital examiner can recover and view, those accidental recordings are still available to investigators in full—and have the potential to become valuable forensic evidence in a case.
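Oxygen’s extractor is proprietary, but the triage Reiber describes amounts to filtering an extraction for the items the device flagged as errors or discards. Purely as an illustration, here is what that filter might look like in Python over a hypothetical JSON export; the file name and field names are invented for the example.

```python
import json

# Hypothetical JSON export of an Echo extraction; the file name, structure,
# and field names are invented for illustration, not Amazon's or Oxygen's format.
with open("echo_extraction.json") as f:
    recordings = json.load(f)

# Items the device marked as discarded are often the accidental activations,
# so surface them first, oldest to newest, for review.
accidental = [r for r in recordings if r.get("status") == "discarded"]
for rec in sorted(accidental, key=lambda r: r["timestamp"]):
    print(rec["timestamp"], rec.get("transcript", "<no transcript>"), rec["audio_file"])
```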
“Because they are already recording, any of these types of IoT (internet of things) devices can be tremendous, because again, if it’s still listening, it could record, and the quality is fantastic,” said Reiber, who also has a law enforcement background. “It’s just a great recording of the person who’s actually speaking. So, someone could say, ‘Well, it wasn’t me, it wasn’t me talking.’ Well, no, it is, it’s an exact recording of your voice.”
In a world where an organization’s trade secrets can be compromised with a few clicks, identifying whether or not intellectual property (IP) theft took place can be a complex process for many reasons.
Since many IP theft perpetrators are internal staff, asking internal IT staff to investigate may introduce bias or conflicts of interest. Additionally, IT staff may not have the experience or training necessary to properly preserve the evidence gathered. Relying on an experienced digital forensics firm addresses both of these complexities, given its expertise and unbiased third-party standing.
The virtual nature of digital assets simplifies the IP theft process and complicates any investigation into wrongdoing, and these analyses do not fit neatly within the standard criminal investigation framework. All gathered materials should be shared with a digital forensics specialist, whose task is to determine whether the materials have probative value (i.e., relevance to the case in question). Digital forensics is uniquely suited to handling potential IP theft investigations.
Preservation is a key principle in IP theft investigations, just as it is at any other crime scene: everything should ideally stay as it was at the time of the crime, as security training firm the InfoSec Institute notes. Access to all affected devices should be blocked as soon as IP theft is suspected or discovered. Experienced analysts then systematically categorize and collect data to determine whether a crime occurred. Key materials can be damaged or destroyed if someone without a forensics background attempts to access the digital evidence; if someone intrudes without proper credentials, the evidence is essentially contaminated, which can lead to halted investigations, lost lawsuits, and failure to return the IP to its rightful owner.
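In practice, preservation starts with fixing a cryptographic fingerprint of every file before anyone works on it, so analysts can later prove nothing changed after collection. Below is a minimal Python sketch of that step, assuming the seized files have already been copied or imaged into a local directory; the directory name is illustrative.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 fingerprint of a file, read in chunks so large files are handled."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(evidence_dir: str, manifest: str = "manifest.sha256") -> None:
    """Record a hash for every file under the evidence directory.

    Re-hashing the files later and comparing against this manifest shows
    whether anything was altered after collection.
    """
    root = Path(evidence_dir)
    with open(manifest, "w") as out:
        for path in sorted(p for p in root.rglob("*") if p.is_file()):
            out.write(f"{hash_file(path)}  {path}\n")

write_manifest("seized_laptop_image")  # illustrative directory name
```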