Evaluating the Use of Automated Facial Recognition Technology

Academics at Cardiff University have conducted the first independent academic evaluation of Automated Facial Recognition (AFR) technology across a variety of major policing operations.

The project by the Universities’ Police Science Institute evaluated South Wales Police’s deployment of Automated Facial Recognition across several major sporting and entertainment events in Cardiff over more than a year, including the UEFA Champions League Final and the Autumn Rugby Internationals.

The study found that while AFR can enable police to identify persons of interest and suspects where they would probably not otherwise have been able to do so, considerable investment and changes to police operating procedures are required to generate consistent results.

Researchers employed a number of research methods to develop a rich picture and systematically evaluate the use of AFR by police across multiple operational settings. This is important as previous research on the use of AFR technologies has tended to be conducted in controlled conditions. Using it on the streets and to support ongoing criminal investigations introduces a range of factors impacting the effectiveness of AFR in supporting police work.

The technology works in two modes: Locate is the live, real-time application that scans faces within CCTV feeds in an area. It searches for possible matches against a pre-selected database of facial images of individuals deemed to be persons of interest by the police.

Identify, on the other hand, takes still images of unidentified persons (usually captured via CCTV or mobile phone camera) and compares these against the police custody database in an effort to generate investigative leads. The research found that in 68 percent of submissions made by police officers in Identify mode, the image was not of sufficient quality for the system to work.
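The matching step in both modes can be pictured as a one-to-many comparison against a watchlist with a similarity threshold. The sketch below is illustrative only: the vector representation, threshold and function names are assumptions, not details of the South Wales Police system. Note that the output is a ranked candidate list for a human operator to review, not a final identification.

```python
# Minimal sketch of one-to-many face matching as decision support.
# Faces are assumed to be pre-encoded as numeric feature vectors.

MATCH_THRESHOLD = 0.6  # illustrative; real deployments tune this against error rates

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

def find_candidates(probe, watchlist):
    """Return watchlist entries similar enough to the probe face,
    highest score first, for a human operator to adjudicate."""
    scored = [(pid, cosine(probe, ref)) for pid, ref in watchlist.items()]
    hits = [(pid, s) for pid, s in scored if s >= MATCH_THRESHOLD]
    return sorted(hits, key=lambda h: h[1], reverse=True)
```

A poor-quality probe image yields a noisy feature vector that clears the threshold for nothing, which is one intuition for why so many low-quality Identify submissions defeated the system.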

Over the period of the evaluation, however, the accuracy of the technology improved significantly and police got better at using it. The Locate system was able to correctly identify a person of interest around 76 percent of the time. A total of 18 arrests were made in live Locate deployments during the evaluation, and in excess of 100 people were charged following investigative searches during the first eight to nine months of the AFR Identify operation (late July 2017 to March 2018).

The report suggests that it is more helpful to think of AFR in policing as ‘Assisted Facial Recognition’ rather than a fully ‘Automated Facial Recognition’ system. ‘Automated’ implies that the identification process is conducted solely by an algorithm, when in fact, the system serves as a decision-support tool to assist human operators in making identifications. Ultimately, decisions about whether a person of interest and an image match are made by police operators. It is also deployed in uncontrolled environments, and so is impacted by external factors including lighting, weather and crowd flows.

Read More

Study Details Link Between Social Media and Sex Trafficking

Social media is increasingly being exploited to contact, recruit and sell children for sex, according to a study by The University of Toledo Human Trafficking and Social Justice Institute.

The study, which was requested by the Ohio Attorney General’s Human Trafficking Commission, reveals how traffickers quickly target and connect with vulnerable children on the Internet through social media.

“It is vitally important to educate parents, professionals and youth – especially our middle school or teenage daughters who may be insecure – about the dangers of online predatory practices used by master manipulators,” said Dr. Celia Williamson, UT professor of social work and director of the UT Human Trafficking and Social Justice Institute. “Through this outreach and education, we can help save children from becoming victims of modern-day slavery.”

“We know predators are using the internet to find their victims, and this eye-opening study highlights what a predator looks for in a victim and helps parents recognize the signs that their child may be a target,” Ohio Attorney General Mike DeWine said. “Using real-life examples, this study provides valuable information that parents can use to start open and honest conversations with their children about staying safe online.”

Through a series of 16 in-depth interviews by the institute’s staff and student interns with knowledgeable members of Ohio law enforcement, judges, direct service providers, advocates and researchers who engaged with victims who were trafficked online, the study outlines how traffickers connect to vulnerable youth online, groom the children to form quicker relationships, avoid detection, and move the connections from online to in-person.

“The transition from messaging to meeting a trafficker in person is becoming less prevalent,” Williamson said. “As technology is playing a larger role in trafficking, this allows some traffickers to be able to exploit youth without meeting face-to-face. Social media helps to mask traditional cues that alert individuals to a potentially dangerous person.”

Williamson cites a 2018 report that says while 58 percent of victims eventually meet their traffickers face to face, 42 percent who initially met their trafficker online never met their trafficker in person and were still trafficked.

The experts, whose identities are not being released, said the traffickers educate themselves by studying what the victim posts on commonly used view-and-comment sites such as Facebook, Instagram or Snapchat, as well as dating apps such as Tinder, Blendr and Yellow, or webcam sites like Chatroulette and Monkey, in order to build trust.

“These guys, they learn about the girls and pretend to understand them, and so these girls, who are feeling not understood and not loved and not beautiful … these guys are very good at sort of pretending that they are all of these things and they really understand them and, ‘I know how you feel, you are beautiful,’ and just filling the hole that these girls are feeling,” said a professional contributing to the study.

Read More

Feds Can’t Force You To Unlock Your iPhone With Finger Or Face

A California judge has ruled that American cops can’t force people to unlock a mobile phone with their face or finger. The ruling goes further than any before in protecting people’s private lives from government searches, and is being hailed as a potentially landmark decision.

Previously, U.S. judges had ruled that police were allowed to force suspects to unlock devices like Apple’s iPhone with biometrics, such as fingerprints, faces or irises. That was despite the fact that feds weren’t permitted to force a suspect to divulge a passcode. But according to a ruling uncovered by Forbes, all logins are equal.

The order came from the U.S. District Court for the Northern District of California in the denial of a search warrant for an unspecified property in Oakland. The warrant was filed as part of an investigation into a Facebook extortion crime, in which a victim was asked to pay up or have an “embarrassing” video of them publicly released. The cops had some suspects in mind and wanted to raid their property. In doing so, the feds also wanted to open up any phone on the premises via facial recognition, a fingerprint or an iris.

While the judge agreed that investigators had shown probable cause to search the property, they didn’t have the right to open all devices inside by forcing unlocks with biometric features.

On the one hand, magistrate judge Kandis Westmore ruled the request was “overbroad” as it was “neither limited to a particular person nor a particular device.”

But in a more significant part of the ruling, Judge Westmore declared that the government did not have the right, even with a warrant, to force suspects to incriminate themselves by unlocking their devices with their biological features. Previously, courts had decided biometric features, unlike passcodes, were not “testimonial.” That was because a suspect would have to willingly and verbally give up a passcode, which is not the case with biometrics. A password was therefore deemed testimony, but body parts were not, and so not granted Fifth Amendment protections against self-incrimination.

Read More

Members of APT 10 Group Targeted Intellectual Property and Confidential Information

Two Chinese men have been charged in a massive, years-long hacking campaign that stole personal and proprietary information from companies around the world, the FBI and the Justice Department announced at a press conference today in Washington, D.C.

The men, Zhu Hua and Zhang Shilong, are part of a group known as Advanced Persistent Threat 10, or APT 10, a hacking group associated with the Chinese government. A New York grand jury indicted the pair for conspiracy to commit computer intrusion, conspiracy to commit wire fraud, and aggravated identity theft. The indictment was unsealed today.

According to the indictment, from around 2006 to 2018, APT 10 conducted extensive hacking campaigns, stealing information from more than 45 victim organizations, including American companies. Hundreds of gigabytes of sensitive data were secretly taken from companies in a diverse range of industries, such as health care, biotechnology, finance, manufacturing, and oil and gas.

FBI Director Christopher Wray described the list of companies, not named in the indictment, as a “Who’s Who” of the global economy. Even government agencies like NASA and the Department of Energy were among the victims. The hack is part of China’s ongoing efforts to steal intellectual property from other countries.

“Healthy competition is good for the global economy. Criminal conduct is not. Rampant theft is not. Cheating is not,” Wray said at the press conference.

APT 10 used “spear phishing” techniques to introduce malware onto targeted computers. The hackers sent emails that appeared to be from legitimate addresses but contained attachments that installed a program to secretly record all keystrokes on the machine, including user names and passwords. The group also targeted managed service providers (MSPs), companies that remotely manage their clients’ servers and networks. MSP hacks allowed APT 10 members to indirectly gain access to confidential data of numerous companies who were the clients of the MSPs.

Read More

Facebook shuts down 1 million accounts per day but can’t stop all ‘threats’

Menlo Park, California, Aug. 26, 2017 – Facebook turns off more than 1 million accounts a day as it struggles to keep spam, fraud and hate speech off its platform, its chief security officer says.

Still, the sheer number of interactions among its 2 billion global users means it can’t catch all “threat actors,” and it sometimes removes text posts and videos that it later finds didn’t break Facebook rules, says Alex Stamos.

“When you’re dealing with millions and millions of interactions, you can’t create these rules and enforce them without (getting some) false positives,” Stamos said during an onstage discussion at an event in San Francisco on Wednesday evening.

Stamos blames the pure technical challenges in enforcing the company’s rules — rather than the rules themselves — for the threatening and unsafe behavior that sometimes finds its way onto the site.

Facebook has faced critics who say its rules for removing content are too arbitrary and make it difficult to know what types of activity it will and won’t allow.

Political leaders in Europe this year have accused it of being too lax in allowing terrorists to use Facebook to recruit and plan attacks, while a U.S. Senate committee last year demanded to know its policies for removing fake news stories, after accusations it was arbitrarily removing posts by political conservatives.

Free speech advocates have also criticized its work.

“The work of (Facebook) take-down teams is not transparent,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, which advocates for free speech online.

“The rules are not enforced across the board. They reflect biases,” says Galperin, who shared the stage with Stamos at a public event that was part of Enigma Interviews, a series of cybersecurity discussions sponsored by the Advanced Computing Systems Association, better known as USENIX.

Stamos pushed back during the discussion, saying “it’s not just a bunch of white guys” who make decisions about what posts to remove.

Read More

Los Angeles to Screen Transit Passengers With Body Scanners

Los Angeles, CA, Aug. 15, 2018 – Los Angeles’s transit agency said Tuesday that it would become the first in the nation to screen its passengers with body scanners as they enter the public transit system — a bold effort to keep riders safer from terrorism and other evolving threats.

But officials said that riders need not worry that their morning commute would turn into the sort of security nightmare often found at airports or even sporting events. In a statement released Tuesday, transit officials said the portable screening devices they plan to deploy later this year will “quickly and unobtrusively” screen riders without forcing them to line up or stop walking.

“We’re looking specifically for weapons that have the ability to cause a mass casualty event,” Alex Wiggins, the chief security and law enforcement officer for the Los Angeles County Metropolitan Transportation Authority, said Tuesday, according to The Associated Press. “We’re looking for explosive vests, we’re looking for assault rifles. We’re not necessarily looking for smaller weapons that don’t have the ability to inflict mass casualties.”

The devices themselves resemble the sort of black laminate cases that musicians lug around on tour — not upright metal detectors. Dave Sotero, a spokesman for Metro, said the machines, which are on wheels, can detect suspicious items from 30 feet away and can scan more than 2,000 passengers per hour. The units can be pointed in the direction of riders as they come down an escalator or into a station.

“Most people won’t even know they’re being scanned, so there’s no risk of them missing their train service on a daily basis,” he said.

Mr. Sotero said the agency had purchased several of the units for about $100,000 each, but he would not specify exactly how many. He said that the authorities still needed to be trained on how to use the technology.

The county’s metro system has one of the largest riderships in the country, with 93 rail stations alone — and it is set to expand. Mr. Sotero said the new scanning units would mostly be deployed at random stations, but would certainly be used at major transit hubs and in places where large crowds are expected for marches, races and other events.

“There won’t be a deployment pattern that will be predictable,” he said. “They will go where they’re needed.”

Read More

Two malls are using facial recognition technology to track shoppers’ ages and genders

At least two Calgary malls are using facial recognition technology to track shoppers’ ages and genders without first notifying them or obtaining their explicit consent.

A visitor to Chinook Centre in south Calgary spotted a browser window that had seemingly accidentally been left open on one of the mall’s directories, exposing facial-recognition software that was running in the background of the digital map. They took a photo and posted it to the social networking site Reddit on Tuesday.

The mall’s parent company, Cadillac Fairview, said the software, which they began using in June, counts people who use the directory and predicts their approximate age and gender, but does not record or store any photos or video from the directory cameras.

Cadillac Fairview said the software is also used at Market Mall in northwest Calgary, and other malls nationwide.

“We don’t require consent, because we’re not capturing or retaining images,” a Cadillac Fairview spokesperson said.

The software could, for example, say approximately how many men in their 60s used the directory, but not store images of those men’s faces or collect any other biometric data, the spokesperson said.

Instead, they said the data is used in aggregate to understand directory usage patterns to “create a better shopper experience.”
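The aggregate-only design the spokesperson describes can be sketched as follows. The classifier, function names and categories here are hypothetical stand-ins, not Cadillac Fairview’s actual software; the key property illustrated is that each frame is analysed and then discarded, so only counts survive.

```python
from collections import Counter

def tally_visitors(frames, estimate_demographics):
    """Aggregate-only analytics: analyse each camera frame, then let it
    go; retain only (gender, age bracket) counts, never the images.

    estimate_demographics(frame) -> (gender, age_bracket) is a stand-in
    for whatever classifier the vendor actually runs.
    """
    counts = Counter()
    for frame in frames:
        gender, bracket = estimate_demographics(frame)
        counts[(gender, bracket)] += 1
        # the frame itself is never stored or written anywhere here
    return dict(counts)
```

Under this design an operator can answer questions like “how many men in their 60s used the directory?” without retaining any biometric data, which is the basis of the no-consent-needed argument quoted above.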

The use of facial recognition software in retail spaces is becoming commonplace to analyze shopper behaviour, sell targeted space to advertisers, or for security reasons like identifying shoplifters.

Read More

Researchers Create Framework to Stop Cyber Attacks

A new study by Maanak Gupta, doctoral candidate at The University of Texas at San Antonio, and Ravi Sandhu, Lutcher Brown Endowed Professor of computer science and founding executive director of the UTSA Institute for Cyber Security (ICS), examines the cybersecurity risks for new generations of smart vehicles, which includes both autonomous and internet-connected cars.

“Driverless and connected cars are increasingly becoming a part of our world, where cybersecurity threats are already a reality,” Sandhu said. “It’s imperative that we support research that addresses these concerns and presents a strong, innovative solution.”

Cars with internet connectivity, also known as “connected cars,” offer potential for many conveniences and innovations. They could allow for real-time and location-sensitive communication between drivers or even pedestrians, which could help make the roads safer for both. The connectivity could also allow cars to capture safety and environmental conditions around the vehicle, including road obstructions and accidents, and enables real-time vehicle-to-vehicle interaction on the road.

“Connected cars have almost infinite possibilities for creative technological applications,” Gupta said. “Companies could even take advantage of the connectivity to implement location-based marketing tactics, providing drivers with nearby sales and offers.”

However, the researchers caution that as soon as cars are exposed to internet-supported functionality, they are also open to the same cybersecurity threats that loom over other electronic devices, such as computers and cell phones. For this reason, Gupta and Sandhu created an authorization framework for connected cars, which provides a conceptual overview of the various access control decision and enforcement points needed for dynamic and short-lived interactions in the smart car ecosystem.

“There are vulnerabilities in every machine,” said Gupta. “We’re working to make sure someone doesn’t take advantage of those vulnerabilities and turn them into threats. The questions of ‘who do I trust?’ and ‘how do I trust?’ are still to be answered in smart cars.”
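One way to picture an access control decision point of the kind the framework describes is a default-deny policy check. The roles, operations and policy table below are purely illustrative and far simpler than the paper’s actual model:

```python
# Hypothetical policy table: which requester roles may invoke which
# vehicle operations. The UTSA framework is much more fine-grained.
POLICY = {
    "owner": {"unlock_doors", "start_engine", "read_diagnostics"},
    "mechanic": {"read_diagnostics"},
    "nearby_vehicle": {"exchange_safety_data"},
}

def authorize(role, operation):
    """Access control decision point: deny by default unless the policy
    explicitly grants this operation to the requester's role."""
    return operation in POLICY.get(role, set())
```

The default-deny posture answers Gupta’s “who do I trust?” question conservatively: an unknown requester, or a known one asking for an unlisted operation, gets nothing.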

Read More

Your face is your passport – Facial Recognition

Australia is a bloody long way from the rest of the world. Fly from Los Angeles to Sydney and you’ll be in the air for 13 hours. Tack on five more if you’re starting in New York. And if you’re coming from London, your feet won’t touch the ground for about a day.

The point being, by the time you land in Australia, you’ll be sick of traveling. You’ll want to get out of the airport and to the country’s excellent beaches as quickly as possible.

That’s why Australia’s Department of Home Affairs is at the forefront of smart border control technology. In 2007, the border agency introduced SmartGates, which read your passport, scan your face and verify who you are at the country’s eight major international airports. Built by Portugal’s Vision-Box, the gates get you out of the airport and into Australia with minimum fuss.

Australia wants to make that process even faster.

During May and June 2017, the country tested the world’s first “contactless” immigration technology at Canberra International Airport. The passport-free facial recognition system confirms a traveller’s identity by matching his or her face against stored data. A second trial is set to start in Canberra soon.

Biometrics aren’t just being used at border control. Sydney Airport has announced it’s teaming up with Qantas, Australia’s largest airline, to use facial recognition to simplify the departure process.

Under a new trial, passengers on select Qantas international flights can have their face and passport scanned at a kiosk when they check in. From then on, they won’t need to present their passport to Qantas staff — they’ll be able to simply scan their face at a kiosk when they drop off luggage, enter the lounge and board their flight at the gate. Travellers will still need to go through regular airport security and official immigration processing, but all of their dealings with Qantas can be handled with facial recognition.

Read More

Smartphone Fingerprint Scanner Gets a Heat-Sensing Upgrade

Fingerprint sensors—once a rarity—are now fairly common on smartphones. South Korean researchers have now given the fingerprint scanner an upgrade.

This new scanner is a transparent sensor array, meaning that it could be hidden underneath the display rather than accessed via a button. It can also check the temperature of the fingerprint pressing into it to add an extra layer of security, CNET reports.

So why would your phone need to detect your temperature? It’s not for your health. Instead, it helps ensure that someone else isn’t using a fake hand or some other form of artificial fingerprints to get access to your phone.

Researchers from the Samsung Display-UNIST Center at Ulsan National Institute of Science and Technology in South Korea published an article on Tuesday detailing how they developed the sensor.

“This fingerprint sensor array can be integrated with all transparent forms of tactile pressure sensors and skin temperature sensors, to enable the detection of a finger pressing on the display,” the researchers wrote.

The researchers also confirmed that the sensor does this at a resolution that satisfies the FBI’s criteria for extracting fingerprint patterns.
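The extra layer of security can be pictured as a conjunction of independent checks: the pattern must match, the touch must register pressure, and the temperature must look like live skin. The thresholds and function below are illustrative assumptions, not the published sensor’s actual logic:

```python
# Illustrative liveness check combining the three signals the array
# provides: fingerprint pattern, tactile pressure and skin temperature.
SKIN_TEMP_RANGE = (28.0, 38.0)  # degrees C; assumed bounds for live skin
MIN_PRESSURE = 0.1              # arbitrary units; a real touch presses down

def verify_touch(pattern_match, pressure, temperature_c):
    """Accept only when the fingerprint matches AND the touch registers
    pressure AND the temperature is plausible for live skin, which rules
    out a printed or moulded fake fingerprint at room temperature."""
    live_skin = SKIN_TEMP_RANGE[0] <= temperature_c <= SKIN_TEMP_RANGE[1]
    return pattern_match and pressure >= MIN_PRESSURE and live_skin
```

A perfect replica of a fingerprint pattern still fails here if it is presented at room temperature, which is the point of the heat-sensing upgrade.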

View Source