School’s Plan for Facial Recognition System Raises Concerns

A New York school district has finished installing a facial recognition system intended to spot potentially dangerous intruders, but state officials concerned about privacy say they want to know more before the technology is put into use.

Education Department spokeswoman Emily DeSantis said Monday that department employees plan to meet with Lockport City School officials about the system being tested this week. In the meantime, she said, the district has said it will not use facial recognition software while it checks other components of the system.

The rapidly developing technology has made its way into airports, motor vehicle departments, stores and stadiums, but is so far rare in public schools.

Lockport is preparing to bring its system online as cities elsewhere are considering reining in the technology’s use. San Francisco in May became the first U.S. city to ban its use by police and other city departments, and Oakland is among other cities considering similar legislation.

A bill by Democratic Assembly Member Monica Wallace would create a one-year moratorium on the technology’s use in New York schools to allow lawmakers time to review it and draft regulations. The legislation is pending.

Lockport Superintendent Michelle Bradley, on the district’s website, said the district’s initial implementation of the system this week will include adjusting cameras mounted throughout the buildings and training staff members who will monitor them from a room in the high school. The system is expected to be fully online on Sept. 1.

It’s the middle of the night. Do you know who your iPhone is talking to?

It’s 3 a.m. Do you know what your iPhone is doing?

Mine has been alarmingly busy. Even though the screen is off and I’m snoring, apps are beaming out lots of information about me to companies I’ve never heard of. Your iPhone probably is doing the same — and Apple could be doing more to stop it.

On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.

And all night long, there was some startling behavior by a household name: Yelp. It was receiving a message that included my IP address — once every five minutes.

Our data has a secret life in many of the devices we use every day, from talking Alexa speakers to smart TVs. But we’ve got a giant blind spot when it comes to the data companies probing our phones.

You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, “What happens on your iPhone stays on your iPhone.” My investigation suggests otherwise.

iPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s The Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic. According to privacy firm Disconnect, which helped test my iPhone, those unwanted trackers would have spewed out 1.5 gigabytes of data over the span of a month. That’s half of an entire basic wireless service plan from AT&T.

“This is your data. Why should it even leave your phone? Why should it be collected by someone when you don’t know what they’re going to do with it?” says Patrick Jackson, a former National Security Agency researcher who is chief technology officer for Disconnect. He hooked my iPhone into special software so we could examine the traffic. “I know the value of data, and I don’t want mine in any hands where it doesn’t need to be,” he told me.

Not Only Can Alexa Eavesdrop — She Can Also Testify Against You

When it was revealed last month that a team of Amazon workers was tasked with listening to and reviewing Echo customers’ recordings—including some that customers never intended to record—the news sparked a flurry of criticism and concern about what this meant for the average consumer’s privacy.

At the same time, many were left unsurprised. Previous incidents, such as when an Amazon customer in Germany accidentally received someone else’s private Alexa recordings last year, have shown not only that the devices can record when least expected (such as when the user is in the shower, or having a private conversation) but also that these recordings can end up in unexpected hands.

This reality can leave users feeling that the device that helps them control their schedule, their music and even their home appliances isn’t completely within their control. In fact, the Echo can even be used against its owner—and may have the potential to send some users to prison.

As explained by Oxygen Forensics COO Lee Reiber in an interview with Forensic Magazine, when you live with an Alexa device, “it’s almost like your room is bugged.” Of course the “almost” is that Alexa isn’t necessarily always recording, but that doesn’t mean it only records when it’s supposed to either.

“We have a sample Alexa (…) that I utilize to do research on, and there is a lot of information on there. And I found several (recordings) that are specifically marked by Amazon as an error,” said Reiber, who has firsthand experience using Oxygen’s digital forensic tools to extract data from Echo devices. “I’m sitting there in my kitchen and I am talking to my wife, and it’s recording that information.”

Echo devices are meant to record what the user says to them after a “wake word”—either “Echo,” “Amazon,” “computer” or the classic “Alexa,” depending on what the user prefers. The catch is that Alexa, which always has its microphone on listening for that word, has a habit of mishearing other words or sounds as its wake word, causing it to activate and record the voices or noises that follow.
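
The always-listening loop described above can be sketched in a few lines. This is a toy illustration, not Amazon's implementation: real wake-word detectors score acoustic features, while this version scores text tokens with a simple string-similarity measure, and the threshold value is an invented assumption. It still shows why a near-miss word can cross the trigger threshold and cause a false activation.

```python
# Toy sketch of wake-word detection (not Amazon's actual algorithm).
# A detector continuously scores what it "hears" against known wake
# words and starts recording when a score crosses a threshold.
from difflib import SequenceMatcher

WAKE_WORDS = {"alexa", "echo", "amazon", "computer"}
THRESHOLD = 0.7  # hypothetical sensitivity setting

def similarity(a: str, b: str) -> float:
    """Crude stand-in for an acoustic match score, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def should_wake(heard: str) -> bool:
    # Trigger if any wake word is close enough to what was heard.
    return any(similarity(heard, w) >= THRESHOLD for w in WAKE_WORDS)

print(should_wake("alexa"))   # → True (exact match)
print(should_wake("alexia"))  # → True (near-miss also triggers)
print(should_wake("banana"))  # → False (unrelated word)
```

The false-positive case is the point: any detector tuned to never miss a real "Alexa" will inevitably accept some sounds that merely resemble it.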

I’ve noticed this with my own Echo Dot device, which sometimes lights up blue on its own, or startles me with a robotic “I’m sorry, I didn’t catch that” when I never said anything to begin with. Reiber also said those kitchen conversations with his wife were recorded without permission from a wake word, and plenty of other users have reported similar experiences with accidentally waking up their all-hearing assistant.

As Reiber explained, Amazon typically marks unintentional recordings as an error, and in forensic tools like Oxygen’s extractor, they show up marked as discarded items, similar to files someone has deleted from their phone or computer but are still there in the device’s memory. And like these unseen “deleted” files that any skilled digital examiner can recover and view, those accidental recordings are still available to investigators in full—and have the potential to become valuable forensic evidence in a case.

“Because they are already recording, any of these types of IoT (internet of things) devices can be tremendous, because again, if it’s still listening, it could record, and the quality is fantastic,” said Reiber, who also has a law enforcement background. “It’s just a great recording of the person who’s actually speaking. So, someone could say, ‘Well, it wasn’t me, it wasn’t me talking.’ Well, no, it is, it’s an exact recording of your voice.”

New Forensic Technology Examines Fingerprints Once Considered Too Old

CARLISLE, Pa. (WHTM) - New technology allows investigators to examine fingerprints that were once considered too old or compromised to analyze.

A vacuum metal deposition instrument is now in the hands of Cumberland County to better collect fingerprints and DNA. This equipment is only the second of its kind in Pennsylvania and one of 14 in the entire country.

“Gold will deposit on the substrate and then I will run zinc, and zinc doesn’t attach anywhere else except to a different metal and then it will attach to the gold that I’ve placed,” said Carol McCandless, the lead forensic investigator for Cumberland County.

The vacuum sucks all the air and water out of a chamber, then the machine coats the evidence with a very thin metal film under a high vacuum, all done in less than five minutes.

“The metallic substances don’t land on the top of the ridges of anything. It goes in between so that the top of the ridge is touched and that’s where the DNA is,” said Skip Ebert, Cumberland County District Attorney.

This machine recovers fingerprints from items that were previously difficult or impossible to process, things like paper, waxy substances, and clothing.

“What we did in this machine of the actual victim’s face being suffocated and on the opposite side of the pillowcase, the actual hands that were pushing it down on his face. You cannot beat that kind of evidence anywhere,” said Ebert.

The technology doesn’t just help current and future cases.

“I have received several cold cases, one from 1983 and one from 1995,” said McCandless.

The Cumberland County Forensic Lab is expected to be fully accredited in the fall, thanks largely to this new technology, made possible through a grant from the Pennsylvania Commission on Crime and Delinquency.

The Critical Role of Digital Forensics in IP Theft Litigation

In a world where an organization’s trade secrets can be compromised with a few clicks, identifying whether or not intellectual property (IP) theft took place can be a complex process for many reasons.

Since many IP theft perpetrators are internal staff, asking internal IT staff to investigate may raise issues of bias or conflicts of interest. Additionally, IT staff may not have the experience or training necessary to properly preserve the gathered evidence. Relying on an experienced digital forensics firm addresses both of these complexities, given its expertise and unbiased third-party standing.

The virtual nature of digital assets simplifies the act of IP theft and complicates any investigation into wrongdoing. These analyses also cannot be understood within the standard criminal investigation framework. All gathered materials should be shared with a digital forensic specialist, whose task is to determine whether the materials have probative value (i.e., relevance for the case in question). Digital forensics offers a unique way to handle potential IP theft investigations.

Preservation is a key principle in IP theft investigations, just as it is with any other crime scene: everything ideally stays as it was at the time of the crime, as indicated by security training firm the InfoSec Institute. Access to all affected devices should be blocked as soon as IP theft is suspected or discovered. Experienced analysts then systematically categorize and collect data to better understand whether a crime occurred. Key materials can be damaged or destroyed if someone without a forensics background attempts to access the digital evidence. If someone intrudes without proper credentials, the evidence is essentially contaminated, which may lead to halted investigations, lost lawsuits, and failure to return the IP to its rightful owner.
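
The preservation principle above is commonly enforced with cryptographic hashing: record a hash of each evidence file before anyone examines it, and re-hash later to prove nothing changed. The sketch below is a minimal illustration of that idea, not any particular firm's toolchain, and the file names are hypothetical.

```python
# Minimal sketch of evidence preservation via hashing (illustrative).
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large disk images fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_custody(paths: list[Path]) -> dict[str, str]:
    # The resulting manifest is stored separately from the evidence.
    return {str(p): sha256_of(p) for p in paths}

def verify_custody(manifest: dict[str, str]) -> bool:
    # True only if every file still matches its original hash.
    return all(sha256_of(Path(p)) == h for p, h in manifest.items())
```

If `verify_custody` ever returns False, the evidence can no longer be shown to be in its original state — exactly the contamination problem the article describes.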

Half a Face Enough for Recognition Technology

Facial recognition technology works even when only half a face is visible, researchers from the University of Bradford have found.

Using artificial intelligence techniques, the team achieved 100 percent recognition rates for both three-quarter and half faces. The study, published in Future Generation Computer Systems, is the first to use machine learning to test the recognition rates for different parts of the face.

Lead researcher, Professor Hassan Ugail from the University of Bradford said: “The ability humans have to recognise faces is amazing, but research has shown it starts to falter when we can only see parts of a face. Computers can already perform better than humans in recognising one face from a large number, so we wanted to see if they would be better at partial facial recognition as well.”

The team used a machine learning technique known as a “convolutional neural network,” drawing on a feature extraction model called VGG—one of the most popular and widely used for facial recognition.

They worked with a dataset containing multiple photos—2,800 in total—of 200 students and staff from FEI University in Brazil, with equal numbers of men and women.

For the first experiment, the team trained the model using only full facial images. They then ran an experiment to see how well the computer was able to recognize faces, even when shown only part of them. The computer recognized full faces 100 percent of the time, but the team also had 100 percent success with three-quarter faces and with the top or right half of the face. However, the bottom half of the face was only correctly recognized 60 percent of the time, and eyes and nose on their own just 40 percent.

They then ran the experiment again, after training the model using partial facial images as well. This time, the scores significantly improved for the bottom half of the face, for eyes and nose on their own and even for faces with no eyes and nose visible, achieving around 90 percent correct identification.

Individual facial parts, such as the nose, cheek, forehead or mouth had low recognition rates in both experiments.
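
The matching step in experiments like these can be sketched independently of the network itself: a model embeds each face crop as a feature vector, and a probe image is labeled with the identity of its nearest gallery vector. The sketch below is not the authors' code; in the real study the embedding came from a VGG-style convolutional network, while here the embedding vectors and names are invented stand-ins so the nearest-neighbor logic is runnable.

```python
# Toy nearest-neighbor face identification over embedding vectors.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(probe, gallery):
    """Return the identity whose gallery embedding is closest to the probe."""
    return max(gallery, key=lambda name: cosine(probe, gallery[name]))

# Hypothetical embeddings: full faces in the gallery, a half-face
# probe whose vector is degraded but still closest to its owner's.
gallery = {"ana": [0.9, 0.1, 0.4], "bruno": [0.1, 0.8, 0.3]}
half_face_probe = [0.7, 0.2, 0.5]
print(identify(half_face_probe, gallery))  # → ana
```

Training on partial images, as in the second experiment, amounts to making the embedding function map a half face close to the same region of feature space as the full face, so this matching step still succeeds.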

THE FBI WANTED A BACKDOOR TO THE IPHONE. TIM COOK SAID NO

In 2016, Tim Cook fought the law—and won.

Late in the afternoon of Tuesday, February 16, 2016, Cook and several lieutenants gathered in the “junior boardroom” on the executive floor at One Infinite Loop, Apple’s old headquarters. The company had just received a writ from a US magistrate ordering it to make specialized software that would allow the FBI to unlock an iPhone used by Syed Farook, a suspect in the San Bernardino shooting in December 2015 that left 14 people dead.

The iPhone was locked with a four-digit passcode that the FBI had been unable to crack. The FBI wanted Apple to create a special version of iOS that would accept an unlimited number of passcode attempts entered electronically, until the right one was found. The new iOS could be side-loaded onto the iPhone, leaving the data intact.
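
The arithmetic behind that request is simple: a four-digit passcode has only 10,000 possibilities, so once retry limits and delays are removed, exhaustive electronic guessing is trivial. The guess rate below is a hypothetical figure for illustration, not a measured iPhone rate.

```python
# Why removing retry limits defeats a 4-digit passcode (illustrative).
keyspace = 10 ** 4                 # passcodes 0000 through 9999
guesses_per_second = 12.5          # assumed electronic guess rate
worst_case_seconds = keyspace / guesses_per_second

print(keyspace)                    # → 10000
print(worst_case_seconds / 60)     # → 13.33... minutes, worst case
```

This is why the dispute centered on the software guardrails rather than the passcode itself: the keyspace alone offers essentially no protection against an unthrottled search.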

But Apple had refused. Cook and his team were convinced that a new unlocked version of iOS would be very, very dangerous. It could be misused, leaked, or stolen, and once in the wild, it could never be retrieved. It could potentially undermine the security of hundreds of millions of Apple users.

In the boardroom, Cook and his team went through the writ line by line. They needed to decide what Apple’s legal position was going to be and figure out how long they had to respond. It was a stressful, high-stakes meeting. Apple was given no warning about the writ, even though Cook, Apple’s top lawyer, Bruce Sewell, and others had been actively speaking about the case to law enforcement for weeks.

The writ “was not a simple request for assistance in a criminal case,” explained Sewell. “It was a forty-two-page pleading by the government that started out with this litany of the horrible things that had been done in San Bernardino. And then this . . . somewhat biased litany of all the times that Apple had said no to what were portrayed as very reasonable requests. So this was what, in the law, we call a speaking complaint. It was meant to from day one tell a story . . . that would get the public against Apple.”

The team came to the conclusion that the judge’s order was a PR move—a very public arm twisting to pressure Apple into complying with the FBI’s demands—and that it could be serious trouble for the company. Apple “is a famous, incredibly powerful consumer brand and we are going to be standing up against the FBI and saying in effect, ‘No, we’re not going to give you the thing that you’re looking for to try to deal with this terrorist threat,’” said Sewell.

They knew that they had to respond immediately. The writ would dominate the next day’s news, and Apple had to have a response. “Tim knew that this was a massive decision on his part,” Sewell said. It was a big moment, “a bet-the-company kind of decision.” Cook and the team stayed up all night—a straight 16 hours—working on their response. Cook already knew his position—Apple would refuse—but he wanted to know all the angles: What was Apple’s legal position? What was its legal obligation? Was this the right response? How should it sound? How should it read? What was the right tone?

Tracking Phones, Google Is a Dragnet for the Police

When detectives in a Phoenix suburb arrested a warehouse worker in a murder investigation last December, they credited a new technique with breaking open the case after other leads went cold.

The police told the suspect, Jorge Molina, they had data tracking his phone to the site where a man was shot nine months earlier. They had made the discovery after obtaining a search warrant that required Google to provide information on all devices it recorded near the killing, potentially capturing the whereabouts of anyone in the area.

Investigators also had other circumstantial evidence, including security video of someone firing a gun from a white Honda Civic, the same model that Mr. Molina owned, though they could not see the license plate or attacker.

But after he spent nearly a week in jail, the case against Mr. Molina fell apart as investigators learned new information and released him. Last month, the police arrested another man: his mother’s ex-boyfriend, who had sometimes used Mr. Molina’s car.

The warrants, which draw on an enormous Google database employees call Sensorvault, turn the business of tracking cellphone users’ locations into a digital dragnet for law enforcement.

The Arizona case demonstrates the promise and perils of the new investigative technique, whose use has risen sharply in the past six months, according to Google employees familiar with the requests. It can help solve crimes. But it can also snare innocent people.

Technology companies have for years responded to court orders for specific users’ information. The new warrants go further, suggesting possible suspects and witnesses in the absence of other clues. Often, Google employees said, the company responds to a single warrant with location information on dozens or hundreds of devices.

Law enforcement officials described the method as exciting, but cautioned that it was just one tool.

Evaluating the Use of Automated Facial Recognition Technology in Major Policing Operations

Academics at Cardiff University have conducted the first independent academic evaluation of Automated Facial Recognition (AFR) technology across a variety of major policing operations.

The project by the Universities’ Police Science Institute evaluated South Wales Police’s deployment of Automated Facial Recognition across several major sporting and entertainment events in Cardiff city over more than a year, including the UEFA Champions League Final and the Autumn Rugby Internationals.

The study found that while AFR can enable police to identify persons of interest and suspects where they would probably not otherwise have been able to do so, considerable investment and changes to police operating procedures are required to generate consistent results.

Researchers employed a number of research methods to develop a rich picture and systematically evaluate the use of AFR by police across multiple operational settings. This is important as previous research on the use of AFR technologies has tended to be conducted in controlled conditions. Using it on the streets and to support ongoing criminal investigations introduces a range of factors impacting the effectiveness of AFR in supporting police work.

The technology works in two modes: Locate is the live, real-time application that scans faces within CCTV feeds in an area. It searches for possible matches against a pre-selected database of facial images of individuals deemed to be persons of interest by the police.

Identify, on the other hand, takes still images of unidentified persons (usually captured via CCTV or mobile phone camera) and compares these against the police custody database in an effort to generate investigative leads. Evidence from the research found that in 68 percent of submissions made by police officers in the Identify mode, the image was not of sufficient quality for the system to work.

Over the period of the evaluation, however, the accuracy of the technology improved significantly and police got better at using it. The Locate system was able to correctly identify a person of interest around 76 percent of the time. A total of 18 arrests were made in live Locate deployments during the evaluation, and more than 100 people were charged following investigative searches during the first 8-9 months of the AFR Identify operation (end of July 2017 to March 2018).

The report suggests that it is more helpful to think of AFR in policing as ‘Assisted Facial Recognition’ rather than a fully ‘Automated Facial Recognition’ system. ‘Automated’ implies that the identification process is conducted solely by an algorithm, when in fact, the system serves as a decision-support tool to assist human operators in making identifications. Ultimately, decisions about whether a person of interest and an image match are made by police operators. It is also deployed in uncontrolled environments, and so is impacted by external factors including lighting, weather and crowd flows.

“There is increasing public and political awareness of the pressures that the police are under to try and prevent and solve crime. Technologies such as Automated Facial Recognition are being proposed as having an important role to play in these efforts. What we have tried to do with this research is provide an evidence-based and balanced account of the benefits, costs and challenges associated with integrating AFR into day-to-day policing,” says Professor Martin Innes, director, Crime and Security Research Institute and Director, Universities’ Police Science Institute.

An $80 Million Cyber Crime in 1999 Foreshadowed Modern Threats

Two decades ago, computer viruses—and public awareness of the tricks used to unleash them—were still relatively new notions to many Americans.

One attack would change that in a significant way.

In late March 1999, a programmer named David Lee Smith hijacked an America Online (AOL) account and used it to post a file on an Internet newsgroup named “alt.sex.” The posting promised dozens of free passwords to fee-based websites with adult content. When users took the bait, downloading the document and then opening it with Microsoft Word, a virus was unleashed on their computers.

On March 26, it began spreading like wildfire across the Internet.

The Melissa virus, reportedly named by Smith for a stripper in Florida, started by taking over victims’ Microsoft Word program. It then used a macro to hijack their Microsoft Outlook email system and send messages to the first 50 addresses in their mailing lists. Those messages, in turn, tempted recipients to open a virus-laden attachment by giving it such names as “sexxxy.jpg” or “naked wife” or by deceitfully asserting, “Here is the document you requested … don’t show anyone else ;-) .” With the help of some devious social engineering, the virus operated like a sinister, automated chain letter.
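
The "automated chain letter" comparison can be made concrete with a little arithmetic: if each infected mailbox mails the first 50 addresses in its list, the potential reach grows by a factor of 50 per hop. The sketch below computes that upper bound only; it ignores overlap between address books, which is what slowed real-world spread.

```python
# Upper bound on Melissa-style chain-letter fan-out (illustrative).
FANOUT = 50  # each infection mails the first 50 address-book entries

def potential_reach(hops: int) -> int:
    """Maximum mailboxes reached after a given number of hops."""
    return sum(FANOUT ** h for h in range(1, hops + 1))

print(potential_reach(1))  # → 50
print(potential_reach(3))  # → 127550 (50 + 2,500 + 125,000)
```

Even three hops put the theoretical reach above 100,000 mailboxes, which is consistent with the roughly one million disrupted accounts the article reports after days of spread.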

The virus was not intended to steal money or information, but it wreaked plenty of havoc nonetheless. Email servers at more than 300 corporations and government agencies worldwide became overloaded, and some had to be shut down entirely, including at Microsoft. Approximately one million email accounts were disrupted, and Internet traffic in some locations slowed to a crawl.

Within a few days, cybersecurity experts had mostly contained the spread of the virus and restored the functionality of their networks, although it took some time to remove the infections entirely. Along with its investigative role, the FBI sent out warnings about the virus and its effects, helping to alert the public and reduce the destructive impacts of the attack. Still, the collective damage was enormous: an estimated $80 million for the cleanup and repair of affected computer systems.

Finding the culprit didn’t take long, thanks to a tip from a representative of AOL and nearly seamless cooperation between the FBI, New Jersey law enforcement, and other partners. Authorities traced the electronic fingerprints of the virus to Smith, who was arrested in northeastern New Jersey on April 1, 1999. Smith pleaded guilty in December 1999, and in May 2002, he was sentenced to 20 months in federal prison and fined $5,000. He also agreed to cooperate with federal and state authorities.

The Melissa virus, considered the fastest-spreading infection at the time, was a rude awakening to the dark side of the web for many Americans. Awareness of the danger of opening unsolicited email attachments began to grow, along with the reality of online viruses and the damage they can do.
