Archive for 'Technology'

Facial recognition technology works even when only half a face is visible, researchers from the University of Bradford have found.

Using artificial intelligence techniques, the team achieved 100 percent recognition rates for both three-quarter and half faces. The study, published in Future Generation Computer Systems, is the first to use machine learning to test the recognition rates for different parts of the face.

Lead researcher Professor Hassan Ugail, from the University of Bradford, said: “The ability humans have to recognise faces is amazing, but research has shown it starts to falter when we can only see parts of a face. Computers can already perform better than humans in recognising one face from a large number, so we wanted to see if they would be better at partial facial recognition as well.”

The team used a machine learning technique known as a “convolutional neural network,” drawing on a feature extraction model called VGG—one of the most popular and widely used for facial recognition.

They worked with a dataset containing multiple photos—2,800 in total—of 200 students and staff from FEI University in Brazil, with equal numbers of men and women.

For the first experiment, the team trained the model using only full facial images. They then ran an experiment to see how well the computer was able to recognize faces, even when shown only part of them. The computer recognized full faces 100 percent of the time, but the team also had 100 percent success with three-quarter faces and with the top or right half of the face. However, the bottom half of the face was only correctly recognized 60 percent of the time, and the eyes and nose on their own just 40 percent.
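To make the setup concrete, below is a minimal sketch of the kind of pipeline the study describes: a pretrained VGG network used as a feature extractor, with full and partial face crops compared against a gallery of known identities. It is not the authors’ code; it substitutes torchvision’s generic VGG16 for the VGG face model used in the paper, and the crop boxes and nearest-neighbour matching are illustrative assumptions.

```python
# Illustrative sketch only: generic pretrained VGG16 stands in for the VGG face
# model used in the study; crops and matching are assumptions for demonstration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Keep VGG16 up to the penultimate fully connected layer as a 4096-d feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(img: Image.Image) -> torch.Tensor:
    """Return a 4096-d feature vector for a face crop."""
    with torch.no_grad():
        return vgg(preprocess(img).unsqueeze(0)).squeeze(0)

def crop_region(img: Image.Image, region: str) -> Image.Image:
    """Cut out partial-face regions like those tested in the study."""
    w, h = img.size
    boxes = {
        "full": (0, 0, w, h),
        "three_quarter": (0, 0, w, int(h * 0.75)),
        "top_half": (0, 0, w, h // 2),
        "bottom_half": (0, h // 2, w, h),
        "right_half": (w // 2, 0, w, h),
    }
    return img.crop(boxes[region])

def identify(probe: Image.Image, gallery: dict[str, torch.Tensor]) -> str:
    """Match a (possibly partial) probe face against gallery embeddings by cosine similarity."""
    e = embed(probe)
    scores = {name: torch.nn.functional.cosine_similarity(e, g, dim=0).item()
              for name, g in gallery.items()}
    return max(scores, key=scores.get)
```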

They then ran the experiment again, after training the model using partial facial images as well. This time, the scores significantly improved for the bottom half of the face, for eyes and nose on their own and even for faces with no eyes and nose visible, achieving around 90 percent correct identification.

Individual facial parts, such as the nose, cheek, forehead or mouth had low recognition rates in both experiments.

Read More

In 2016, Tim Cook fought the law—and won.

Late in the afternoon of Tuesday, February 16, 2016, Cook and several lieutenants gathered in the “junior boardroom” on the executive floor at One Infinite Loop, Apple’s old headquarters. The company had just received a writ from a US magistrate ordering it to make specialized software that would allow the FBI to unlock an iPhone used by Syed Farook, a suspect in the San Bernardino shooting in December 2015 that left 14 people dead.

The iPhone was locked with a four-digit passcode that the FBI had been unable to crack. The FBI wanted Apple to create a special version of iOS that would accept an unlimited number of passcode attempts electronically, until the right one was found. The new iOS could be side-loaded onto the iPhone, leaving the data intact.
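The leverage the FBI was asking for comes down to simple arithmetic: a four-digit passcode has only 10,000 possible values, so once the escalating delays and auto-erase protections are out of the way, an electronic brute-force search finishes almost instantly. The sketch below illustrates that, with a hypothetical check_passcode callable standing in for the device’s verification routine.

```python
from itertools import product

def brute_force_4_digit(check_passcode):
    """Enumerate all 10,000 four-digit passcodes.

    check_passcode is a hypothetical callable standing in for the device's
    verification routine; without attempt limits or escalating delays this
    entire search space is exhausted almost instantly.
    """
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)
        if check_passcode(guess):
            return guess
    return None

# Example with a stand-in verifier:
print(brute_force_4_digit(lambda p: p == "4951"))  # -> "4951"
```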

But Apple had refused. Cook and his team were convinced that a new unlocked version of iOS would be very, very dangerous. It could be misused, leaked, or stolen, and once in the wild, it could never be retrieved. It could potentially undermine the security of hundreds of millions of Apple users.

In the boardroom, Cook and his team went through the writ line by line. They needed to decide what Apple’s legal position was going to be and figure out how long they had to respond. It was a stressful, high-stakes meeting. Apple was given no warning about the writ, even though Cook, Apple’s top lawyer, Bruce Sewell, and others had been actively speaking about the case to law enforcement for weeks.

The writ “was not a simple request for assistance in a criminal case,” explained Sewell. “It was a forty-two-page pleading by the government that started out with this litany of the horrible things that had been done in San Bernardino. And then this . . . somewhat biased litany of all the times that Apple had said no to what were portrayed as very reasonable requests. So this was what, in the law, we call a speaking complaint. It was meant to from day one tell a story . . . that would get the public against Apple.”

The team came to the conclusion that the judge’s order was a PR move—a very public arm twisting to pressure Apple into complying with the FBI’s demands—and that it could be serious trouble for the company. Apple “is a famous, incredibly powerful consumer brand and we are going to be standing up against the FBI and saying in effect, ‘No, we’re not going to give you the thing that you’re looking for to try to deal with this terrorist threat,’” said Sewell.

They knew that they had to respond immediately. The writ would dominate the next day’s news, and Apple had to have a response. “Tim knew that this was a massive decision on his part,” Sewell said. It was a big moment, “a bet-the-company kind of decision.” Cook and the team stayed up all night—a straight 16 hours—working on their response. Cook already knew his position—Apple would refuse—but he wanted to know all the angles: What was Apple’s legal position? What was its legal obligation? Was this the right response? How should it sound? How should it read? What was the right tone?

Read More

Two decades ago, computer viruses—and public awareness of the tricks used to unleash them—were still relatively new notions to many Americans.

One attack would change that in a significant way.

In late March 1999, a programmer named David Lee Smith hijacked an America Online (AOL) account and used it to post a file on an Internet newsgroup named “alt.sex.” The posting promised dozens of free passwords to fee-based websites with adult content. When users took the bait, downloading the document and then opening it with Microsoft Word, a virus was unleashed on their computers.

On March 26, it began spreading like wildfire across the Internet.

The Melissa virus, reportedly named by Smith for a stripper in Florida, started by taking over victims’ Microsoft Word program. It then used a macro to hijack their Microsoft Outlook email system and send messages to the first 50 addresses in their mailing lists. Those messages, in turn, tempted recipients to open a virus-laden attachment by giving it such names as “sexxxy.jpg” or “naked wife” or by deceitfully asserting, “Here is the document you requested … don’t show anyone else ;-) .” With the help of some devious social engineering, the virus operated like a sinister, automated chain letter.
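The chain-letter mechanics explain why the spread was so fast: every newly infected mailbox mails the next 50 contacts, so the infection grows geometrically. The toy model below illustrates that dynamic; the fan-out matches the 50-address behaviour described above, but the open rate and the resulting figures are purely illustrative, not reconstructions of the 1999 outbreak.

```python
# Toy model of Melissa-style chain-letter growth: every newly infected mailbox
# emails its first 50 contacts. The open rate and counts are illustrative only.
def simulate_spread(rounds: int, fanout: int = 50, open_rate: float = 0.1) -> list[int]:
    infected = 1          # the initial newsgroup download
    newly_infected = 1
    totals = []
    for _ in range(rounds):
        # each new infection mails `fanout` contacts; a fraction open the attachment
        newly_infected = int(newly_infected * fanout * open_rate)
        infected += newly_infected
        totals.append(infected)
    return totals

print(simulate_spread(5))  # -> [6, 31, 156, 781, 3906]
```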

The virus was not intended to steal money or information, but it wreaked plenty of havoc nonetheless. Email servers at more than 300 corporations and government agencies worldwide became overloaded, and some had to be shut down entirely, including at Microsoft. Approximately one million email accounts were disrupted, and Internet traffic in some locations slowed to a crawl.

Within a few days, cybersecurity experts had mostly contained the spread of the virus and restored the functionality of their networks, although it took some time to remove the infections entirely. Along with its investigative role, the FBI sent out warnings about the virus and its effects, helping to alert the public and reduce the destructive impacts of the attack. Still, the collective damage was enormous: an estimated $80 million for the cleanup and repair of affected computer systems.

Finding the culprit didn’t take long, thanks to a tip from a representative of AOL and nearly seamless cooperation between the FBI, New Jersey law enforcement, and other partners. Authorities traced the electronic fingerprints of the virus to Smith, who was arrested in northeastern New Jersey on April 1, 1999. Smith pleaded guilty in December 1999, and in May 2002, he was sentenced to 20 months in federal prison and fined $5,000. He also agreed to cooperate with federal and state authorities.

The Melissa virus, considered the fastest-spreading infection at the time, was a rude awakening to the dark side of the web for many Americans. Awareness of the danger of opening unsolicited email attachments began to grow, along with the reality of online viruses and the damage they can do.

Read More

Academics at Cardiff University have conducted the first independent academic evaluation of Automated Facial Recognition (AFR) technology across a variety of major policing operations.

The project by the Universities’ Police Science Institute evaluated South Wales Police’s deployment of Automated Facial Recognition across several major sporting and entertainment events in Cardiff over more than a year, including the UEFA Champions League Final and the Autumn Rugby Internationals.

The study found that while AFR can enable police to identify persons of interest and suspects where they would probably not otherwise have been able to do so, considerable investment and changes to police operating procedures are required to generate consistent results.

Researchers employed a number of research methods to develop a rich picture and systematically evaluate the use of AFR by police across multiple operational settings. This is important as previous research on the use of AFR technologies has tended to be conducted in controlled conditions. Using it on the streets and to support ongoing criminal investigations introduces a range of factors impacting the effectiveness of AFR in supporting police work.

The technology works in two modes: Locate is the live, real-time application that scans faces within CCTV feeds in an area. It searches for possible matches against a pre-selected database of facial images of individuals deemed to be persons of interest by the police.

Identify, on the other hand, takes still images of unidentified persons (usually captured via CCTV or mobile phone camera) and compares these against the police custody database in an effort to generate investigative leads. Evidence from the research found that in 68 percent of submissions made by police officers in the Identify mode, the image was not of sufficient quality for the system to work.
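A rough sketch of the workflow both modes imply is shown below: a probe image is screened for quality, compared against a database of facial embeddings, and any candidates above a threshold are ranked for a human operator to review rather than treated as automatic identifications. The quality gate, thresholds and cosine-similarity matching are assumptions for illustration, not details of the South Wales Police system.

```python
# Illustrative decision-support sketch; the embedding model, quality check and
# thresholds are assumptions, not details of the deployed system.
from dataclasses import dataclass

@dataclass
class Candidate:
    identity: str
    score: float

def identify(probe_embedding, database, quality,
             min_quality=0.5, match_threshold=0.8):
    """Return ranked candidate matches for a human operator to review."""
    if quality < min_quality:
        # Mirrors the finding that many Identify submissions were of too poor
        # quality for the system to work.
        return []

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    candidates = [Candidate(name, cosine(probe_embedding, emb))
                  for name, emb in database.items()]
    candidates = [c for c in candidates if c.score >= match_threshold]
    # The final decision on whether a candidate actually matches rests with the operator.
    return sorted(candidates, key=lambda c: c.score, reverse=True)
```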

Over the period of the evaluation, however, the accuracy of the technology improved significantly and police got better at using it. The Locate system was able to correctly identify a person of interest around 76 percent of the time. A total of 18 arrests were made in live Locate deployments during the evaluation, and in excess of 100 people were charged following investigative searches during the first 8-9 months of the AFR Identify operation (end of July 2017 to March 2018).

The report suggests that it is more helpful to think of AFR in policing as ‘Assisted Facial Recognition’ rather than a fully ‘Automated Facial Recognition’ system. ‘Automated’ implies that the identification process is conducted solely by an algorithm when, in fact, the system serves as a decision-support tool to assist human operators in making identifications. Ultimately, decisions about whether a person of interest and an image match are made by police operators. It is also deployed in uncontrolled environments, and so is impacted by external factors including lighting, weather and crowd flows.

Read More

Social media is increasingly being exploited to contact, recruit and sell children for sex, according to a study by The University of Toledo Human Trafficking and Social Justice Institute.

The study, which was requested by the Ohio Attorney General’s Human Trafficking Commission, reveals how traffickers quickly target and connect with vulnerable children on the Internet through social media.

“It is vitally important to educate parents, professionals and youth – especially our middle school or teenage daughters who may be insecure – about the dangers of online predatory practices used by master manipulators,” said Dr. Celia Williamson, UT professor of social work and director of the UT Human Trafficking and Social Justice Institute. “Through this outreach and education, we can help save children from becoming victims of modern-day slavery.”

“We know predators are using the internet to find their victims, and this eye-opening study highlights what a predator looks for in a victim and helps parents recognize the signs that their child may be a target,” Ohio Attorney General Mike DeWine said. “Using real-life examples, this study provides valuable information that parents can use to start open and honest conversations with their children about staying safe online.”

Through a series of 16 in-depth interviews by the institute’s staff and student interns with knowledgeable members of Ohio law enforcement, judges, direct service providers, advocates and researchers who engaged with victims who were trafficked online, the study outlines how traffickers connect to vulnerable youth online, groom the children to form quicker relationships, avoid detection, and move the connections from online to in-person.

“The transition from messaging to meeting a trafficker in person is becoming less prevalent,” Williamson said. “As technology is playing a larger role in trafficking, this allows some traffickers to be able to exploit youth without meeting face-to-face. Social media helps to mask traditional cues that alert individuals to a potentially dangerous person.”

Williamson cites a 2018 report that says while 58 percent of victims eventually meet their traffickers face to face, 42 percent who initially met their trafficker online never met their trafficker in person and were still trafficked.

The experts, whose identities are not being released, said the traffickers educate themselves by studying what the victim posts on commonly used view-and-comment sites such as Facebook, Instagram or Snapchat, as well as dating apps such as Tinder, Blendr and Yellow, or webcam sites like Chatroulette and Monkey, in order to build trust.

“These guys, they learn about the girls and pretend to understand them, and so these girls, who are feeling not understood and not loved and not beautiful … these guys are very good at sort of pretending that they are all of these things and they really understand them and, ‘I know how you feel, you are beautiful,’ and just filling the hole that these girls are feeling,” said a professional contributing to the study.

Read More

A California judge has ruled that American cops can’t force people to unlock a mobile phone with their face or finger. The ruling goes further than any before it in protecting people’s private lives from government searches and is being hailed as a potentially landmark decision.

Previously, U.S. judges had ruled that police were allowed to force unlock devices like Apple’s iPhone with biometrics, such as fingerprints, faces or irises. That was despite the fact feds weren’t permitted to force a suspect to divulge a passcode. But according to a ruling uncovered by Forbes, all logins are equal.

The order came from the U.S. District Court for the Northern District of California in the denial of a search warrant for an unspecified property in Oakland. The warrant was filed as part of an investigation into a Facebook extortion crime, in which a victim was asked to pay up or have an “embarrassing” video of them publicly released. The cops had some suspects in mind and wanted to raid their property. In doing so, the feds also wanted to open up any phone on the premises via facial recognition, a fingerprint or an iris.

While the judge agreed that investigators had shown probable cause to search the property, they didn’t have the right to open all devices inside by forcing unlocks with biometric features.

On the one hand, magistrate judge Kandis Westmore ruled the request was “overbroad” as it was “neither limited to a particular person nor a particular device.”

But in a more significant part of the ruling, Judge Westmore declared that the government did not have the right, even with a warrant, to force suspects to incriminate themselves by unlocking their devices with their biological features. Previously, courts had decided biometric features, unlike passcodes, were not “testimonial.” That was because a suspect would have to willingly and verbally give up a passcode, which is not the case with biometrics. A password was therefore deemed testimony, but body parts were not, and so not granted Fifth Amendment protections against self-incrimination.

Read More

Two Chinese men have been charged in a massive, years-long hacking campaign that stole personal and proprietary information from companies around the world, the FBI and the Justice Department announced at a press conference today in Washington, D.C.

The men, Zhu Hua and Zhang Shilong, are part of a group known as Advanced Persistent Threat 10, or APT 10, a hacking group associated with the Chinese government. A New York grand jury indicted the pair for conspiracy to commit computer intrusion, conspiracy to commit wire fraud, and aggravated identity theft. The indictment was unsealed today.

According to the indictment, from around 2006 to 2018, APT 10 conducted extensive hacking campaigns, stealing information from more than 45 victim organizations, including American companies. Hundreds of gigabytes of sensitive data were secretly taken from companies in a diverse range of industries, such as health care, biotechnology, finance, manufacturing, and oil and gas.

FBI Director Christopher Wray described the list of companies, not named in the indictment, as a “Who’s Who” of the global economy. Even government agencies like NASA and the Department of Energy were among the victims. The hack is part of China’s ongoing efforts to steal intellectual property from other countries.

“Healthy competition is good for the global economy. Criminal conduct is not. Rampant theft is not. Cheating is not,” Wray said at the press conference.

APT 10 used “spear phishing” techniques to introduce malware onto targeted computers. The hackers sent emails that appeared to be from legitimate addresses but contained attachments that installed a program to secretly record all keystrokes on the machine, including user names and passwords. The group also targeted managed service providers (MSPs), companies that remotely manage their clients’ servers and networks. MSP hacks allowed APT 10 members to indirectly gain access to confidential data of numerous companies who were the clients of the MSPs.

Read More

Menlo Park, California, Aug 26, 2017: Facebook turns off more than 1 million accounts a day as it struggles to keep spam, fraud and hate speech off its platform, its chief security officer says.

Still, the sheer number of interactions among its 2 billion global users means it can’t catch all “threat actors,” and it sometimes removes text posts and videos that it later finds didn’t break Facebook rules, says Alex Stamos.

“When you’re dealing with millions and millions of interactions, you can’t create these rules and enforce them without (getting some) false positives,” Stamos said during an onstage discussion at an event in San Francisco on Wednesday evening.
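Stamos’s argument is essentially about base rates: at the scale of billions of interactions, even a tiny error rate produces a large absolute number of wrongly removed posts. The back-of-the-envelope sketch below uses invented rates purely to illustrate the arithmetic; none of the figures are Facebook’s.

```python
# Base-rate arithmetic behind the false-positive point; all rates are illustrative.
daily_actions = 1_000_000_000      # interactions screened per day (assumed)
violation_rate = 0.001             # fraction that actually break the rules (assumed)
false_positive_rate = 0.0001       # benign actions wrongly flagged (assumed)
detection_rate = 0.95              # fraction of real violations caught (assumed)

true_positives = daily_actions * violation_rate * detection_rate
false_positives = daily_actions * (1 - violation_rate) * false_positive_rate

print(f"Correct removals per day: {true_positives:,.0f}")
print(f"Wrongly removed posts per day: {false_positives:,.0f}")
# Even a 0.01% error rate on roughly a billion benign interactions means about
# 100,000 legitimate posts flagged every day.
```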

Stamos blames the pure technical challenges in enforcing the company’s rules — rather than the rules themselves — for the threatening and unsafe behavior that sometimes finds its way on to the site.

Facebook has faced critics who say its rules for removing content are too arbitrary and make it difficult to know what types of activity it will and won’t allow.

Political leaders in Europe this year have accused it of being too lax in allowing terrorists to use Facebook to recruit and plan attacks, while a U.S. Senate committee last year demanded to know its policies for removing fake news stories, after accusations it was arbitrarily removing posts by political conservatives.

Free speech advocates have also criticized its work.

“The work of (Facebook) take-down teams is not transparent,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, which advocates for free speech online.

“The rules are not enforced across the board. They reflect biases,” says Galperin, who shared the stage with Stamos at a public event that was part of Enigma Interviews, a series of cybersecurity discussions sponsored by the Advanced Computing Systems Association, better known as USENIX.

Stamos pushed back during the discussion, saying “it’s not just a bunch of white guys” who make decisions about what posts to remove.

Read More

Los Angeles, CA, Aug 15, 2018: Los Angeles’s transit agency said Tuesday that it would become the first in the nation to screen its passengers with body scanners as they enter the public transit system — a bold effort to keep riders safer from terrorism and other evolving threats.

But officials said that riders need not worry that their morning commute would turn into the sort of security nightmare often found at airports or even sporting events. In a statement released Tuesday, transit officials said the portable screening devices they plan to deploy later this year will “quickly and unobtrusively” screen riders without forcing them to line up or stop walking.

“We’re looking specifically for weapons that have the ability to cause a mass casualty event,” Alex Wiggins, the chief security and law enforcement officer for the Los Angeles County Metropolitan Transportation Authority, said Tuesday, according to The Associated Press. “We’re looking for explosive vests, we’re looking for assault rifles. We’re not necessarily looking for smaller weapons that don’t have the ability to inflict mass casualties.”

The devices themselves resemble the sort of black laminate cases that musicians lug around on tour — not upright metal detectors. Dave Sotero, a spokesman for Metro, said the machines, which are on wheels, can detect suspicious items from 30 feet away and can scan more than 2,000 passengers per hour. The units can be pointed in the direction of riders as they come down an escalator or into a station.

“Most people won’t even know they’re being scanned, so there’s no risk of them missing their train service on a daily basis,” he said.

Mr. Sotero said the agency had purchased several of the units for about $100,000 each, but he would not specify exactly how many. He said that the authorities still needed to be trained on how to use the technology.

The county’s metro system has one of the largest riderships in the country, with 93 rail stations alone — and it is set to expand. Mr. Sotero said the new scanning units would mostly be deployed at random stations, but would certainly be used at major transit hubs and in places where large crowds are expected for marches, races and other events.

“There won’t be a deployment pattern that will be predictable,” he said. “They will go where they’re needed.”

Read More

At least two Calgary malls are using facial recognition technology to track shoppers’ ages and genders without first notifying them or obtaining their explicit consent.

A visitor to Chinook Centre in south Calgary spotted a browser window that had seemingly accidentally been left open on one of the mall’s directories, exposing facial-recognition software that was running in the background of the digital map. They took a photo and posted it to the social networking site Reddit on Tuesday.

The mall’s parent company, Cadillac Fairview, said the software, which they began using in June, counts people who use the directory and predicts their approximate age and gender, but does not record or store any photos or video from the directory cameras.

Cadillac Fairview said the software is also used at Market Mall in northwest Calgary, and other malls nationwide.

“We don’t require consent, because we’re not capturing or retaining images,” a Cadillac Fairview spokesperson said.

The software could, for example, say approximately how many men in their 60s used the directory, but not store images of those men’s faces or collect any other biometric data, the spokesperson said.

Instead, they said the data is used in aggregate to understand directory usage patterns to “create a better shopper experience.”
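The aggregate-only design the company describes can be pictured as a counter that is updated once per directory session while the camera frame itself is immediately discarded. The sketch below is a hypothetical illustration of that pattern; the estimate_age_gender callable stands in for whatever vision model the vendor actually uses, which has not been made public.

```python
# Sketch of aggregate-only analytics as described by the mall operator: each
# frame is reduced to a coarse demographic count and then discarded.
from collections import Counter

usage_counts = Counter()

def record_directory_user(frame, estimate_age_gender) -> None:
    """estimate_age_gender is a hypothetical callable returning e.g. ("60s", "male")."""
    age_bracket, gender = estimate_age_gender(frame)
    usage_counts[(age_bracket, gender)] += 1   # only the aggregate tally is kept
    # the frame itself is discarded here; no image or biometric template is stored

def report() -> None:
    # Answers questions like "how many men in their 60s used the directory"
    # without retaining any photos.
    for (age_bracket, gender), count in sorted(usage_counts.items()):
        print(f"{gender}, {age_bracket}: {count}")
```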

The use of facial recognition software in retail spaces is becoming commonplace to analyze shopper behaviour, sell targeted space to advertisers, or for security reasons like identifying shoplifters.

Read More