Computer hackers pretending to be from a giant tech company are calling consumers and gaining access to their bank accounts. One hacker swindled nearly $25,000 from a local couple.
“They’re so savvy that they can get into your computers and figure out passwords just by the click of the keys,” said Nancy Isdale.
Isdale and her husband George say they thought they were getting money from Microsoft until they were swindled out of $24,600. The hacker, who told the couple his name was Sean, made it seem like he was a tech support expert and that he was refunding the couple $400 on behalf of Microsoft, but instead he was fooling them into giving him remote access to their computer.
“Once they get into the computer you can see the mouse going around so they are into your computer,” explained George.
The couple said the scammer then gained access to their money by offering to help them set up online access for all of their bank accounts, compromising the accounts in the process.
“So that’s what they did, they took the money out of my savings, [and] put it in his checking account,” said Nancy.
Without them knowing, “Sean” took $25,000 from Nancy’s savings account and transferred it to George’s checking. Then the scammer said he mistakenly gave George a $25,000 Microsoft credit instead of that $400 credit, and that George needed to send $24,600 back.
“He was like crazy, he was like ‘oh my god this isn’t your money this is Microsoft’s money you need to get to the bank right away and wire transfer this’,” said Nancy.
What they ended up doing was sending their own hard-earned money to that scammer in Bangkok, Thailand.
“You know I was nervous, I didn’t want to be responsible for $25,000 to Microsoft so, you know, we went to the bank,” explained Nancy.
Just when they thought it was the end of it, the thief called them back a few days later demanding even more money.
“He wanted us to send $40,000 to Bangkok, Thailand, again,” explained Nancy.
It’s 3 a.m. Do you know what your iPhone is doing?
Mine has been alarmingly busy. Even though the screen is off and I’m snoring, apps are beaming out lots of information about me to companies I’ve never heard of. Your iPhone probably is doing the same — and Apple could be doing more to stop it.
On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.
And all night long, there was some startling behavior by a household name: Yelp. It was receiving a message that included my IP address once every five minutes.
Our data has a secret life in many of the devices we use every day, from talking Alexa speakers to smart TVs. But we’ve got a giant blind spot when it comes to the data companies probing our phones.
You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, “What happens on your iPhone stays on your iPhone.” My investigation suggests otherwise.
iPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s The Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.
And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic. According to privacy firm Disconnect, which helped test my iPhone, those unwanted trackers would have spewed out 1.5 gigabytes of data over the span of a month. That’s half of an entire basic wireless service plan from AT&T.
“This is your data. Why should it even leave your phone? Why should it be collected by someone when you don’t know what they’re going to do with it?” says Patrick Jackson, a former National Security Agency researcher who is chief technology officer for Disconnect. He hooked my iPhone into special software so we could examine the traffic. “I know the value of data, and I don’t want mine in any hands where it doesn’t need to be,” he told me.
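Disconnect’s actual tooling isn’t described in the article, but the telltale pattern it surfaced — a tracker phoning home at fixed intervals, like the five-minute Yelp traffic — can be sketched in a few lines. The host names and log format below are purely illustrative, not anything from the investigation itself.

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(requests, min_hits=4, max_jitter=0.1):
    """Flag hosts contacted at near-constant intervals.

    requests: list of (timestamp_seconds, host) tuples.
    A host counts as a 'beacon' if it was contacted at least min_hits
    times and the spread of its inter-request gaps is small relative
    to the mean gap (coefficient of variation below max_jitter).
    """
    by_host = defaultdict(list)
    for ts, host in requests:
        by_host[host].append(ts)

    beacons = {}
    for host, times in by_host.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) < min_hits - 1:
            continue
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg < max_jitter:
            beacons[host] = avg
    return beacons

# Synthetic log: one hypothetical host pinging every 300 seconds,
# another contacted at irregular times.
log = [(t, "api.yelp.example") for t in range(0, 3600, 300)]
log += [(50, "cdn.example"), (700, "cdn.example"), (710, "cdn.example")]

print(find_beacons(log))  # only the five-minute beacon is flagged
```

On a real capture, the input would come from a proxy or packet log; the point is that clockwork-regular traffic stands out statistically even when each individual request looks innocuous.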
When it was revealed last month that a team of Amazon workers was tasked with listening to and reviewing Echo customers’ recordings—including recordings customers never intended to make—the news sparked a flurry of criticism and concern about what this meant for the average consumer’s privacy.
At the same time, many were left unsurprised. Previous incidents, such as when an Amazon customer in Germany accidentally received someone else’s private Alexa recordings last year, have shown not only that the devices can record when least expected (such as when the user is in the shower, or having a private conversation) but also that these recordings can end up in unexpected hands.
This reality can leave users feeling that the device that helps them control their schedule, their music and even their home appliances isn’t completely within their control. In fact, the Echo can even be used against its owner—and may have the potential to send some users to prison.
As explained by Oxygen Forensics COO Lee Reiber in an interview with Forensic Magazine, when you live with an Alexa device, “it’s almost like your room is bugged.” The “almost,” of course, is that Alexa isn’t necessarily always recording, but that doesn’t mean it records only when it’s supposed to, either.
“We have a sample Alexa (…) that I utilize to do research on, and there is a lot of information on there. And I found several (recordings) that are specifically marked by Amazon as an error,” said Reiber, who has firsthand experience using Oxygen’s digital forensic tools to extract data from Echo devices. “I’m sitting there in my kitchen and I am talking to my wife, and it’s recording that information.”
Echo devices are meant to record what the user says to it after using a “wake word”—either “Echo,” “Amazon,” “computer” or the classic “Alexa,” depending on what the user prefers. The catch is that Alexa, which always has its microphone on listening for that word, has a habit of mishearing other words or sounds as its wake word, causing it to activate and record the voices or noises that follow.
I’ve noticed this with my own Echo Dot device, which sometimes lights up blue on its own, or startles me with a robotic “I’m sorry, I didn’t catch that” when I never said anything to begin with. Reiber also said those kitchen conversations with his wife were recorded without permission from a wake word, and plenty of other users have reported similar experiences with accidentally waking up their all-hearing assistant.
As Reiber explained, Amazon typically marks unintentional recordings as errors, and in forensic tools like Oxygen’s extractor they show up as discarded items, similar to files someone has deleted from a phone or computer but that are still present in the device’s memory. And like those unseen “deleted” files, which any skilled digital examiner can recover and view, the accidental recordings remain available to investigators in full, and have the potential to become valuable forensic evidence in a case.
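Oxygen’s extractor is proprietary and its data format isn’t described in the article, but the triage step Reiber describes — separating intentional recordings from error-marked, recoverable ones — can be illustrated with a hypothetical extraction manifest. Every field name below is an assumption for illustration, not Oxygen’s or Amazon’s actual schema.

```python
# Hypothetical manifest of recordings pulled from an Echo device.
records = [
    {"id": "r1", "wake_word": "alexa", "status": "ok"},
    {"id": "r2", "wake_word": None,    "status": "error"},      # false wake
    {"id": "r3", "wake_word": "echo",  "status": "ok"},
    {"id": "r4", "wake_word": None,    "status": "discarded"},  # "deleted" but recoverable
]

def partition_recordings(manifest):
    """Split a device manifest into intentional recordings and the
    deleted-but-recoverable items an examiner would review."""
    intentional = [r for r in manifest if r["status"] == "ok"]
    recoverable = [r for r in manifest if r["status"] in ("error", "discarded")]
    return intentional, recoverable

ok, hidden = partition_recordings(records)
print([r["id"] for r in hidden])  # the error/discarded items
```

The forensic point is simply that “error” and “discarded” flags are metadata, not deletion: the underlying audio is still in storage until overwritten, so a filter like this surfaces exactly the recordings the owner never knew existed.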
“Because they are already recording, any of these types of IoT (internet of things) devices can be tremendous, because again, if it’s still listening, it could record, and the quality is fantastic,” said Reiber, who also has a law enforcement background. “It’s just a great recording of the person who’s actually speaking. So, someone could say, ‘Well, it wasn’t me, it wasn’t me talking.’ Well, no, it is, it’s an exact recording of your voice.”
CARLISLE, Pa. (WHTM) - New technology allows investigators to examine fingerprints that were once considered too old or compromised to analyze.
A vacuum metal deposition instrument is now in the hands of Cumberland County to better collect fingerprints and DNA. This equipment is only the second of its kind in Pennsylvania and one of 14 in the entire country.
“Gold will deposit on the substrate and then I will run zinc, and zinc doesn’t attach anywhere else except to a different metal and then it will attach to the gold that I’ve placed,” said Carol McCandless, the lead forensic investigator for Cumberland County.
The vacuum pulls all the air and water out of a chamber; the machine then coats the evidence with a very thin metal film under high vacuum, all in less than five minutes.
“The metallic substances don’t land on the top of the ridges of anything. It goes in between so that the top of the ridge is touched and that’s where the DNA is,” said Skip Ebert, Cumberland County District Attorney.
The machine recovers fingerprints from items that were previously difficult or impossible to process, such as paper, waxy substances, and clothing.
“What we did in this machine of the actual victim’s face being suffocated and on the opposite side of the pillowcase, the actual hands that were pushing it down on his face. You cannot beat that kind of evidence anywhere,” said Ebert.
The technology helps not only current and future cases but cold cases as well.
“I have received several cold cases, one from 1983 and one from 1995,” said McCandless.
The Cumberland County Forensic Lab is expected to be fully accredited in the fall, thanks largely to this new technology, made possible through a grant from the Pennsylvania Commission on Crime and Delinquency.
In a world where an organization’s trade secrets can be compromised with a few clicks, identifying whether or not intellectual property (IP) theft took place can be a complex process for many reasons.
Since many IP theft perpetrators are internal staff, asking internal IT staff to investigate may raise issues of bias or conflicts of interest. Additionally, IT staff may not have the experience or training needed to properly preserve the evidence gathered. Relying on an experienced digital forensics firm addresses both of these complexities, given such a firm’s expertise and unbiased third-party standing.
The virtual nature of digital assets both simplifies the IP theft itself and complicates any investigation into wrongdoing. These analyses also cannot be understood within the standard criminal investigation framework. All gathered materials should be shared with a digital forensic specialist, whose task is to determine whether the materials have probative value (i.e., relevance to the case in question). Digital forensics offers a distinct way to handle potential IP theft investigations.
Preservation is a key principle in IP theft investigations, just as it is at any other crime scene: everything ideally stays as it was at the time of the crime, as security training firm the InfoSec Institute notes. Access to all affected devices should be blocked as soon as IP theft is suspected or discovered. Experienced analysts then systematically categorize and collect data to determine whether a crime occurred. Key materials can be damaged or destroyed if someone without a forensics background attempts to access the digital evidence; such intrusion essentially contaminates the evidence, which can lead to halted investigations, lost lawsuits, and failure to return the IP to its rightful owner.
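One concrete piece of the preservation discipline described above is cryptographic hashing: an examiner records a hash of each evidence image at seizure time, and any later re-hash that matches proves the copy is bit-for-bit unchanged. A minimal sketch follows; the temporary file stands in for a real disk image, and the workflow shown is a generic practice, not any particular firm’s procedure.

```python
import hashlib
import os
import tempfile

def sha256_file(path, chunk=1 << 20):
    """Hash a file in 1 MB chunks so large evidence images
    never have to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Demo with a throwaway file standing in for a seized disk image.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"contents of seized evidence")
    path = f.name

baseline = sha256_file(path)          # recorded at the time of seizure
assert sha256_file(path) == baseline  # a later re-hash proves nothing changed
os.unlink(path)
```

If even one byte of the image changes, the digests diverge, which is why a matching hash is the standard way to show a court that the analyzed copy is identical to what was seized.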
WASHINGTON (AP) — U.S. Immigration and Customs Enforcement will begin voluntary DNA testing in cases where officials suspect that adults are fraudulently claiming to be parents of children as they cross the U.S.-Mexico border.
The decision comes as Homeland Security officials are increasingly concerned about instances of child trafficking as a growing number of Central American families cross the border, straining resources to the breaking point. Border authorities also recently started to increase the biometric data they take from children 13 and younger, including fingerprints, despite privacy concerns and government policy that restricts what can be collected.
Officials with the Department of Homeland Security wouldn’t say where the DNA pilot program would begin, but they did say it would start as early as next week and would be very limited.
The DNA testing will happen once officials with U.S. Customs and Border Protection, which manages border enforcement, refer possible instances of fraud to ICE, which usually manages interior enforcement. But teams from ICE’s Homeland Security Investigations were recently deployed to the southern border to help investigate human smuggling efforts.
The rapid DNA test will take about two hours; samples will be collected via cheek swab from both the adult and the child. The parent is to swab the child, officials said.
The test results will be destroyed and won’t be used as evidence in any criminal case, they said.
Government officials generally define a family unit as a parent with a child. Fraud occurs when a person claiming to be a parent is actually another type of relative, or has no relationship to the child at all. ICE officials said they have identified 101 possible instances of fraudulent families since April 18 and determined one-third were fraudulent.
Since the beginning of the budget year, they say they have uncovered more than 1,000 cases and referred 45 cases for prosecution. The fraud could also include the use of false birth certificates or documents, and adults accused of fraud aren’t necessarily prosecuted for it; some are prosecuted for illegal entry or other crimes.
Homeland Security officials have also warned of “child recycling,” cases where they say children allowed into the U.S. were smuggled back into Central America to be paired up again with other adults in fake families — something they say is impossible to catch without fingerprints or other biometric data.
Latent fingerprints, left behind in trace sweat and oils in unique ridge patterns, provided the first great forensic human identifier about a century ago.
One problem, however: fingermarks can dehydrate over long periods of time, which makes cold cases a challenge.
But a team at the Sûreté du Québec police force in Canada has put together a methodology involving fuming, dyes and lasers which produced a clear fingerprint on a challenging plastic bag surface from a double-homicide scene from the 1980s, as they report in the journal Forensic Science International.
“The current case presents a uniqueness due to the age of the revealed fingermark, and the paired success of cyanoacrylate fuming,” writes the Canadian team. “It would thus be of great interest of future cold case analysis using this technique to identify the factors having made this revelation possible.”
The plastic bag was found at the crime scene, and had been preserved in a paper evidence bag for decades.
Because the plastic bag is non-porous, it further fostered the dehydration process over the roughly 30 years it sat untreated in storage. Its lack of texture, however, “eased the deposition process.”
The Sûreté forensic experts decided to try fuming with superglue: namely E-Z Bond Instant Glue (Thin), with cyanoacrylate.
The bag was hung in a sealed cabinet, and a small amount of the glue was poured into an aluminum cup placed near a heater set to 80 degrees Celsius. A 1,500 mL beaker of near-boiling water sat at the bottom of the cabinet, with a tube bubbling air through the water.
For 12 minutes, the process continually attached cyanoacrylate vapors to the residues on the bag within the enclosed space.
Then came the staining.
A solution of rhodamine 6G in methanol was mixed with a magnetic stirrer until the dye completely dissolved, creating a bright orange mixture.
Under a fume hood, the solution was sprayed over the superglue patterns, and the excess was flushed away with pure methanol.
The treatment process thus rehydrated the marks left from the fingerprint decades before, and locked them in permanent patterns, the experts write.
The evidence was then left to dry in the fume hood, according to the paper.
Once dry, an Arrowhead 532 nm laser was used to examine the patterns. Through orange-stained goggles, pictures were taken with a Nikon D7000 camera fitted with an orange barrier filter.
A good fingerprint was thus produced for the first time from the two homicides.
No suspect has yet matched the fingerprint from the double-murder scene, the team reports. However, the technique could help crack this and other cold cases in the near future.
“It also reiterates the importance [of] opening cold cases in order to treat and reassess their exhibits,” they write. “Despite the age of a fingermark, cyanoacrylate combined with rhodamine 6G and visualized with a laser can provide new evidence … This opens the possibility of making an identification and ultimately change the course of the investigation.”
Facial recognition technology works even when only half a face is visible, researchers from the University of Bradford have found.
Using artificial intelligence techniques, the team achieved 100 percent recognition rates for both three-quarter and half faces. The study, published in Future Generation Computer Systems, is the first to use machine learning to test the recognition rates for different parts of the face.
Lead researcher, Professor Hassan Ugail from the University of Bradford said: “The ability humans have to recognise faces is amazing, but research has shown it starts to falter when we can only see parts of a face. Computers can already perform better than humans in recognising one face from a large number, so we wanted to see if they would be better at partial facial recognition as well.”
The team used a machine learning technique known as a “convolutional neural network,” drawing on a feature extraction model called VGG—one of the most popular and widely used for facial recognition.
They worked with a dataset containing multiple photos—2,800 in total—of 200 students and staff from FEI University in Brazil, with equal numbers of men and women.
For the first experiment, the team trained the model using only full facial images. They then ran an experiment to see how well the computer could recognize faces even when shown only part of them. The computer recognized full faces 100 percent of the time, and the team also had 100 percent success with three-quarter faces and with the top or right half of the face. However, the bottom half of the face was correctly recognized only 60 percent of the time, and the eyes and nose on their own just 40 percent.
They then ran the experiment again, after training the model using partial facial images as well. This time, the scores significantly improved for the bottom half of the face, for eyes and nose on their own and even for faces with no eyes and nose visible, achieving around 90 percent correct identification.
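The study’s actual pipeline uses VGG features on real face images from FEI, neither of which is reproduced here. The sketch below substitutes tiny synthetic images and a fixed random projection for the feature extractor, purely to illustrate the experimental protocol: build a gallery from full-face embeddings, then probe with masked regions and score nearest-neighbor matches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "dataset": 5 identities, each a 32x32 array of pixel values.
# In the study these would be the FEI photographs.
faces = {name: rng.random((32, 32)) for name in "ABCDE"}

# Stand-in for the VGG feature extractor: a fixed random projection
# followed by L2 normalization, so dot product = cosine similarity.
W = rng.standard_normal((32 * 32, 64))
def embed(img):
    v = img.flatten() @ W
    return v / np.linalg.norm(v)

def crop(img, part):
    """Zero out the unseen region, mimicking a partial-face probe."""
    h, w = img.shape
    if part == "top":
        return np.vstack([img[: h // 2], np.zeros((h - h // 2, w))])
    if part == "bottom":
        return np.vstack([np.zeros((h // 2, w)), img[h // 2 :]])
    return img  # "full"

# Gallery built from full faces only, as in the first experiment.
gallery = {name: embed(img) for name, img in faces.items()}

def identify(probe_img):
    e = embed(probe_img)
    return max(gallery, key=lambda n: gallery[n] @ e)  # cosine nearest neighbor

# Probe with each region and report per-region accuracy.
for part in ("full", "top", "bottom"):
    hits = sum(identify(crop(img, part)) == name for name, img in faces.items())
    print(part, hits / len(faces))
```

The second experiment corresponds to rebuilding the gallery with partial crops included as well; with a real learned extractor, that is what lifted the bottom-half and eyes-and-nose scores to around 90 percent.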
Individual facial parts, such as the nose, cheek, forehead or mouth had low recognition rates in both experiments.