Split-Second ‘Phantom’ Images Can Fool Tesla’s Autopilot

BGU researchers find it’s easy to get personal data from Zoom screenshots

Spies Can Eavesdrop by Watching a Light Bulb’s Vibrations

New Malware Jumps Air-Gapped Devices by Turning Power-Supplies into Speakers

Exfiltrating Data from Air-Gapped Computers Using Screen Brightness

How a $300 projector can fool Tesla’s Autopilot

Actuator Power Signatures Defend Against Additive Manufacturing Sabotage Attacks

Did you really ‘like’ that? How Chameleon attacks spring in Facebook, Twitter, LinkedIn

Singapore undergrads cut their teeth in Israel – land of start-ups and tech

New Cyberattack Warning For Millions Of Home Internet Routers: Report

Researchers found they could stop a Tesla by flashing a few frames of a stop sign for less than half a second on an internet-connected billboard.

Safety concerns over automated driver-assistance systems like Tesla’s usually focus on what the car can’t see, like the white side of a truck that one Tesla confused with a bright sky in 2016, leading to the death of a driver. But one group of researchers has been focused on what autonomous driving systems might see that a human driver doesn’t—including “phantom” objects and signs that aren’t really there, which could wreak havoc on the road.

Researchers at Israel’s Ben Gurion University of the Negev have spent the last two years experimenting with those “phantom” images to trick semi-autonomous driving systems. They previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.

“The attacker just shines an image of something on the road or injects a few frames into a digital billboard, and the car will apply the brakes or possibly swerve, and that’s dangerous,” says Yisroel Mirsky, a researcher for Ben Gurion University and Georgia Tech who worked on the research, which will be presented next month at the ACM Computer and Communications Security conference. “The driver won’t even notice at all. So somebody’s car will just react, and they won’t understand why.”

In their first round of research, published earlier this year, the team projected images of human figures onto a road, as well as road signs onto trees and other surfaces. They found that at night, when the projections were visible, they could fool both a Tesla Model X running the HW2.5 Autopilot driver-assistance system—the most recent version available at the time, now the second-most-recent—and a Mobileye 630 device. They managed to make a Tesla stop for a phantom pedestrian that appeared for a fraction of a second, and tricked the Mobileye device into communicating the incorrect speed limit to the driver with a projected road sign.

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video.

The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions. And while they tested their technique on a TV-sized billboard screen on a small road, they say it could easily be adapted to a digital highway billboard, where it could cause much more widespread mayhem.
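
To make the pixel-selection idea concrete, here is a minimal, hypothetical sketch in Python of how one might rank blocks of a billboard frame by a crude measure of visual interest (edge energy) and pick the flattest block as the least noticeable place to embed an overlay. It illustrates the general concept only, not the researchers' actual algorithm, and it assumes OpenCV and NumPy are available along with a local frame image.

    import cv2
    import numpy as np

    def least_interesting_block(frame_bgr, block=64):
        # Score each block by mean edge energy; smooth, low-texture regions
        # are assumed to draw the least attention from a human viewer.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        edges = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
        h, w = gray.shape
        best = None
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                score = float(edges[y:y + block, x:x + block].mean())
                if best is None or score < best[0]:
                    best = (score, x, y)
        return best  # (score, x, y) of the least eye-catching block

    if __name__ == "__main__":
        frame = cv2.imread("billboard_frame.png")  # hypothetical input frame
        print(least_interesting_block(frame))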

The Ben Gurion researchers are far from the first to demonstrate methods of spoofing inputs to a Tesla’s sensors. As early as 2016, one team of Chinese researchers demonstrated they could spoof and even hide objects from Tesla’s sensors using radio, sonic, and light-emitting equipment. More recently, another Chinese team found they could exploit Tesla’s lane-follow technology to trick a Tesla into changing lanes just by planting cheap stickers on a road.

But the Ben Gurion researchers point out that unlike those earlier methods, their projections and hacked billboard tricks don’t leave behind physical evidence. Breaking into a billboard in particular can be performed remotely, as plenty of hackers have previously demonstrated. The team speculates that the phantom attacks could be carried out as an extortion technique, as an act of terrorism, or for pure mischief. “Previous methods leave forensic evidence and require complicated preparation,” says Ben Gurion researcher Ben Nassi. “Phantom attacks can be done purely remotely, and they do not require any special expertise.”

Neither Mobileye nor Tesla responded to WIRED’s request for comment. But in an email to the researchers themselves last week, Tesla made a familiar argument that its Autopilot feature isn’t meant to be a fully autonomous driving system. “Autopilot is a driver assistance feature that is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time,” reads Tesla’s response. The Ben Gurion researchers counter that Autopilot is used very differently in practice. “As we know, people use this feature as an autopilot and do not keep 100 percent attention on the road while using it,” writes Mirsky in an email. “Therefore, we must try to mitigate this threat to keep people safe, regardless of [Tesla’s] warnings.”

Tesla does have a point, though not one that offers much consolation to its own drivers. Tesla’s Autopilot system depends largely on cameras and, to a lesser extent, radar, while more truly autonomous vehicles like those developed by Waymo, Uber, or GM-owned autonomous vehicle startup Cruise also integrate laser-based lidar, points out Charlie Miller, the lead autonomous vehicle security architect at Cruise. “Lidar would not have been susceptible to this type of attack,” says Miller. “You can change an image on a billboard and lidar doesn’t care, it’s measuring distance and velocity information. So these attacks wouldn’t have worked on most of the truly autonomous cars out there.”

The Ben Gurion researchers didn’t test their attacks against those other, more multi-sensor setups. But they did demonstrate ways to detect the phantoms they created even on a camera-based platform. They developed a system they call “Ghostbusters” that’s designed to take into account a collection of factors like depth, light, and the context around a perceived traffic sign, then weigh all those factors before deciding whether a road sign image is real. “It’s like a committee of experts getting together and deciding based on very different perspectives what this image is, whether it’s real or fake, and then making a collective decision,” says Mirsky. The result, the researchers say, could far more reliably defeat their phantom attacks, without perceptibly slowing down a camera-based autonomous driving system’s reactions.
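
As a rough sketch of that committee idea (and only a sketch: the actual Ghostbusters components are trained neural networks, and the aspect names, weights, and threshold below are assumptions for illustration), the decision logic amounts to a weighted combination of per-aspect scores:

    from dataclasses import dataclass

    @dataclass
    class ExpertScores:
        context: float  # does the surrounding scene plausibly contain a sign?
        depth: float    # does the detected sign sit on a consistent 3D surface?
        light: float    # is its brightness consistent with the scene lighting?

    def committee_verdict(scores, weights=(0.4, 0.3, 0.3), threshold=0.5):
        """Return True if the detection is judged to be a real road sign."""
        combined = (weights[0] * scores.context +
                    weights[1] * scores.depth +
                    weights[2] * scores.light)
        return combined >= threshold

    # A sign flashed on a billboard scores low on context and depth, so the
    # committee rejects it even though its lighting looks plausible.
    print(committee_verdict(ExpertScores(context=0.2, depth=0.1, light=0.7)))  # False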

Ben Gurion’s Nassi concedes that the Ghostbuster system isn’t perfect, and he argues that their phantom research shows the inherent difficulty in making autonomous driving decisions even with multiple sensors like a Tesla’s combined radar and camera. Tesla, he says, has taken a “better safe than sorry” approach that trusts the camera alone if it shows an obstacle or road sign ahead, leaving it vulnerable to their phantom attacks. But an alternative might disregard hazards if one or more of a vehicle’s sensors misses them. “If you implement a system that ignores phantoms if they’re not validated by other sensors, you will probably have some accidents,” says Nassi. “Mitigating phantoms comes with a price.”

Cruise’s Charlie Miller, who previously worked on autonomous vehicle security at Uber and Chinese self-driving car firm Didi Chuxing, counters that truly autonomous, lidar-enabled vehicles have in fact managed to solve that problem. “Attacks against sensor systems are interesting, but this isn’t a serious attack against the systems I’m familiar with,” such as Uber and Cruise vehicles, Miller says. But he still sees value in Ben Gurion’s work. “It’s something we need to think about and work on and plan for. These cars rely on their sensor inputs, and we need to make sure they’re trusted.”

Source: Wired.com

Scientists warn that users should not post screen images of their video conference sessions on social media

Photo: Members of the city commission to prevent the spread of coronavirus disease (COVID-19) vote during a meeting via Zoom video link in Lviv, Ukraine, March 26, 2020. (Photo credit: Reuters/Roman Baluk)

Personal data can easily be extracted from Zoom and other video conference applications, researchers from Ben Gurion University of the Negev announced today.

The Israeli researchers examined Zoom, Microsoft Teams and Google Meet and warned that users should not post screen images of their video conference sessions on social media as it was easy to identify people from these shots.

“The findings in our paper indicate that it is relatively easy to collect thousands of publicly available images of video conference meetings and extract personal information about the participants, including their face images, age, gender, and full names,” said Dr. Michael Fire, BGU Department of Software and Information Systems Engineering (SISE).

He added that “This type of extracted data can vastly and easily jeopardize people’s security and privacy, affecting adults as well as young children and the elderly.”

The researchers found that it is possible to extract private information from collage images of meeting participants posted on Instagram and Twitter. They used image processing and text recognition tools, as well as social network analysis, to explore a dataset of more than 15,700 collage images and more than 142,000 face images of meeting participants.

Artificial intelligence-based image-processing algorithms helped identify the same individual’s participation at different meetings by simply using either face recognition or other extracted user features like the image background.

The researchers were able to spot faces 80% of the time as well as detect gender and estimate age. Free web-based text recognition libraries allowed the BGU researchers to correctly determine nearly two-thirds of usernames from screenshots.
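
The paper does not publish its tooling, but the kind of pipeline described above can be sketched with common open-source libraries (an assumption on my part, not the team's actual stack): detect faces in a screenshot, read the on-screen name labels with OCR, and match face encodings across screenshots to flag repeat participants. The file names are hypothetical.

    import face_recognition  # pip install face_recognition
    import pytesseract       # pip install pytesseract (requires the tesseract binary)
    from PIL import Image

    def extract(screenshot_path):
        # One encoding vector per detected face, plus any readable name tiles.
        image = face_recognition.load_image_file(screenshot_path)
        encodings = face_recognition.face_encodings(image)
        text = pytesseract.image_to_string(Image.open(screenshot_path))
        usernames = [line.strip() for line in text.splitlines() if line.strip()]
        return encodings, usernames

    def same_person(enc_a, enc_b, tolerance=0.6):
        return bool(face_recognition.compare_faces([enc_a], enc_b, tolerance=tolerance)[0])

    faces_mon, names_mon = extract("meeting_monday.png")   # hypothetical screenshots
    faces_fri, names_fri = extract("meeting_friday.png")
    repeats = sum(same_person(a, b) for a in faces_mon for b in faces_fri)
    print(f"Cross-meeting face matches: {repeats}; name tiles read on Monday: {names_mon}")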

The researchers identified 1,153 people who likely appeared in more than one meeting, as well as networks of Zoom users in which all the participants were coworkers.

“This proves that the privacy and security of individuals and companies are at risk from data exposed on video conference meetings,” according to the research team, which also includes BGU SISE researchers Dima Kagan and Dr. Galit Fuhrmann Alpert.

Additionally, the researchers offered a number of recommendations on how to prevent privacy and security attacks. These include not posting video conference images online or sharing videos; using generic pseudonyms like “iZoom” or “iPhone” rather than a unique username or real name; and using a virtual background rather than a real one, since a real background can help fingerprint a user account across several meetings.

Additionally, the team advised video conferencing operators to augment their platforms with a privacy mode, such as filters that add Gaussian noise to an image, which can disrupt facial recognition while keeping the face recognizable.
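
A minimal sketch of that suggested mitigation, assuming that light zero-mean Gaussian noise over a participant tile is enough to degrade automated matching while a person remains recognizable (the sigma value and file names are illustrative, not taken from the paper):

    import numpy as np
    from PIL import Image

    def add_gaussian_noise(image_path, out_path, sigma=18.0):
        # Add zero-mean Gaussian noise to every pixel, then clip back to the 8-bit range.
        img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
        noisy = img + np.random.normal(0.0, sigma, img.shape)
        Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).save(out_path)

    add_gaussian_noise("participant_tile.png", "participant_tile_noised.png")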

“Since organizations are relying on video conferencing to enable their employees to work from home and conduct meetings, they need to better educate and monitor a new set of security and privacy threats,” Fire added. “Parents and children of the elderly also need to be vigilant, as video conferencing is no different than other online activity.”

Source: Jerusalem Post

The so-called lamphone technique allows for real-time listening in on a room that’s hundreds of feet away. 

The list of sophisticated eavesdropping techniques has grown steadily over the years: wiretaps, hacked phones, bugs in the wall—even bouncing lasers off of a building’s glass to pick up conversations inside. Now add another tool for audio spies: any light bulb in a room that might be visible from a window.

Researchers from Israel’s Ben-Gurion University of the Negev and the Weizmann Institute of Science today revealed a new technique for long-distance eavesdropping they call “lamphone.” They say it allows anyone with a laptop and less than a thousand dollars of equipment—just a telescope and a $400 electro-optical sensor—to listen in on any sounds in a room that’s hundreds of feet away in real time, simply by observing the minuscule vibrations those sounds create on the glass surface of a light bulb inside. By measuring the tiny changes in light output from the bulb that those vibrations cause, the researchers show that a spy can pick up sound clearly enough to discern the contents of conversations or even recognize a piece of music.

“Any sound in the room can be recovered from the room with no requirement to hack anything and no device in the room,” says Ben Nassi, a security researcher at Ben-Gurion who developed the technique with fellow researchers Yaron Pirutin and Boris Zadov, and who plans to present their findings at the Black Hat security conference in August. “You just need line of sight to a hanging bulb, and this is it.”

In their experiments, the researchers placed a series of telescopes around 80 feet away from a target office’s light bulb, and put each telescope’s eyepiece in front of a Thorlabs PDA100A2 electro-optical sensor. They then used an analog-to-digital converter to convert the electrical signals from that sensor to digital information. While they played music and speech recordings in the faraway room, they fed the information picked up by their set-up to a laptop, which analyzed the readings.

Photo: The researchers’ experimental setup, with an electro-optical sensor behind the eyepiece of a telescope, pointing at a lightbulb inside an office building more than 80 feet away. (Courtesy of Ben Nassi)

The researchers found that the tiny vibrations of the light bulb in response to sound—movements that they measured at as little as a few hundred microns—registered as measurable changes in the light their sensor picked up through each telescope. After processing the signal through software to filter out noise, they were able to reconstruct recordings of the sounds inside the room with remarkable fidelity: They showed, for instance, that they could reproduce an audible snippet of a speech from President Donald Trump well enough for it to be transcribed by Google’s Cloud Speech API. They also generated a recording of the Beatles’ “Let It Be” clear enough that the name-that-tune app Shazam could instantly recognize it.
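
The team's actual processing chain is not published in the article, but the recovery step can be illustrated with a simple sketch: treat the digitised optical signal as a one-dimensional sample stream, band-pass filter it to the speech band, normalise it, and write it out as audio. The sample rate, band edges, and synthetic input are assumptions for illustration.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt
    from scipy.io import wavfile

    def recover_audio(optical_samples, sample_rate=44100, band=(300.0, 3400.0)):
        # Remove the DC level, keep only the speech band, normalise, and save.
        sos = butter(4, band, btype="bandpass", fs=sample_rate, output="sos")
        filtered = sosfiltfilt(sos, optical_samples - np.mean(optical_samples))
        filtered /= (np.max(np.abs(filtered)) + 1e-12)
        wavfile.write("recovered.wav", sample_rate, (filtered * 32767).astype(np.int16))

    # Stand-in for the electro-optical sensor output (5 seconds of samples).
    recover_audio(np.random.randn(5 * 44100))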

The technique nonetheless has some limitations. In their tests, the researchers used a hanging bulb, and it’s not clear if a bulb mounted in a fixed lamp or a ceiling fixture would vibrate enough to derive the same sort of audio signal. The voice and music recordings they used in their demonstrations were also louder than the average human conversation, with speakers turned to their maximum volume. But the team points out that they also used a relatively cheap electro-optical sensor and analog-to-digital converter, and could have upgraded to a more expensive one to pick up quieter conversations. LED bulbs also offer a signal-to-noise ratio that’s about 6.3 times that of an incandescent bulb and 70 times a fluorescent one.

Regardless of those caveats, Stanford computer scientist and cryptographer Dan Boneh argues that the researchers’ technique still represents a significant and potentially practical new form of what he calls a “side channel” attack—one that takes advantage of unintended leakage of information to steal secrets. “It’s a beautiful application of side channels,” Boneh says. “Even if this requires a hanging bulb and high decibels, it’s still super interesting. And it’s still just the first time this has been shown to be possible. Attacks only get better, and future research will only improve this over time.”

The research team, which was advised by BGU’s Yuval Elovici and Adi Shamir, the coinventor of the ubiquitous RSA encryption system, isn’t the first to show that unexpected sonic phenomena can enable eavesdropping. Researchers have known for years that a laser bounced off a target’s window can allow spies to pick up the sounds inside. Another group of researchers showed in 2014 that the gyroscope of a compromised smartphone can pick up sounds even if the malware can’t access its microphone. The closest previous technique to lamphone is what MIT, Microsoft, and Adobe researchers in 2014 called a “visual microphone”: By analyzing video recorded via telescope of an object in a room that picks up vibrations—a bag of potato chips or a houseplant, for instance—those researchers were able to reconstruct speech and music.

But Nassi points out that the video-based technique, while far more versatile since it doesn’t require a bulb to be visible in the room, requires analysis of the video with software after it’s recorded to convert the subtle vibrations observed in an object into the sounds it picked up. Lamphone, by contrast, enables real-time spying. Since the vibrating object is itself a light source, the electro-optical sensor can pick up vibrations in far simpler visual data.

That could make lamphone significantly more practical for use in espionage than previous techniques, Nassi argues. “When you actually use it in real time you can respond in real time rather than losing the opportunity,” he says.

Still, Nassi says the researchers are publishing their findings not to enable spies or law enforcement, but to make clear to those on both sides of surveillance what’s possible. “We want to raise the awareness of this kind of attack vector,” he says. “We’re not in the game of providing tools.”

As unlikely as being targeted by this technique is, it’s also easy to forestall. Just cover any hanging bulbs, or better yet, close the curtains. And if you’re paranoid enough to be concerned about this sort of spy game, hopefully you’ve already used anti-vibration devices on those windows to prevent eavesdropping with a laser microphone. And swept your house for bugs. And removed the microphones from your phone and computer. After all, in an era when even the light bulbs have ears, a paranoiac’s work is never done.

Source: WIRED

Cybersecurity researcher Mordechai Guri from Israel’s Ben Gurion University of the Negev recently demonstrated a new kind of malware that could be used to covertly steal highly sensitive data from air-gapped and audio-gapped systems using a novel acoustic quirk in power supply units that come with modern computing devices.

Dubbed ‘POWER-SUPPLaY,’ the latest research builds on a series of techniques leveraging electromagnetic, acoustic, thermal, optical covert channels, and even power cables to exfiltrate data from non-networked computers.

“Our developed malware can exploit the computer power supply unit (PSU) to play sounds and use it as an out-of-band, secondary speaker with limited capabilities,” Dr. Guri outlined in a paper published today and shared with The Hacker News.

“The malicious code manipulates the internal switching frequency of the power supply and hence controls the sound waveforms generated from its capacitors and transformers.”

“We show that our technique works with various types of systems: PC workstations and servers, as well as embedded systems and IoT devices that have no audio hardware. Binary data can be modulated and transmitted out via the acoustic signals.”

Using Power Supply as an Out-of-Band Speaker

Air-gapped systems are considered a necessity in environments where sensitive data is involved in an attempt to reduce the risk of data leakage. The devices typically have their audio hardware disabled so as to prevent adversaries from leveraging the built-in speakers and microphones to pilfer information via sonic and ultrasonic waves.

It also necessitates that both the transmitting and receiving machines be located in close physical proximity to one another and that they are infected with the appropriate malware to establish the communication link, such as through social engineering campaigns that exploit the target device’s vulnerabilities.

POWER-SUPPLaY functions in the same way in that the malware running on a PC can take advantage of its PSU and use it as an out-of-band speaker, thus obviating the need for specialized audio hardware.

“This technique enables playing audio streams from a computer even when audio hardware is disabled, and speakers are not present,” the researcher said. “Binary data can be modulated and transmitted out via the acoustic signals. The acoustic signals can then be intercepted by a nearby receiver (e.g., a smartphone), which demodulates and decodes the data and sends it to the attacker via the Internet.”

Put differently, the air-gap malware regulates the workload of the CPU to control its power consumption and hence the switching frequency of the PSU, emitting an acoustic signal in the 0-24 kHz range and modulating binary data over it.
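
A conceptual sketch of that idea follows: switch the CPU between a busy loop and sleep at an audio-rate frequency, and key bits onto two different tones (a simple frequency-shift scheme). The tone frequencies, bit duration, and single-threaded loop are illustrative assumptions, not the paper's implementation, and timing jitter in an interpreted language would make the real signal far noisier.

    import time

    F_ZERO, F_ONE = 4000.0, 5000.0  # assumed tone frequencies for bits 0 and 1 (Hz)
    BIT_TIME = 0.5                  # assumed seconds per transmitted bit

    def emit_tone(freq_hz, duration_s):
        # Alternate busy and idle half-cycles so the CPU load (and hence the
        # PSU switching behaviour) oscillates at roughly the target frequency.
        half_period = 1.0 / (2.0 * freq_hz)
        end = time.perf_counter() + duration_s
        while time.perf_counter() < end:
            t0 = time.perf_counter()
            while time.perf_counter() - t0 < half_period:
                pass                      # busy half-cycle: high load
            time.sleep(half_period)       # idle half-cycle: low load

    def transmit(bits):
        for b in bits:
            emit_tone(F_ONE if b == "1" else F_ZERO, BIT_TIME)

    transmit("10110010")  # example payload byte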

Air-Gap Bypass and Cross-Device Tracking

The malware in the compromised computer, then, not only amasses sensitive data (files, URLs, keystrokes, encryption keys, etc.) but also transmits it via the acoustic sound waves emitted from the computer’s power supply; the signal, recorded in WAV format, is decoded by the receiver — in this case, an app running on an Android smartphone.

According to the researcher, an attacker can exfiltrate data from audio-gapped systems to a nearby phone located 2.5 meters away at a maximum bit rate of 50 bit/sec.
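
A matching receiver sketch for the hypothetical two-tone scheme above (again an illustration, not the Android app used in the research): record audio near the machine, slice it into bit-length windows, and pick whichever tone carries more spectral energy in each window.

    import numpy as np
    from scipy.io import wavfile

    F_ZERO, F_ONE, BIT_TIME = 4000.0, 5000.0, 0.5  # must match the sender's assumptions

    def tone_energy(window, fs, freq):
        spectrum = np.abs(np.fft.rfft(window))
        return spectrum[int(round(freq * len(window) / fs))]

    def demodulate(wav_path):
        fs, samples = wavfile.read(wav_path)
        samples = samples.astype(np.float64)
        if samples.ndim > 1:
            samples = samples[:, 0]                # keep a single channel
        step = int(fs * BIT_TIME)
        bits = ""
        for start in range(0, len(samples) - step + 1, step):
            window = samples[start:start + step]
            bits += "1" if tone_energy(window, fs, F_ONE) > tone_energy(window, fs, F_ZERO) else "0"
        return bits

    print(demodulate("psu_recording.wav"))         # hypothetical recording file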

One privacy-breaking consequence of this attack is cross-device tracking, as this technique enables the malware to capture browsing history on the compromised system and broadcast the information to the receiver.

As a countermeasure, the researcher suggests zoning sensitive systems in restricted areas where mobile phones and other electronic equipment are banned. Having an intrusion detection system to monitor suspicious CPU behavior and setting up hardware-based signal detectors and jammers could also help defend against the proposed covert channel.

With air-gapped nuclear facilities in Iran and India the target of security breaches, the new research is yet another reminder that complex supply chain attacks can be directed against isolated systems.

“The POWER-SUPPLaY code can operate from an ordinary user-mode process and doesn’t need hardware access or root-privileges,” the researcher concluded. “This proposed method doesn’t invoke special system calls or access hardware resources, and hence is highly evasive.”

Source: The Hacker News

It may sound creepy and unreal, but hackers can also exfiltrate sensitive data from your computer by simply changing the brightness of the screen, new cybersecurity research shared with The Hacker News revealed.

In recent years, several cybersecurity researchers demonstrated innovative ways to covertly exfiltrate data from a physically isolated air-gapped computer that can’t connect wirelessly or physically with other computers or network devices.

These clever ideas rely on exploiting little-noticed emissions of a computer’s components, such as light, sound, heat, radio frequencies, or ultrasonic waves, and even the current fluctuations in the power lines.

For instance, potential attackers could sabotage supply chains to infect an air-gapped computer, but they can’t always count on an insider to unknowingly carry a USB with the data back out of a targeted facility.

When it comes to high-value targets, these unusual techniques, which may sound theoretical and useless to many, could play an important role in exfiltrating sensitive data from an infected but air-gapped computer.

How Does the Brightness Air-Gapped Attack Work?

In his latest research with fellow academics, Mordechai Guri, the head of the cybersecurity research center at Israel’s Ben Gurion University, devised a new covert optical channel using which attackers can steal data from air-gapped computers without requiring network connectivity or physically contacting the devices.

“This covert channel is invisible, and it works even while the user is working on the computer. Malware on a compromised computer can obtain sensitive data (e.g., files, images, encryption keys, and passwords), and modulate it within the screen brightness, invisible to users,” the researchers said.

The fundamental idea behind encoding and decoding the data is similar to the previous cases: the malware encodes the collected information as a stream of bytes and then modulates it as a ‘1’ and ‘0’ signal.

In this case, the attacker uses small changes in the LCD screen brightness, which remain invisible to the naked eye, to covertly modulate binary information in Morse-code-like patterns.

“In LCD screens each pixel presents a combination of RGB colors which produce the required compound color. In the proposed modulation, the RGB color component of each pixel is slightly changed.”

“These changes are invisible, since they are relatively small and occur fast, up to the screen refresh rate. Moreover, the overall color change of the image on the screen is invisible to the user.”

The attacker, on the other hand, can collect this data stream using video recording of the compromised computer’s display, taken by a local surveillance camera, smartphone camera, or a webcam and can then reconstruct exfiltrated information using image processing techniques.

As shown in the video demonstration shared with The Hacker News, researchers infected an air-gapped computer with specialized malware that intercepts the screen buffer to modulate the data using ASK (amplitude-shift keying), modifying the brightness of the bitmap according to the current bit (‘1’ or ‘0’).
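
A minimal sketch of amplitude-shift keying on frame brightness, with an illustrative offset and a toy camera-side decoder (the delta, frame size, and threshold are assumptions, not values from the BRIGHTNESS paper):

    import numpy as np

    DELTA = 3  # assumed per-pixel brightness offset, small enough to be unnoticeable

    def modulate_frame(frame_rgb, bit):
        # Bit '1': nudge every RGB value up by DELTA; bit '0': leave the frame alone.
        if bit == "1":
            return np.clip(frame_rgb.astype(np.int16) + DELTA, 0, 255).astype(np.uint8)
        return frame_rgb

    def demodulate_frames(frames, reference_level):
        # A camera watching the screen averages each captured frame and
        # thresholds it against the unmodulated baseline brightness.
        return "".join("1" if f.mean() > reference_level + DELTA / 2 else "0" for f in frames)

    base = np.full((480, 640, 3), 120, dtype=np.uint8)             # stand-in screen frame
    frames = [modulate_frame(base, b) for b in "1011001"]
    print(demodulate_frames(frames, reference_level=base.mean()))  # prints 1011001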

You can find detailed technical information on this research in the paper [PDF] titled, ‘BRIGHTNESS: Leaking Sensitive Data from Air-Gapped Workstations via Screen Brightness,’ published yesterday by Mordechai Guri, Dima Bykhovsky and Yuval Elovici.

Popular Data Exfiltration Techniques for Air-Gapped Computers

It’s not the first time Ben-Gurion researchers have come up with a covert technique to target air-gapped computers. Their previous research on hacking air-gapped machines includes:

  • PowerHammer attack to exfiltrate data from air-gapped computers through power lines.
  • MOSQUITO technique with which two (or more) air-gapped PCs placed in the same room can covertly exchange data via ultrasonic waves.
  • BeatCoin technique that could let attackers steal private encryption keys from air-gapped cryptocurrency wallets.
  • aIR-Jumper attack that takes sensitive information from air-gapped computers with the help of infrared-equipped CCTV cameras that are used for night vision.
  • MAGNETO and ODINI techniques that use CPU-generated magnetic fields as a covert channel between air-gapped systems and nearby smartphones.
  • USBee attack that can be used to steal data from air-gapped computers using radio frequency transmissions from USB connectors.
  • DiskFiltration attack that can steal data using sound signals emitted from the hard disk drive (HDD) of the targeted air-gapped computer.
  • BitWhisper that relies on heat exchange between two computer systems to stealthily siphon passwords or security keys.
  • AirHopper that turns a computer’s video card into an FM transmitter to capture keystrokes.
  • Fansmitter technique that uses noise emitted by a computer fan to transmit data.
  • GSMem attack that relies on cellular frequencies.

Source: The Hacker News

Semi-autonomous driving systems don’t understand projected images.

Six months ago, Ben Nassi, a PhD student at Ben-Gurion University advised by Professor Yuval Elovici, carried off a set of successful spoofing attacks against a Mobileye 630 Pro Driver Assist System using inexpensive drones and battery-powered projectors. Since then, he has expanded the technique to experiment—also successfully—with confusing a Tesla Model X and will be presenting his findings at the Cybertech Israel conference in Tel Aviv.

The spoofing attacks largely rely on the difference between human and AI image recognition. For the most part, the images Nassi and his team projected to troll the Tesla would not fool a typical human driver—in fact, some of the spoofing attacks were nearly steganographic, relying on the differences in perception not only to make spoofing attempts successful but also to hide them from human observers.

Nassi created a video outlining what he sees as the danger of these spoofing attacks, which he called “Phantom of the ADAS,” and a small website offering the video, an abstract outlining his work, and the full reference paper itself. We don’t necessarily agree with the spin Nassi puts on his work—for the most part, it looks to us like the Tesla responds pretty reasonably and well to these deliberate attempts to confuse its sensors. We do think this kind of work is important, however, as it demonstrates the need for defensive design of semi-autonomous driving systems.

Nassi and his team’s spoofing of the Model X was carried out with a human assistant holding a projector, due to drone laws in the country where the experiments were carried out. But the spoof could have also been carried out by drone, as his earlier spoofing attacks on a Mobileye driver-assistance system were.

From a security perspective, the interesting angle here is that the attacker never has to be at the scene of the attack and doesn’t need to leave any evidence behind—and the attacker doesn’t need much technical expertise. A teenager with a $400 drone and a battery-powered projector could reasonably pull this off with no more know-how than “hey, it’d be hilarious to troll cars down at the highway, right?” The equipment doesn’t need to be expensive or fancy—Nassi’s team used several $200-$300 projectors successfully, one of which was rated for only 854×480 resolution and 100 lumens.

Of course, nobody should be letting a Tesla drive itself unsupervised in the first place—Autopilot is a Level 2 Driver Assistance System, not the controller for a fully autonomous vehicle. Although Tesla did not respond to requests for comment on the record, the company’s press kit describes Autopilot very clearly (emphasis ours):

Autopilot is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time. While Autopilot is designed to become more capable over time, in its current form, it is not a self-driving system, it does not turn a Tesla into an autonomous vehicle, and it does not allow the driver to abdicate responsibility. When used properly, Autopilot reduces a driver’s overall workload, and the redundancy of eight external cameras, radar and 12 ultrasonic sensors provides an additional layer of safety that two eyes alone would not have.

Even the name “Autopilot” itself isn’t as inappropriate as many people assume—at least, not if one understands the reality of modern aviation and maritime autopilot systems in the first place. Wikipedia references the FAA’s Advanced Avionics Handbook when it defines autopilots as “systems that do not replace human operators, [but] instead assist them in controlling the vehicle.” On the first page of the Advanced Avionics Handbook’s chapter on automated flight control, it states: “In addition to learning how to use the autopilot, you must also learn when to use it and when not to use it.”

Within these constraints, even the worst of the responses demonstrated in Nassi’s video—that of the Model X swerving to follow fake lane markers on the road—doesn’t seem so bad. In fact, that clip demonstrates exactly what should happen: the owner of the Model X—concerned about what the heck his or her expensive car might do—hit the brakes and took control manually after Autopilot went in an unsafe direction.

The problem is, there’s good reason to believe that far too many drivers don’t believe they really need to pay attention. A 2019 survey demonstrated that nearly half of the drivers polled believed it was safe to take their hands off the wheel while Autopilot is on, and six percent even thought it was OK to take a nap. More recently, Sen. Edward Markey (D-Mass.) called for Tesla to improve the clarity of its marketing and documentation, and Democratic presidential candidate Andrew Yang went hands-free in a campaign ad—just as Elon Musk did before him, in a 2018 60 Minutes segment.

The time may have come to consider legislation about drones and projectors specifically, in much the same way laser pointers were regulated after they became popular and cheap. Some of the techniques used in the spoofing attacks carried out here could also confuse human drivers. And although human drivers are at least theoretically available, alert, and ready to take over for any confused AI system today, that won’t be the case forever. It would be a good idea to start work on regulations prohibiting spoofing of vehicle sensors before we no longer have humans backing them up.

Source: Ars Technica

Researchers have come together from the US and Israel to study potential threats that could affect additive manufacturing systems. While this does not affect the DIY user fabricating parts and prototypes from the home workshop as much, the authors of ‘Detecting Sabotage Attacks in Additive Manufacturing Using Actuator Power Signatures’ are concerned about ‘AM’s dependence on computerization’ and the need for the protection against sabotage in more complex, industrial settings.

The researchers have created a reliable way to detect sabotage, offering a simple monitoring program that they describe as non-invasive and easy to retrofit for older systems too. Undeniably, there are numerous applications that require protection as research, development and manufacturing progress—especially when they involve safety-critical systems like aerospace components, medical devices, tooling, and more.

As industrial users become further dependent on computerized methods and additive manufacturing, there is ‘growing concern that the AM process can be tampered with.’

If a scenario were to arise where an outside force gained control over AM equipment, a ‘complete sabotage attack’ could occur. This is not a new idea, exactly—and users, manufacturers, and researchers have studied other topics such as 3D printer security threats with drones as examples, new ways to verify integrity of 3D printed objects, and even the possibility of 3D printing threats to nuclear program security. So far, however, the researchers state the need for this study and development of new safety systems as ‘to date, only defensive methods for FDM machines have been verified.’

With the mindset that any computer connected to AM workflow could be ‘compromised,’ the authors explain that analog actuators like stepper motors, however, could not be tampered with via adversarial cyber activity.

Destructive testing results can also validate safety, working with the new detection system, which is composed of:

  • Induction probes
  • Oscilloscope
  • Monitoring PC for data analysis

Figure: Power traces of the X, Y, Z, and extruder motors (top to bottom).

For FDM 3D printing, the actuators consist of four motors manipulating movement and extrusion of the printhead. Each one is directly driven, with movement of motors dependent on the electrical current:

“A DC supply (if low-amplitude noise is ignored) indicates the controlled motor is stationary,” explain the researchers. “When sinusoidal current is applied to a stepper motor, each oscillation induces a magnetic field in a set of the motor’s internal coils. This magnetic field draws the geared iron rotor forward by one step. Its rotational speed is controlled by the oscillation frequency. The drive direction can be changed by reversing the current through a set of coils.”

Modifying a design also changes the currents supplying the actuators, which allows any ‘deviations’ to be monitored and detected by comparison. The detection system separates the signals from each motor, so each motor can be examined independently.

Figure: Normal and sabotaged traces for the X motor.

Detection is applied to each printed layer separately, identifying whether layers have been added, deleted, or changed; layer transitions must also be a focus, as they can be used by an adversary as an opportunity to manipulate the filament or launch other AM attacks. Ultimately, however, the researchers realized that new parameters are necessary for monitoring layer transitions because of the limited time period available:

“Under current settings, only two or three consecutive windows for each layer transition will be produced. This is insufficient to reliably distinguish between malicious modifications and stochastic anomalies.”

Figure: Power consumption signature generation.

Signature generation and verification play a central role in the new system, allowing users to confirm that parts have not been tampered with as the master signature is examined in comparison to signatures ‘generated from the current traces of the investigated process.’

Figure: Power consumption signature verification.
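
The verification step can be illustrated with a simple sketch: slice the recorded motor-current trace into windows, compare each window against the corresponding window of the master signature, and flag any window whose similarity falls below a threshold. The window length, threshold, and synthetic traces are assumptions for illustration, not the parameters used in the paper.

    import numpy as np

    def window_similarity(a, b):
        # Normalised cross-correlation at zero lag between two equal-length windows.
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def verify_trace(master, observed, window=2000, threshold=0.9):
        suspicious = []
        n = min(len(master), len(observed))
        for start in range(0, n - window + 1, window):
            sim = window_similarity(master[start:start + window],
                                    observed[start:start + window])
            if sim < threshold:
                suspicious.append((start, round(sim, 3)))
        return suspicious  # an empty list means the trace matches the signature

    # Synthetic example: an identical trace passes, a locally altered one is flagged.
    master = np.sin(np.linspace(0, 400 * np.pi, 20000))
    tampered = master.copy()
    tampered[10000:11000] = 0.0        # simulated dropped movement segment
    print(verify_trace(master, master))    # []
    print(verify_trace(master, tampered))  # flags the window starting at 10000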

Overall, this new method can detect sabotage, including ‘void-insertion attacks.’ Any changes to G-code movement commands, including the X, Y, and Z motor movements, were noticed one hundred percent of the time by the new system.

“In our future work, we plan to overcome the identified limitations, including the restriction to open-loop AM systems, and accounting for the gradual accumulation of deviations. We will also test the method against other FDM printers, and adapt it for other printing technologies, such as Powder Bed Fusion and Directed Energy Deposition,” concluded the researchers. “The demonstrated anomaly detection performance and the potential applicability to metal AM systems makes the proposed approach an important milestone to ensure AM security in safety-critical systems.”

Figure: Experimental environment.

Source: 3DPrintBoard.com

Social networks impacted seem to disagree on the scope of the attack.

Social networks are full to the brim with our photos, posts, comments, and likes — the latter of which may be abused by attackers for the purposes of incrimination. 

A new paper, titled “The Chameleon Attack: Manipulating Content Display in Online Social Media,” has been published by academics from the Ben-Gurion University of the Negev (BGU), Israel, which suggests inherent flaws in social networks could give rise to a form of “Chameleon” attack. 

The team, made up of Aviad Elyashar, Sagi Uziel, Abigail Paradise, and Rami Puzis from the Telekom Innovation Laboratories and Department of Software and Information Systems Engineering, says that weaknesses in how posting systems are used on Facebook, Twitter, and LinkedIn, as well as other social media platforms, can be exploited to tamper with user activity in a way that could make displayed content “completely different, detrimental and potentially criminal.”

According to the research, published on arXiv.org, an interesting design flaw — rather than a security vulnerability, it should be noted — means that content including posts can be edited and changed without users that may have liked or commented being made aware of any shifts. 

Content containing redirect links, too, shortened for the purposes of brand management and to account for word count restrictions, may be susceptible and changed without notice. 
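
The redirect-link variant is easy to picture with a toy example: the post embeds only a short link, the likes attach to that unchanging URL, and the attacker later swaps the destination behind it. This is a generic Flask illustration of the mechanism, not code from the paper, and the URLs are hypothetical.

    from flask import Flask, redirect

    app = Flask(__name__)
    TARGET = {"url": "https://example.com/cute-kitten-video"}  # what early viewers see

    @app.route("/v/<slug>")
    def follow(slug):
        # The social media post embeds only /v/<slug>; previews, likes and
        # comments all attach to this short URL, which never changes.
        return redirect(TARGET["url"], code=302)

    @app.route("/admin/switch")
    def switch():
        # Later, the destination is silently flipped without touching the post.
        TARGET["url"] = "https://example.com/completely-different-content"
        return "switched"

    if __name__ == "__main__":
        app.run(port=8080)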

During experiments, the researchers used the Chameleon method to change publicly-posted videos on Facebook. Comments and like counts stayed the same, but no indication of the alterations was made available to anyone who had previously interacted with the content.

“Imagine watching and ‘liking’ a cute kitty video in your Facebook feed and a day later a friend calls to find out why you ‘liked’ a video of an ISIS execution,” says Dr. Rami Puzis, a researcher in the BGU Department of Software and Information Systems Engineering. “You log back on and find that indeed there’s a ‘like’ there. The repercussions from indicating support by liking something you would never do (Biden vs. Trump, Yankees vs. Red Sox, ISIS vs. US) from employers, friends, family, or government enforcement unaware of this social media scam can wreak havoc in just minutes.”  

Scams come to mind first, but in a world where propaganda, fake news, and troll farming run rampant across social networks — the alleged interference of Russia in the previous US election being a prime example — and given the close ties between our physical and digital identities, these design weaknesses may have serious ramifications for users.

In a hypothetical attack scenario, the researchers say that a target could be selected and reconnaissance across a social network performed. Acceptable posts and links could then be created to “build trust” with an unaware victim — or group — before the switch is made via a Chameleon attack, quickly altering the target’s viewable likes and comments to relate to other content. 

“First and foremost, social network Chameleons can be used for shaming or incrimination, as well as to facilitate the creation and management of fake profiles in social networks,” Puzis says. “They can also be used to evade censorship and monitoring, in which a disguised post reveals its true self after being approved by a moderator.”

When contacted by the team, Facebook dismissed any concerns, labeling the issue a phishing attack and saying that, as a result, “such issues do not qualify under our bug bounty program.”

The LinkedIn team has begun an investigation. 

Both Facebook and LinkedIn, however, have partial mitigation in place as an icon is set when content is edited post-publication.

Twitter said the behavior was reported to the microblogging platform in the past, saying “while it may not be ideal, at this time, we do not believe this poses more of a risk than the ability to tweet a URL of any kind since the content of any web page may also change without warning.”

WhatsApp and Instagram are not generally susceptible to these attacks, whereas Reddit and Flickr may be.

“On social media today, people make judgments in seconds, so this is an issue that requires solving, especially before the upcoming US election,” Puzis says. 
 
The research will be presented in April at The Web Conference in Taipei, Taiwan. 

Source: ZDNet

Just before a group of Singaporean students made their way to Israel in May, worry hung over their heads.

Rockets had been fired from the Gaza Strip, hitting the southern Israeli city of Beersheba, where a few of the students from the Singapore University of Technology and Design (SUTD) were headed for an internship. No injury or death was reported, but news of the missile attacks made some of the students’ families a little uneasy.

Still, with reassurance from the university, the seven students, aged 20 to 23, went ahead with the trip.

Prior to their departure, SUTD kept in touch with International SOS and held briefings for the students on developments in Israel and drills in case of emergencies.

None of the students heard any rocket sirens in the four months they were there.

They were the first batch of SUTD students to experience first-hand work in Israel, dubbed the land of start-ups and technology.

The seven of them belonged to either the information systems technology and design, or engineering systems and design specialisations.

Split across two start-ups and a research lab, they spent the past four months learning what it takes to be entrepreneurs and exploring research in cyber security. Their work stint ended last month.

SUTD had earlier this year established partnerships with two renowned Israeli universities – IDC Herzliya in Tel Aviv and Ben Gurion University (BGU) in Beersheba – making it the fourth Singapore university to send students to Israel for work or an exchange programme.

IN THE LAND OF CYBER DEFENCE

Israel is known for its Iron Dome, a missile defence system, but it has also been developing its cyber-security technologies.

At the forefront of these efforts is the Cyber@BGU, a research lab that delves into all sorts of projects on cyber security, big data analytics and applied research across fields.

This was also where three Singaporean SUTD students had the chance to work, thanks to Professor Yuval Elovici, who wears two hats as the head of two cyber-security labs, one at SUTD and the other at Cyber@BGU.

Prof Elovici said that Beersheba, about 100km from Tel Aviv, has been named Israel’s cyber capital, and the research lab is part of a larger high-tech park with large companies and start-ups.

By next year, about 4,000 tech experts will be working in the area, and eventually, this will grow to about 10,000, when the Israeli military moves its logistics, cyber-defence and technology units to the “smart campus”.

Third-year student Kevin Yee, 23, found it exciting to be in the heart of Beersheba. His junior, Mr Suhas Sahu, 22, worked on research to streamline huge chunks of data, and speed up detection of abnormalities in network traffic. The second-year student also started writing a research paper on the project for the lab.

For second-year student Sithanathan Bhuvaneswari, 20, her project was on the detection of tampered MRI scans, which were injected with fake cancer cells.

“Previously our only reference of cyber security was hacking in movies. But these projects about protecting identity and networks make everything more real, and it opens your eyes to the opportunities out there.”

A HOTBED FOR START-UPS

The other students were in Tel Aviv, the heart of Israel’s booming start-up scene.

Third-year student Eunice Lee, 21, was an intern at Pick a Pier, a company set up to make booking of docking spaces more efficient for marinas and boaters.

She was asked to create a smart algorithm to better match the supply of these spaces to the demand.

“I feel empowered in a start-up, where I have to be more accountable for my own work and make sure nothing goes wrong.”

She enjoyed her experience in Israel so much that she is even considering returning to work there full time after graduation.

In another part of the city, three students were based at I Know First, a fintech start-up that uses an advanced algorithm to analyse and predict the global stock market.

Their job roles included writing reports on the company’s predictive technology, comparing data with historical prices, and creating programming scripts to speed up data processing for the firm’s website.

Third-year student Jireh Tan, 23, who also helped to redesign the website, said: “I’ve always wanted to see what it’s like to work in a small environment, to really feel the responsibility of what I do every day.”

His peer Clarence Toh, 22, said: “Because start-ups are so small, you can see how they function and how they stay lean. Through this experience, I’ve seen how starting a business is not impossible, if you have a product that can meet demand.”

The Singapore University of Technology and Design (SUTD) ramped up overseas work opportunities for students in the past two years.

Overall demand to get work experience abroad has risen, with about 30 per cent more students going for overseas internships. This year, 103 students applied for 120 such positions, double the number of applicants last year.

Eventually, 47 students took up internships in 14 countries abroad this year. Some had either declined the positions offered or gone for local internships.

There is funding from SUTD and Enterprise Singapore for some internships abroad, and some companies and partners provide a small stipend, lodging, or meal and transport allowance.

Israel is one of the newest destinations for SUTD, which is Singapore’s fourth autonomous university.

SUTD president Chong Tow Chong said the university’s culture of entrepreneurship has made students more interested in start-ups.

“This has prompted us to expand our network of partnerships with start-up companies worldwide,” he said.

Eligible students who go on a four-month internship stint in Israel receive a sum of $5,400 from the Young Talent Programme grant, which is co-funded by Enterprise Singapore and the university.

The others, such as the National University of Singapore and Nanyang Technological University, have work programmes in Israel for their students, while the Singapore University of Social Sciences has shorter learning trips to the country.

Amelia Teng

Source: The Straits Times

Most routers used at home have a “guest network” feature nowadays, providing friends, visitors and contractors the option to get online without (seemingly) giving them access to a core home network. Unfortunately, a new report from researchers at Israel’s Ben-Gurion University of the Negev has now suggested that enabling such guest networks introduces a critical security vulnerability.

The advice from the researchers is to disable any guest networks—if you must have multiple networks at home, they warn, use separate hardware devices.

The implication isn’t that your plumber or telephone engineer might be in the employ of Iranian hackers, so don’t let them online—it is that the architecture of the router has a core vulnerability, one that enables contamination between its secure and less secure networks. The issue is more likely to hit through a printer or IoT device that has basic in-house access, but which you don’t think has access to the internet.

Because there is this contamination within the router itself, an attack on either network could open the other network to data leaks or the planting of a malicious hack. This means an attack on a poorly secured guest network would allow data to be harvested from the core network and delivered to a threat actor over the internet. None of which would be caught by the software-based defensive solutions in place.

The research team exposed the vulnerability by “overcoming” the logical network isolation between the two different networks “using specially-crafted network traffic.” In this way, it was possible to make the channels “leak data between the host network and the guest network,” and the report warns that an attack is possible even where an attacker “has very limited permissions on the infected device, and even an iframe hosting malicious JavaScript code can be used for this purpose.”

The methods did not enable the researchers to pull large amounts of data, but they did break the security system and open the door. A targeted attack might only be looking for certain data, medical information or credentials for example. The vulnerability enables such an attack even where a guest network is not connected to the internet but has internal-only connectivity; the attack would then jump the fence and provide data to the outside actor.

What this means in practice is overloading the router such that it falls back on its covert internal architecture in an attempt to measure and manage its own performance. “Blocking this form of data transfer is more difficult, since it may require architectural changes to the router.” The researchers claim shared hardware resources must be made available to both networks for the router to function.
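
To make the cross-network leakage idea concrete, here is a heavily simplified sketch of a load-based timing covert channel through a shared router: a sender on one network encodes a bit by either hammering the router (bit 1) or staying quiet (bit 0), while a receiver on the other network times its own requests to the same gateway and thresholds the latency. The gateway address, timings, and threshold are assumptions for illustration; the published attack relies on specially crafted traffic rather than this naive flooding.

    import time
    import urllib.request

    GATEWAY = "http://192.168.1.1/"  # assumed router address reachable from both networks
    BIT_TIME = 2.0                   # assumed seconds per bit

    def send_bits(bits):
        for b in bits:
            end = time.time() + BIT_TIME
            while time.time() < end:
                if b == "1":
                    try:
                        urllib.request.urlopen(GATEWAY, timeout=0.5).read()  # load the router
                    except Exception:
                        pass
                else:
                    time.sleep(0.1)  # stay idle for a '0'

    def receive_bits(n_bits, threshold_s=0.15):
        bits = ""
        for _ in range(n_bits):
            start, samples = time.time(), []
            while time.time() - start < BIT_TIME:
                t0 = time.time()
                try:
                    urllib.request.urlopen(GATEWAY, timeout=1.0).read()
                except Exception:
                    pass
                samples.append(time.time() - t0)
                time.sleep(0.05)
            bits += "1" if sum(samples) / len(samples) > threshold_s else "0"
        return bits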

The same issue impacts businesses operating multiple networks without physical network separation—but organisational network security introduces other vulnerabilities around numbers of sign-ons and different levels of sensitivity. Air-gaps and access point control are on a different level to what is being reported here. But with almost all popular routers now offering the convenience of guest networks and with the researchers warning that “all of the routers surveyed—regardless of brand or price point—were vulnerable to at least some cross-network communication,” this is an issue that should concern home users first and foremost.

And while software tools can be deployed to plug some of the gaps uncovered, the researchers believe that to close the vulnerability without shutting down the functionality would require “a hardware-based solution—guaranteeing isolation between secure and non-secure network devices.” There is simply no way to guarantee security without hardware separation of the different networks.

As billions of new IoT devices are bought and connected, the level of security in our homes and businesses becomes more critical and more difficult to manage. The bottom line here is that even providing restricted access to an IoT device that might not seem to have any external connectivity could still allow that device to attack the core host network. And given that most of those IoT devices will be connected and forgotten and—dare I say it—made in China, that is an exposure.

The vendors of the tested hardware have been informed of the research findings; it remains to be seen whether any changes follow.

In the meantime, is your guest network under attack from foreign or domestic agents? Should you panic and pull the plug? Of course not. But the vulnerability is real; it has been tested and reported. The software-based network isolation used by your router is, simply put, not bulletproof, and it should be. So the advice is the same as it would be anyway: give some thought to whether a guest network is needed at all, and to which devices and which people you allow to connect to your system.

About Us

Cyber@BGU is an umbrella organization at Ben-Gurion University, home to a range of applied research activities in cyber security, big data analytics and AI. Residing in a newly established R&D center in the new Hi-Tech Park of Beer Sheva (Israel’s Cyber Capital), Cyber@BGU serves as a platform for the most innovative and technologically challenging projects with various industrial and governmental partners.

Latest Publications

The Creation and Detection of Deepfakes: A Survey

Yisroel Mirsky, Wenke Lee

Ben-Gurion University and Georgia Institute of Technology, May 2020

A deepfake is content generated by artificial intelligence which seems authentic in the eyes of a human being. The word deepfake is a combination of the words ‘deep learning’ and ‘fake’ and primarily relates to content generated by an artificial neural network, a branch of machine learning.

The most common form of deepfakes involves the generation and manipulation of human imagery. This technology has creative and productive applications. For example, realistic video dubbing of foreign films, education through the reanimation of historical figures, and virtually trying on clothes while shopping. There are also numerous online communities devoted to creating deepfake memes for entertainment, such as music videos portraying the face of actor Nicolas Cage.

However, despite the positive applications of deepfakes, the technology is infamous for its unethical and malicious capabilities. At the end of 2017, a Reddit user by the name of ‘deepfakes’ used deep learning to swap faces of celebrities into pornographic videos and posted them online.

Link

Deployment Optimization of IoT Devices through Attack Graph Analysis

Noga Agmon, Asaf Shabtai, Rami Puzis

Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, 11 Apr 2019

The Internet of things (IoT) has become an integral part of our life at both work and home. However, these IoT devices are prone to vulnerability exploits due to their low cost, low resources, the diversity of vendors, and proprietary firmware. Moreover, short range communication protocols (e.g., Bluetooth or ZigBee) open additional opportunities for the lateral movement of an attacker within an organization. Thus, the type and location of IoT devices may significantly change the level of network security of the organizational network. In this paper, we quantify the level of network security based on an augmented attack graph analysis that accounts for the physical location of IoT devices and their communication capabilities. We use the depth-first branch and bound (DFBnB) heuristic search algorithm to solve two optimization problems: Full Deployment with Minimal Risk (FDMR) and Maximal Utility without Risk Deterioration (MURD). An admissible heuristic is proposed to accelerate the search. The proposed method is evaluated using a real network with simulated deployment of IoT devices. The results demonstrate (1) the contribution of the augmented attack graphs to quantifying the impact of IoT devices deployed within the organization on security, and (2) the effectiveness of the optimized IoT deployment.
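As a rough illustration of how depth-first branch and bound applies to this kind of problem, the toy sketch below places two IoT devices at candidate locations on a small, made-up attack graph and searches for the full deployment with minimal risk, where risk is simplified to the number of hosts reachable from an attacker node. The graph, device list, and risk measure are invented for the example and are not the paper’s model or code.

```python
"""Toy depth-first branch and bound (DFBnB) for 'full deployment with minimal risk'.

Everything here (the graph, the candidate locations, the risk measure) is a
made-up example used to illustrate the search strategy, not the paper's model.
"""
from collections import deque

# Base attack graph: an edge (A, B) means "an attacker on A can reach B".
BASE_EDGES = {
    ("internet", "dmz"),
    ("dmz", "webserver"),
    ("office", "fileserver"),
}

# Each device must be placed at exactly one location; placing it adds edges
# (e.g., a camera in the office lets an attacker pivot from the camera's
# wireless side into the office segment).
CANDIDATES = {
    "camera":  {"dmz": {("internet", "camera"), ("camera", "dmz")},
                "office": {("internet", "camera"), ("camera", "office")}},
    "printer": {"office": {("dmz", "printer"), ("printer", "office")},
                "lab": {("dmz", "printer"), ("printer", "lab")}},
}


def risk(extra_edges) -> int:
    """Risk = number of nodes reachable from 'internet' (simple BFS)."""
    edges = BASE_EDGES | extra_edges
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
    seen, queue = {"internet"}, deque(["internet"])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) - 1  # exclude the attacker node itself


def dfbnb(devices, placed=frozenset(), best=(float("inf"), None)):
    """Depth-first search; prune branches whose partial risk >= best found.

    Because adding edges can only increase reachability, the risk of a partial
    placement is an admissible lower bound on the risk of any completion.
    """
    current = risk(placed)
    if current >= best[0]:
        return best                      # bound: no completion can do better
    if not devices:
        return (current, placed)         # full deployment reached
    device, rest = devices[0], devices[1:]
    for location, edges in CANDIDATES[device].items():
        best = dfbnb(rest, placed | edges, best)
    return best


if __name__ == "__main__":
    score, deployment = dfbnb(list(CANDIDATES))
    print("minimal risk:", score)
    print("edges added :", sorted(deployment))
```

The pruning step is where the admissible bound matters: since placing more devices never removes attack paths, a partial deployment that is already as risky as the best full deployment found so far can be abandoned without exploring its completions.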

Link

CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning

Yisroel Mirsky, Tom Mahler, Ilan Shelef, Yuval Elovici

Department of Information Systems Engineering, Ben-Gurion University, Israel; Soroka University Medical Center. 3 Apr 2019

In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market. In this paper, we show how an attacker can use deep learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. We implement the attack using a 3D conditional GAN and show how the framework (CT-GAN) can be automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results which can be executed in milliseconds. To evaluate the attack, we focused on injecting and removing lung cancer from CT scans. We show how three expert radiologists and a state-of-the-art deep learning AI are highly susceptible to the attack. We also explore the attack surface of a modern radiology network and demonstrate one attack vector: we intercepted and manipulated CT scans in an active hospital network with a covert penetration test.

Link

Analysis of Location Data Leakage in the Internet Traffic of Android-based Mobile Devices

Nir Sivan, Ron Bitton, Asaf Shabtai

Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev. 12 Dec 2018

In recent years we have witnessed a shift towards personalized, context-based applications and services for mobile device users. A key component of many of these services is the ability to infer the current location and predict the future location of users based on location sensors embedded in the devices. Such knowledge enables service providers to present relevant and timely offers to their users and better manage traffic congestion control, thus increasing customer satisfaction and engagement. However, such services suffer from location data leakage which has become one of today’s most concerning privacy issues for smartphone users.

BGU researchers focused specifically on location data that is exposed by Android applications via Internet network traffic in plaintext (i.e., without encryption) without the user’s awareness. An empirical evaluation, involving the network traffic of real mobile device users, aimed at: (1) measuring the extent of location data leakage in the Internet traffic of Android-based smartphone devices; and (2) understanding the value of this data by inferring users’ points of interests (POIs).

The key findings of this research center on the extent of this phenomenon in terms of both ubiquity and severity.
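As a simple illustration of the kind of analysis involved, the sketch below scans captured URLs (or any decoded plaintext payloads) for latitude/longitude pairs sent as query parameters. The parameter names, regular expression, and sample traffic are assumptions made for the example; the actual study analysed real device traffic with a far more thorough methodology.

```python
"""Minimal sketch: flag plaintext latitude/longitude pairs in captured URLs.

The parameter names and sample traffic below are made up for illustration;
a real analysis would decode full packet captures and handle many more encodings.
"""
import re

# Common-looking query parameters carrying coordinates in cleartext (assumed).
COORD_RE = re.compile(
    r"(?:^|[?&])(?:lat|latitude)=(-?\d{1,2}\.\d+).*?"
    r"[?&](?:lon|lng|longitude)=(-?\d{1,3}\.\d+)",
    re.IGNORECASE,
)


def find_leaks(urls):
    """Return (url, lat, lon) for every URL that exposes coordinates in plaintext."""
    leaks = []
    for url in urls:
        match = COORD_RE.search(url)
        if match:
            lat, lon = float(match.group(1)), float(match.group(2))
            if -90 <= lat <= 90 and -180 <= lon <= 180:
                leaks.append((url, lat, lon))
    return leaks


if __name__ == "__main__":
    sample_traffic = [  # hypothetical HTTP requests observed without encryption
        "http://ads.example.com/v1/track?lat=31.2627&lon=34.7983&app=weather",
        "http://cdn.example.com/logo.png",
    ]
    for url, lat, lon in find_leaks(sample_traffic):
        print(f"plaintext location leak: ({lat}, {lon}) in {url}")
```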

Link

Photo Gallery