CBG in the News

BGU researchers find it’s easy to get personal data from Zoom screenshots

Scientists warn that users should not post screen images of their video conference sessions on social media Personal data can easi...


Spies Can Eavesdrop by Watching a Light Bulb’s Vibrations

The so-called lamphone technique allows for real-time listening in on a room that’s hundreds of feet away. THE LIST OF ...


New Malware Jumps Air-Gapped Devices by Turning Power-Supplies into Speakers

Cybersecurity researcher Mordechai Guri from Israel’s Ben Gurion University of the Negev recently demonstrated a new kind of...


Israel Warns: Hackers Can Threaten Getting CoronaVirus Under Control

For they assailed you by the trickery they practiced against you because of the affair of Peor and because of the affair of thei...


Exfiltrating Data from Air-Gapped Computers Using Screen Brightness

It may sound creepy and unreal, but hackers can also exfiltrate sensitive data from your computer by simply changing the brightnes...


An Optical Spy Trick Can Turn Any Shiny Object Into a Bug

Anything from a metallic Rubik’s cube to an aluminum trash can inside a room could give away your private conversations.

THE MOST PARANOID among us already know the checklist to avoid modern audio eavesdropping: Sweep your home or office for bugs. Put your phone in a Faraday bag—or a fridge. Consider even stripping internal microphones from your devices. Now one group of researchers offers a surprising addition to that list: Remove every lightweight, metallic object from the room that’s visible from a window.

At the Black Hat Asia hacker conference in Singapore this May, researchers from Israel’s Ben Gurion University of the Negev plan to present a new surveillance technique designed to allow anyone with off-the-shelf equipment to eavesdrop on conversations if they can merely find a line of sight through a window to any of a wide variety of reflective objects in a given room. By pointing an optical sensor attached to a telescope at one of those shiny objects—the researchers tested their technique with everything from an aluminum trash can to a metallic Rubik’s cube—they could detect visible vibrations on an object’s surface that allowed them to derive sounds and thus listen to speech inside the room. Unlike older experiments that similarly watched for minute vibrations to remotely listen in on a target, this new technique lets researchers pick up lower-volume conversations, works with a far greater range of objects, and enables real-time snooping rather than after-the-fact reconstruction of a room’s audio.

“We can recover speech from lightweight, shiny objects placed in proximity to an individual who is speaking by analyzing the light reflected from them,” says Ben Nassi, the Ben Gurion professor who carried out the research along with Ras Swissa, Boris Zadov, and Yuval Elovici. “And the beauty of it is that we can do it in real time, which for espionage allows you to act on the information revealed in the content of the conversation.”

The researchers’ trick takes advantage of the fact that sound waves from speech create changes in air pressure that can imperceptibly vibrate objects in a room. In their experimental setup, they attached a photodiode, a sensor that converts light into voltage, to a telescope; the longer-range its lenses and the more light they allow to hit the sensor, the better. That photodiode was then connected to an analog-to-digital converter and a standard PC, which translated the sensor’s voltage output to data that represents the real-time fluctuations of the light reflecting from whatever object the telescope points at. The researchers could then correlate those tiny light changes to the object’s vibration in a room where someone is speaking, allowing them to reconstruct the nearby person’s speech.
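
The capture pipeline described above, a photodiode voltage sampled by an ADC and reconstructed as audio on a PC, can be sketched in a few lines of Python. This is an illustrative stand-in rather than the researchers’ code; the 44.1 kHz sample rate and the simulated 440 Hz vibration are assumptions:

```python
import math
import struct
import wave

def samples_to_wav(voltages, path, sample_rate=44100):
    """Write raw ADC voltage samples out as 16-bit mono PCM audio.

    `voltages` is a sequence of floats as they might arrive from an
    analog-to-digital converter attached to the photodiode.
    """
    lo, hi = min(voltages), max(voltages)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for v in voltages:
        # Map the measured voltage range onto the signed 16-bit range.
        sample = int(((v - lo) / span) * 65535) - 32768
        frames += struct.pack("<h", max(-32768, min(32767, sample)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(bytes(frames))

# Simulated photodiode output: a faint 440 Hz vibration riding on a DC level.
sim = [2.5 + 0.01 * math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
samples_to_wav(sim, "recovered.wav")
```

In practice the interesting signal is the tiny AC ripple riding on the photodiode’s DC level, which is why the sketch normalizes the full measured voltage range before writing the samples out.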

The researchers showed that in some cases, using a high-end analog-to-digital converter, they could recover audible speech with their technique when a speaker is about 10 inches from a shiny metallic Rubik’s cube and speaking at 75 decibels, the volume of a loud conversation. With a powerful enough telescope, their method worked from a range of as much as 115 feet. Aside from the Rubik’s cube, they tested the trick with half a dozen objects: a silvery bird figurine, a small polished metal trash can, a less-shiny aluminum iced-coffee can, an aluminum smartphone stand, and even thin metal venetian blinds.

The recovered sound was clearest when using objects like the smartphone stand or trash can, and least clear with the venetian blinds—though in some cases it was still possible to make out every word. Nassi points out that the ability to capture sounds from window coverings is particularly ironic. “This is an object designed to increase privacy in a room,” Nassi says. “However, if you’re close enough to the venetian blinds, they can be exploited as diaphragms, and we can recover sound from them.”

The Ben Gurion researchers named their technique the Little Seal Bug in an homage to a notorious Cold War espionage incident known as the Great Seal Bug: In 1945, the USSR gave a gift of a wooden seal placard displaying the US coat of arms to the embassy in Moscow, which was discovered years later to contain an RFID spy bug that was undetectable to bug sweepers of that time. Nassi suggests that the Little Seal Bug technique could similarly work when a spy sends a seemingly innocuous gift of a metallic trophy or figurine to someone, which the eavesdropper can then exploit as an ultra-stealthy listening device. But Nassi argues it’s just as likely that a target has a suitable lightweight shiny object on their desk already, in view of a window and any optical snooper.

Nassi’s team isn’t the first to suggest that long-range, optical spying can pick up vocal conversations. In 2014, a team of MIT, Adobe, and Microsoft researchers created what they called the Visual Microphone, showing it was possible to analyze a video of a houseplant’s leaves or an empty potato chip bag inside a room to similarly detect vibrations and reconstruct sound. But Nassi says his researchers’ work can pick up lower-volume sounds and requires far less processing than the video analysis used by the Visual Microphone team. The Ben Gurion team found that using a photodiode was more effective and more efficient than using a camera, allowing easier long-range listening with a new range of objects and offering real-time results.

“This definitely takes a step toward something that’s more useful for espionage,” says Abe Davis, one of the former MIT researchers who worked on the Visual Microphone and is now at Cornell. He says he has always suspected that using a different sort of camera, purpose-built for this sort of optical eavesdropping, would advance the technique. “It’s like we invented the shotgun, and this work is like, ‘We improve on the shotgun, we give you a rifle,'” Davis says.

It’s still far from clear how practical the method will be in a real-world setting, says Thomas Ristenpart, another Cornell computer scientist who has long studied side-channel attacks—techniques like the Little Seal Bug that can extract secrets from unexpected side effects of communications. He points out that even the 75-decibel words the Israeli researchers detected in their tests would be relatively loud, and background noise from an air conditioner, music, or other speakers in the room might interfere with the technique. “But as a proof of concept, it’s still interesting,” Ristenpart says.

Ben Gurion’s Ben Nassi argues, though, that the technique has proven to work well enough that an intelligence agency with a budget in the millions of dollars, rather than the mere thousands his team spent, could likely hone the method into a practical and powerful tool. In fact, he says, they may have already. “This is something that could have been exploited many years ago—and probably was exploited for many years,” says Nassi. “The things we’re revealing to the public probably have already been used by clandestine agencies for a long time.”

All of which means that anyone with secrets to keep would be wise to sweep their desk for shiny objects that might serve as inadvertent spy bugs. Or lower the window shades—just not the venetian blinds.

Source: WIRED

Good news, everyone! Security researcher [Mordechai Guri] has given us yet another reason to look askance at our computers and wonder who might be sniffing in our private doings.

This time, your suspicious gaze will settle on the lowly Ethernet cable, which he has used to exfiltrate data across an air gap. The exploit requires almost nothing in the way of fancy hardware — he used both an RTL-SDR dongle and a HackRF to receive the exfiltrated data, and didn’t exactly splurge on the receiving antenna, which was just a random chunk of wire. The attack, dubbed “LANtenna”, does require some software running on the target machine, which modulates the desired data and transmits it over the Ethernet cable using one of two methods: by toggling the speed of the network connection, or by sending raw UDP packets. Either way, an RF signal is radiated by the Ethernet cable, which was easily received and decoded over a distance of at least two meters. The bit rate is low — only a few bits per second — but that may be all a malicious actor needs to achieve their goal.
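
The UDP variant of the modulation is easy to caricature in code. The sketch below is not the researcher’s actual implementation; the loopback address, port, bit period, and packet count are placeholder assumptions. It shows the on-off keying idea: a “1” bit is a burst of raw UDP packets (the cable radiates while traffic flows), a “0” bit is silence:

```python
import socket
import time

def transmit(bits, dest=("127.0.0.1", 40000), bit_period=0.2, packets_per_bit=10):
    """On-off-key a bit string as bursts of UDP traffic (toy transmitter).

    Returns the number of packets sent per bit so the schedule is easy
    to inspect. All parameters here are illustrative placeholders.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * 1024
    sent = []
    for bit in bits:
        count = 0
        if bit == "1":
            for _ in range(packets_per_bit):
                sock.sendto(payload, dest)  # traffic on the wire -> RF emission
                count += 1
                time.sleep(bit_period / packets_per_bit)
        else:
            time.sleep(bit_period)          # silence -> no emission
        sent.append(count)
    sock.close()
    return sent

schedule = transmit("1011")  # under a second to leak four bits at these toy settings
```

A receiver such as an RTL-SDR a couple of meters away would see these bursts as amplitude steps at the bit period and could threshold them back into bits, which is consistent with the low bit rates reported.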

To be sure, this exploit is quite contrived, and fairly optimized for demonstration purposes. But it’s a pretty effective demonstration, and along with the previously demonstrated hard drive activity lights, power supply fans, and even networked security cameras, it adds another seemingly innocuous element to the list of potential vectors for side-channel attacks.

[via The Register]

Source: hackaday.com

Researchers Defeated Advanced Facial Recognition Tech Using Makeup

A new study used digitally and physically applied makeup to test the limits of state-of-the-art facial recognition software.


Researchers have found a new and surprisingly simple method for bypassing facial recognition software using makeup patterns. 

A new study from Ben-Gurion University of the Negev found that software-generated makeup patterns can be used to consistently bypass state-of-the-art facial recognition software, with digitally and physically applied makeup fooling some systems with a success rate as high as 98 percent.

In their experiment, the researchers defined their 20 participants as blacklisted individuals so their identification would be flagged by the system. They then used a selfie app called YouCam Makeup to digitally apply makeup to the facial images according to a heatmap that targets the most identifiable regions of the face. A makeup artist then replicated the digital makeup on the participants using natural-looking makeup, in order to test the target model’s ability to identify them in a realistic situation.
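
The article doesn’t spell out how the heatmap of identifiable regions is built, but occlusion analysis is a common way to approximate one: mask one region at a time, re-embed the face, and score the region by how much the match to the original embedding drops. Below is a toy sketch; `embed` is a purely hypothetical stand-in for a real face-recognition network:

```python
GRID = 4  # split the (flattened) face image into 4 regions

def embed(pixels):
    """Hypothetical stand-in for a face-embedding network: one mean
    value per region. A real attack would use a deep model here."""
    n = len(pixels) // GRID
    return [sum(pixels[i * n:(i + 1) * n]) / n for i in range(GRID)]

def similarity(a, b):
    # Negative squared distance: higher means more alike.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def occlusion_heatmap(pixels):
    """Score each region by how much occluding it hurts the match to
    the original embedding; high scores mark the regions carrying the
    most identity information (the ones to target with makeup)."""
    base = embed(pixels)
    n = len(pixels) // GRID
    scores = []
    for r in range(GRID):
        masked = list(pixels)
        for i in range(r * n, (r + 1) * n):
            masked[i] = 0.0  # occlude region r
        scores.append(similarity(base, base) - similarity(base, embed(masked)))
    return scores

# All identity signal lives in region 1 of this toy "face":
heat = occlusion_heatmap([0.0] * 4 + [10.0] * 4 + [0.0] * 8)
```

The region whose occlusion hurts recognition most is where makeup would be applied first; the same loop works unchanged with any embedding function swapped in for the toy one.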

“I was surprised by the results of this study,” Nitzan Guettan, a doctoral student and lead author of the study, told Motherboard. “[The makeup artist] didn’t do too much tricks, just see the makeup in the image and then she tried to copy it into the physical world. It’s not a perfect copy there. There are differences but it still worked.”

The researchers tested the attack method in a simulated real-world scenario in which participants wearing the makeup walked through a hallway to see whether they would be detected by a facial recognition system. The hallway was equipped with two live cameras that streamed to the MTCNN face detector while evaluating the system’s ability to identify the participant.

“Our attacker assumes a black-box scenario, meaning that the attacker cannot access the target FR model, its architecture, or any of its parameters,” the paper explains. “Therefore, [the] attacker’s only option is to alter his/her face before being captured by the cameras that feeds the input to the target FR model.” 

The experiment saw 100 percent success in the digital experiments on both the FaceNet model and the LResNet model, according to the paper. In the physical experiments, the participants were detected in 47.6 percent of the frames if they weren’t wearing any makeup and 33.7 percent of the frames if they wore randomly applied makeup. Using the researchers’ method of applying makeup to the highly identifiable parts of the attacker’s face, they were only recognized in 1.2 percent of the frames. 

The researchers are not the first to demonstrate how makeup can be used to fool facial recognition systems. In 2010, artist Adam Harvey’s CV Dazzle project presented a host of makeup looks designed to thwart algorithms, inspired by “dazzle” camouflage used by naval vessels in World War I.

Various studies have shown how facial recognition systems can be bypassed digitally, such as by creating “master faces” that could impersonate others. The paper references a study in which a printable sticker attached to a hat was used to bypass a facial recognition system, and another in which eyeglass frames were printed.

While all of these methods might hide someone from facial recognition algorithms, they have the side effect of making you very visible to other humans—especially if attempted somewhere with high security, like an airport.

In the researchers’ experiment, they addressed this by having the makeup artist use only conventional makeup techniques and neutral color palettes to achieve a natural look. Considering its success in the study, the researchers say this method could technically be replicated by anyone using store-bought makeup.

Perhaps unsurprisingly, Guettan says she generally does not trust facial recognition technology in its current state. “I don’t even use it on my iPhone,” she told Motherboard. “There are a lot of problems with this domain of facial recognition. But I think the technology is becoming better and better.”

Source: vice.com

New “Glowworm attack” recovers audio from devices’ power LEDs

A new class of passive TEMPEST attack converts LED output into intelligible audio.

Researchers at Ben-Gurion University of the Negev have demonstrated a novel way to spy on electronic conversations. A new paper released today outlines a novel passive form of the TEMPEST attack called Glowworm, which converts minute fluctuations in the intensity of power LEDs on speakers and USB hubs back into the audio signals that caused those fluctuations.

The Cyber@BGU team—consisting of Ben Nassi, Yaron Pirutin, Tomer Gator, Boris Zadov, and Professor Yuval Elovici—analyzed a broad array of widely used consumer devices including smart speakers, simple PC speakers, and USB hubs. The team found that the devices’ power indicator LEDs were generally influenced perceptibly by audio signals fed through the attached speakers.

Although the fluctuations in LED signal strength generally aren’t perceptible to the naked eye, they’re strong enough to be read with a photodiode coupled to a simple optical telescope. The slight flickering of the power LED’s output, caused by changes in voltage as the speakers draw current, is converted into an electrical signal by the photodiode; the electrical signal can then be run through a simple analog-to-digital converter (ADC) and played back directly.

A novel passive approach

With sufficient knowledge of electronics, the idea that a device’s supposedly solidly lit LEDs will “leak” information about what it’s doing is straightforward. But to the best of our knowledge, the Cyber@BGU team is the first to both publish the idea and prove that it works empirically.

The strongest features of the Glowworm attack are its novelty and its passivity. Since the approach requires absolutely no active signaling, it would be immune to any sort of electronic countermeasure sweep. And for the moment, a potential target seems unlikely to either expect or deliberately defend against Glowworm—although that might change once the team’s paper is presented later this year at the CCS 21 security conference.

The attack’s complete passivity distinguishes it from similar approaches. A laser microphone can pick up audio from the vibrations on a window pane, but because that technique requires actively illuminating the target, defenders can potentially spot it using smoke or vapor, particularly if they know the likely frequency ranges an attacker might use.

Glowworm requires no unexpected signal leakage or intrusion even while actively in use, unlike “The Thing.” The Thing was a Soviet gift to the US Ambassador in Moscow, which both required “illumination” and broadcast a clear signal while illuminated. It was a carved wooden copy of the US Great Seal, and it contained a resonator that, if lit up with a radio signal at a certain frequency (“illuminating” it), would then broadcast a clear audio signal via radio. The actual device was completely passive; it worked a lot like modern RFID chips (the things that squawk when you leave the electronics store with purchases the clerk forgot to mark as purchased).

Accidental defense

Despite Glowworm’s ability to spy on targets without revealing itself, it’s not something most people will need to worry much about. Unlike the listening devices we mentioned in the section above, Glowworm doesn’t interact with actual audio at all—only with a side effect of electronic devices that produce audio.

This means that, for example, a Glowworm attack used successfully to spy on a conference call would not capture the audio of those actually in the room—only of the remote participants whose voices are played over the conference room audio system.

The need for a clean line of sight is another issue that means that most targets will be defended from Glowworm entirely by accident. Getting a clean line of sight to a windowpane for a laser microphone is one thing—but getting a clean line of sight to the power LEDs on a computer speaker is another entirely.

Humans generally prefer to face windows themselves for the view and have the LEDs on devices face them. This leaves the LEDs obscured from a potential Glowworm attack. Defenses against simple lip-reading—like curtains or drapes—are also effective hedges against Glowworm, even if the targets don’t actually know Glowworm might be a problem.

Finally, there’s currently no real risk of a Glowworm “replay” attack using video that includes shots of vulnerable LEDs. A close-range 4K video at 60 fps can capture fluctuations only up to its 30 Hz Nyquist limit, so it might just barely capture the drop in a dubstep banger, but it won’t usefully recover human speech, whose fundamental frequencies sit between 85 Hz and 255 Hz for vowel sounds, with consonant energy between 2 kHz and 4 kHz.

Turning out the lights

Although Glowworm is practically limited by its need for clear line of sight to the LEDs, it works at significant distance. The researchers recovered intelligible audio at 35 meters—and in the case of adjoining office buildings with mostly glass facades, it would be quite difficult to detect.

For potential targets, the simplest fix is very simple indeed—just make sure that none of your devices has a window-facing LED. Particularly paranoid defenders can also mitigate the attack by placing opaque tape over any LED indicators that might be influenced by audio playback.

On the manufacturer’s side, defeating Glowworm leakage would also be relatively uncomplicated—rather than directly coupling a device’s LEDs to the power line, the LED might be coupled via an opamp or GPIO port of an integrated microcontroller. Alternatively (and perhaps more cheaply), relatively low-powered devices could damp power supply fluctuations by connecting a capacitor in parallel to the LED, acting as a low-pass filter.
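
The capacitor mitigation is ordinary RC low-pass filtering. Assuming the LED is driven through a series resistor R, a capacitor C in parallel with the LED rolls off fluctuations above the cutoff f_c = 1/(2πRC). The component values below are illustrative assumptions, not taken from the paper:

```python
import math

def cutoff_hz(r_ohms, c_farads):
    """Cutoff of the RC low-pass formed by the LED's series resistor
    and a capacitor placed in parallel with the LED."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

def cap_for_cutoff(r_ohms, f_hz):
    """Capacitance needed to attenuate fluctuations above f_hz."""
    return 1.0 / (2 * math.pi * r_ohms * f_hz)

# Example: with a 330-ohm series resistor, rolling off above 20 Hz
# (comfortably below the ~85 Hz floor of voiced speech) needs roughly
# a 24 uF capacitor.
c = cap_for_cutoff(330, 20)
```

Audio-band ripple from 85 Hz upward then gets progressively attenuated, which is exactly the “leak” Glowworm reads.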

For those interested in further details of both Glowworm and its effective mitigation, we recommend visiting the researchers’ website, which includes a link to the full 16-page white paper.

Source: Ars Technica

U.S.-Israel Energy Center announces winner of $6m funding for cybersecurity solutions in energy sector

The selected consortium, which includes a number of American and Israeli companies and universities, will receive a total of $6 million, taking the Center’s total investment to $67.2 million

The U.S. Department of Energy (DOE), together with its Israeli counterparts, the Ministry of Energy and the Israel Innovation Authority, announced on Tuesday the winner of a $6 million government funding award from the U.S.-Israel Energy Center for ensuring the cybersecurity of energy infrastructure.

The award follows the selection, in March of last year, of winners of similar grants in the fields of energy storage, fossil energy, and energy-water conservation. The total value of the investment could reach up to $12 million over a period of three years, and the total value of all four programs could reach up to $67.2 million.

The project will work toward securing critical energy assets and infrastructure

The selected consortium was led by Arizona State University and Ben-Gurion University, which will perform the research and development. Their project, entitled “Comprehensive Cybersecurity Technology for Critical Power Infrastructure AI Based Centralized Defense and Edge Resilience,” includes the following partners: the Georgia Tech Research Corporation, Nexant, Delek US Holdings Inc., Duquesne Light Company, Schweitzer Engineering Laboratories, the MITRE Corporation, Arizona Public Service, OTORIO, Rad Data Communication, SIGA OT Solutions, and Arava Power.

The U.S.-Israel Energy Center of Excellence in Energy, Engineering and Water Technology was initially authorized by Congress as part of the U.S.-Israel Strategic Partnership Act of 2014 and has been funded by the Israeli government since 2016. Government funding is expected to total $40 million over the next five years to promote energy security and economic development through research and development of innovative energy technologies, while facilitating collaboration between both countries’ companies, research institutes, and universities. The Energy Center is managed by the BIRD Foundation.

“Cybersecurity for energy infrastructure is key to deploying new innovative technologies to combat the climate crisis, promote energy justice, and create new clean energy jobs. It is vital that we ensure the security and reliability of critical energy infrastructure, as well as protecting U.S. assets. I am pleased that this international consortium will develop new tools to address the cybersecurity threats we will face as we invest in our people, supply chains, and the capacity to meet our clean energy goals,” said Dr. Andrew Light, Assistant Secretary for International Affairs (Acting) at the U.S. Department of Energy.

“The Ministry of Energy is strongly involved in protecting the water and energy sector from cyberattacks, and believes that investing in research and development is just as important,” said Udi Adiri, Director-General of the Israel Ministry of Energy.

Dr. Ami Appelbaum, Chairman of the Israel Innovation Authority and Chief Scientist at the Ministry of Economy and Industry, also commented: “In an age where technological innovations are multiplying exponentially, the risks of cyberattacks are also increasing significantly, especially in critical facilities such as energy infrastructure. We are pleased to see the high level of engagement in both countries, and look forward to the amazing changes they will bring about to ensure the security of the energy sector and the population worldwide.”

Source: CTECH

This new cyberattack can dupe DNA scientists into creating dangerous viruses and toxins

The research highlights the potential dangers of new ‘biohacking’ techniques.

A new form of cyberattack has been developed which highlights the potential future ramifications of digital assaults against the biological research sector.

On Monday, academics from Ben-Gurion University of the Negev described how “unwitting” biologists and scientists could become victims of cyberattacks designed to take biological warfare to another level.

At a time where scientists worldwide are pushing ahead with the development of potential vaccines to combat the COVID-19 pandemic, Ben-Gurion’s team says that it is no longer the case that a threat actor needs physical access to a “dangerous” substance to produce or deliver it — instead, scientists could be duped into producing toxins or synthetic viruses on their behalf through targeted cyberattacks. 

The research, “Cyberbiosecurity: Remote DNA Injection Threat in Synthetic Biology,” has been recently published in the academic journal Nature Biotechnology.

The research documents how malware used to infiltrate a biologist’s computer could replace sub-strings in DNA sequencing. Specifically, weaknesses in the Screening Framework Guidance for Providers of Synthetic Double-Stranded DNA and Harmonized Screening Protocol v2.0 systems “enable protocols to be circumvented using a generic obfuscation procedure.”

When DNA orders are made to synthetic gene providers, US Department of Health and Human Services (HHS) guidance requires screening protocols to be in place to scan for potentially harmful DNA. 

However, the team found it was possible to circumvent these protocols through obfuscation: 16 out of 50 obfuscated DNA samples went undetected by “best match” DNA screening.
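
The paper’s actual obfuscation procedure isn’t reproduced in the article, but the general weakness of substring-style screening is easy to illustrate with a toy screener: split a flagged sequence into fragments shorter than the matching window and pad them with filler, and each fragment of the order looks benign on its own. Everything below (the sequence, window size, and filler) is made up for illustration:

```python
BLACKLIST = ["ATGCGTACGTTAGC"]  # stand-in for a regulated pathogen sequence
WINDOW = 10                     # toy screener flags any 10-base match

def screen(order):
    """Toy 'best match' screen: reject the order if any WINDOW-length
    substring of a blacklisted sequence appears in it."""
    for bad in BLACKLIST:
        for i in range(len(bad) - WINDOW + 1):
            if bad[i:i + WINDOW] in order:
                return "rejected"
    return "approved"

def obfuscate(seq, frag=6):
    """Split the sequence into fragments shorter than the screening
    window and pad each with filler bases; an end-to-end attack would
    later reassemble the fragments (e.g., in vivo, per the paper)."""
    filler = "AAAA"
    return [filler + seq[i:i + frag] + filler for i in range(0, len(seq), frag)]

direct = screen("ATGCGTACGTTAGC")                       # the full sequence
fragments = [screen(f) for f in obfuscate("ATGCGTACGTTAGC")]
```

The full sequence is rejected, while every padded fragment sails through, which is the gap the researchers’ proposed screening algorithm is meant to close.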

Software used to design and manage synthetic DNA projects may also be susceptible to man-in-the-browser attacks that can be used to inject arbitrary DNA strings into genetic orders, facilitating what the team calls an “end-to-end cyberbiological attack.”

The synthetic gene engineering pipeline offered by these systems can be tampered with in browser-based attacks. Remote hackers could use malicious browser plugins, for example, to “inject obfuscated pathogenic DNA into an online order of synthetic genes.”

In a case demonstrating the possibilities of this attack, the team cited a residue of the Cas9 protein, using malware to transform this sequence into active pathogens. The Cas9 protein, when using CRISPR protocols, can be exploited to “deobfuscate malicious DNA within the host cells,” according to the team.

For an unwitting scientist processing the sequence, this could mean the accidental creation of dangerous substances, including synthetic viruses or toxic material. 

“To regulate both intentional and unintentional generation of dangerous substances, most synthetic gene providers screen DNA orders which is currently the most effective line of defense against such attacks,” commented Rami Puzis, head of the BGU Complex Networks Analysis Lab. “Unfortunately, the screening guidelines have not been adapted to reflect recent developments in synthetic biology and cyberwarfare.”

A potential attack chain is outlined in the paper.

“This attack scenario underscores the need to harden the synthetic DNA supply chain with protections against cyber-biological threats,” Puzis added. “To address these threats, we propose an improved screening algorithm that takes into account in vivo gene editing.”

Source: ZDNet

Split-Second ‘Phantom’ Images Can Fool Tesla’s Autopilot

Researchers found they could stop a Tesla by flashing a few frames of a stop sign for less than half a second on an internet-connected billboard.

SAFETY CONCERNS OVER automated driver-assistance systems like Tesla’s usually focus on what the car can’t see, like the white side of a truck that one Tesla confused with a bright sky in 2016, leading to the death of a driver. But one group of researchers has been focused on what autonomous driving systems might see that a human driver doesn’t—including “phantom” objects and signs that aren’t really there, which could wreak havoc on the road.

Researchers at Israel’s Ben Gurion University of the Negev have spent the last two years experimenting with those “phantom” images to trick semi-autonomous driving systems. They previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.

“The attacker just shines an image of something on the road or injects a few frames into a digital billboard, and the car will apply the brakes or possibly swerve, and that’s dangerous,” says Yisroel Mirsky, a researcher for Ben Gurion University and Georgia Tech who worked on the research, which will be presented next month at the ACM Computer and Communications Security conference. “The driver won’t even notice at all. So somebody’s car will just react, and they won’t understand why.”

In their first round of research, published earlier this year, the team projected images of human figures onto a road, as well as road signs onto trees and other surfaces. They found that at night, when the projections were visible, they could fool both a Tesla Model X running the HW2.5 Autopilot driver-assistance system—the most recent version available at the time, now the second-most-recent—and a Mobileye 630 device. They managed to make a Tesla stop for a phantom pedestrian that appeared for a fraction of a second, and tricked the Mobileye device into communicating the incorrect speed limit to the driver with a projected road sign.

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video.

The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions. And while they tested their technique on a TV-sized billboard screen on a small road, they say it could easily be adapted to a digital highway billboard, where it could cause much more widespread mayhem.
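
The researchers’ block-selection algorithm isn’t described in detail in the article. As a purely illustrative stand-in, one plausible heuristic scores each tile of the frame by how much it already changes between consecutive frames and hides the injected sign where the video is busiest:

```python
def block_scores(frame_a, frame_b, block=4):
    """Mean absolute per-pixel change of each block x block tile between
    two consecutive grayscale frames (lists of rows, values 0-255).

    A stand-in heuristic, not the paper's algorithm: tiles that already
    flicker between frames are assumed to mask an injected frame better.
    """
    h, w = len(frame_a), len(frame_a[0])
    scores = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diff = sum(
                abs(frame_a[y][x] - frame_b[y][x])
                for y in range(by, min(by + block, h))
                for x in range(bx, min(bx + block, w))
            )
            scores[(by, bx)] = diff / (block * block)
    return scores

def best_tile(frame_a, frame_b, block=4):
    """Tile coordinate where a phantom sign would be least conspicuous
    under this heuristic."""
    scores = block_scores(frame_a, frame_b, block)
    return max(scores, key=scores.get)

# Toy 8x8 frames: only the bottom-right tile changes between frames.
a = [[0] * 8 for _ in range(8)]
b = [[0] * 8 for _ in range(8)]
for y in range(4, 8):
    for x in range(4, 8):
        b[y][x] = 200
```

Whatever scoring function is actually used, the structure is the same: rank candidate regions of the billboard video, then slip the few sign frames into the top-ranked one.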

The Ben Gurion researchers are far from the first to demonstrate methods of spoofing inputs to a Tesla’s sensors. As early as 2016, one team of Chinese researchers demonstrated they could spoof and even hide objects from Tesla’s sensors using radio, sonic, and light-emitting equipment. More recently, another Chinese team found they could exploit Tesla’s lane-follow technology to trick a Tesla into changing lanes just by planting cheap stickers on a road.

“Somebody’s car will just react, and they won’t understand why.”


But the Ben Gurion researchers point out that unlike those earlier methods, their projections and hacked billboard tricks don’t leave behind physical evidence. Breaking into a billboard in particular can be performed remotely, as plenty of hackers have previously demonstrated. The team speculates that the phantom attacks could be carried out as an extortion technique, as an act of terrorism, or for pure mischief. “Previous methods leave forensic evidence and require complicated preparation,” says Ben Gurion researcher Ben Nassi. “Phantom attacks can be done purely remotely, and they do not require any special expertise.”

Neither Mobileye nor Tesla responded to WIRED’s request for comment. But in an email to the researchers themselves last week, Tesla made a familiar argument that its Autopilot feature isn’t meant to be a fully autonomous driving system. “Autopilot is a driver assistance feature that is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time,” reads Tesla’s response. The Ben Gurion researchers counter that Autopilot is used very differently in practice. “As we know, people use this feature as an autopilot and do not keep 100 percent attention on the road while using it,” writes Mirsky in an email. “Therefore, we must try to mitigate this threat to keep people safe, regardless of [Tesla’s] warnings.”

Tesla does have a point, though not one that offers much consolation to its own drivers. Tesla’s Autopilot system depends largely on cameras and, to a lesser extent, radar, while more truly autonomous vehicles like those developed by Waymo, Uber, or GM-owned autonomous vehicle startup Cruise also integrate laser-based lidar, points out Charlie Miller, the lead autonomous vehicle security architect at Cruise. “Lidar would not have been susceptible to this type of attack,” says Miller. “You can change an image on a billboard and lidar doesn’t care, it’s measuring distance and velocity information. So these attacks wouldn’t have worked on most of the truly autonomous cars out there.”

The Ben Gurion researchers didn’t test their attacks against those other, more multi-sensor setups. But they did demonstrate ways to detect the phantoms they created even on a camera-based platform. They developed a system they call “Ghostbusters” that’s designed to take into account a collection of factors like depth, light, and the context around a perceived traffic sign, then weigh all those factors before deciding whether a road sign image is real. “It’s like a committee of experts getting together and deciding based on very different perspectives what this image is, whether it’s real or fake, and then making a collective decision,” says Mirsky. The result, the researchers say, could far more reliably defeat their phantom attacks, without perceptibly slowing down a camera-based autonomous driving system’s reactions.

Ben Gurion’s Nassi concedes that the Ghostbuster system isn’t perfect, and he argues that their phantom research shows the inherent difficulty in making autonomous driving decisions even with multiple sensors like a Tesla’s combined radar and camera. Tesla, he says, has taken a “better safe than sorry” approach that trusts the camera alone if it shows an obstacle or road sign ahead, leaving it vulnerable to their phantom attacks. But an alternative might disregard hazards if one or more of a vehicle’s sensors misses them. “If you implement a system that ignores phantoms if they’re not validated by other sensors, you will probably have some accidents,” says Nassi. “Mitigating phantoms comes with a price.”

Cruise’s Charlie Miller, who previously worked on autonomous vehicle security at Uber and Chinese self-driving car firm Didi Chuxing, counters that truly autonomous, lidar-enabled vehicles have in fact managed to solve that problem. “Attacks against sensor systems are interesting, but this isn’t a serious attack against the systems I’m familiar with,” such as Uber and Cruise vehicles, Miller says. But he still sees value in Ben Gurion’s work. “It’s something we need to think about and work on and plan for. These cars rely on their sensor inputs, and we need to make sure they’re trusted.”

Source: Wired.com

Scientists warn that users should not post screen images of their video conference sessions on social media

Members of the city commission to prevent the spread of coronavirus disease (COVID-19) vote during a meeting via Zoom video link in Lviv, Ukraine March 26, 2020.
(photo credit: REUTERS/ROMAN BALUK)

Personal data can easily be extracted from Zoom and other video conference applications, researchers from Ben Gurion University of the Negev announced today.

The Israeli researchers examined Zoom, Microsoft Teams and Google Meet and warned that users should not post screen images of their video conference sessions on social media as it was easy to identify people from these shots.

“The findings in our paper indicate that it is relatively easy to collect thousands of publicly available images of video conference meetings and extract personal information about the participants, including their face images, age, gender, and full names,” said Dr. Michael Fire, BGU Department of Software and Information Systems Engineering (SISE).

He added that “This type of extracted data can vastly and easily jeopardize people’s security and privacy, affecting adults as well as young children and the elderly.”

The researchers found that it is possible to extract private information from collage images of meeting participants posted on Instagram and Twitter. They used image processing and text recognition tools as well as social network analysis to explore the dataset of more than 15,700 collage images and more than 142,000 face images of meeting participants.

Artificial intelligence-based image-processing algorithms helped identify the same individual’s participation at different meetings by simply using either face recognition or other extracted user features like the image background.
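As a rough sketch of that cross-meeting linking step, assuming face embeddings produced by some off-the-shelf recognition model (the article does not name the exact tools used), participants can be paired across meetings by cosine similarity of their embedding vectors; the 0.9 threshold is an arbitrary illustrative choice:

```python
import numpy as np

def match_across_meetings(embs_a, embs_b, threshold=0.9):
    # Greedily pair each face from meeting A with its most similar face in
    # meeting B; pairs above `threshold` are treated as the same person.
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = unit(embs_a) @ unit(embs_b).T  # cosine similarity matrix
    matches = []
    for i, row in enumerate(sims):
        j = int(row.argmax())
        if row[j] >= threshold:
            matches.append((i, j, float(row[j])))
    return matches
```

The same comparison works for any per-user feature vector, including one derived from the image background rather than the face itself.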

The researchers were able to spot faces 80% of the time as well as detect gender and estimate age. Free web-based text recognition libraries allowed the BGU researchers to correctly determine nearly two-thirds of usernames from screenshots.

The researchers identified 1,153 people who likely appeared in more than one meeting, as well as networks of Zoom users in which all the participants were coworkers.

“This proves that the privacy and security of individuals and companies are at risk from data exposed on video conference meetings,” according to the research team, which also includes BGU SISE researchers Dima Kagan and Dr. Galit Fuhrmann Alpert.

Additionally, the researchers offered a number of recommendations on how to prevent privacy and security attacks. These include not posting video conference images online or sharing videos; using generic pseudonyms like “iZoom” or “iPhone” rather than a unique username or real name; and using a virtual background instead of a real one, since a real background can help fingerprint a user account across several meetings.

Additionally, the team advised video conferencing operators to augment their platforms with a privacy mode, such as applying filters or Gaussian noise to an image, which can disrupt facial recognition while keeping the face still recognizable.

“Since organizations are relying on video conferencing to enable their employees to work from home and conduct meetings, they need to better educate and monitor a new set of security and privacy threats,” Fire added. “Parents and children of the elderly also need to be vigilant, as video conferencing is no different than other online activity.”

Source: Jerusalem Post

The so-called lamphone technique allows for real-time listening in on a room that’s hundreds of feet away. 

THE LIST OF sophisticated eavesdropping techniques has grown steadily over the years: wiretaps, hacked phones, bugs in the wall—even bouncing lasers off of a building’s glass to pick up conversations inside. Now add another tool for audio spies: any light bulb in a room that might be visible from a window.

Researchers from Israel’s Ben-Gurion University of the Negev and the Weizmann Institute of Science today revealed a new technique for long-distance eavesdropping they call “lamphone.” They say it allows anyone with a laptop and less than a thousand dollars of equipment—just a telescope and a $400 electro-optical sensor—to listen in on any sounds in a room that’s hundreds of feet away in real time, simply by observing the minuscule vibrations those sounds create on the glass surface of a light bulb inside. By measuring the tiny changes in light output from the bulb that those vibrations cause, the researchers show that a spy can pick up sound clearly enough to discern the contents of conversations or even recognize a piece of music.

“Any sound in the room can be recovered from the room with no requirement to hack anything and no device in the room,” says Ben Nassi, a security researcher at Ben-Gurion who developed the technique with fellow researchers Yaron Pirutin and Boris Zadov, and who plans to present their findings at the Black Hat security conference in August. “You just need line of sight to a hanging bulb, and this is it.”

In their experiments, the researchers placed a series of telescopes around 80 feet away from a target office’s light bulb, and put each telescope’s eyepiece in front of a Thorlabs PDA100A2 electro-optical sensor. They then used an analog-to-digital converter to convert the electrical signals from that sensor to digital information. While they played music and speech recordings in the faraway room, they fed the information picked up by their set-up to a laptop, which analyzed the readings.

The researchers’ experimental setup, with an electro-optical sensor behind the eyepiece of a telescope, pointing at a lightbulb inside an office building more than 80 feet away.

The researchers found that the tiny vibrations of the light bulb in response to sound—movements that they measured at as little as a few hundred microns—registered as measurable changes in the light their sensor picked up through each telescope. After processing the signal through software to filter out noise, they were able to reconstruct recordings of the sounds inside the room with remarkable fidelity: They showed, for instance, that they could reproduce an audible snippet of a speech from President Donald Trump well enough for it to be transcribed by Google’s Cloud Speech API. They also generated a recording of the Beatles’ “Let It Be” clear enough that the name-that-tune app Shazam could instantly recognize it.
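A minimal sketch of the receiving side of such a setup, under assumed parameters (the sampling rate, the speech band, and a crude FFT mask standing in for the researchers’ actual filtering and equalization pipeline):

```python
import numpy as np

def recover_audio(light_samples, fs=4000, lo=80.0, hi=1800.0):
    # Remove the bulb's constant (DC) light level, band-limit to an
    # assumed speech range with a crude FFT mask, and normalize.
    x = np.asarray(light_samples, dtype=float)
    x = x - x.mean()                         # drop constant illumination
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0  # keep the speech band only
    y = np.fft.irfft(spec, n=x.size)
    peak = np.abs(y).max()
    return y / peak if peak > 0 else y
```

Feeding in a synthetic signal of constant illumination plus a tone returns a normalized waveform dominated by that tone, with the DC level and low-frequency flicker stripped out.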

The technique nonetheless has some limitations. In their tests, the researchers used a hanging bulb, and it’s not clear if a bulb mounted in a fixed lamp or a ceiling fixture would vibrate enough to derive the same sort of audio signal. The voice and music recordings they used in their demonstrations were also louder than the average human conversation, with speakers turned to their maximum volume. But the team points out that they used a relatively cheap electro-optical sensor and analog-to-digital converter, and could have upgraded to more expensive ones to pick up quieter conversations. LED bulbs also offer a signal-to-noise ratio that’s about 6.3 times that of an incandescent bulb and 70 times that of a fluorescent one.

Regardless of those caveats, Stanford computer scientist and cryptographer Dan Boneh argues that the researchers’ technique still represents a significant and potentially practical new form of what he calls a “side channel” attack—one that takes advantage of unintended leakage of information to steal secrets. “It’s a beautiful application of side channels,” Boneh says. “Even if this requires a hanging bulb and high decibels, it’s still super interesting. And it’s still just the first time this has been shown to be possible. Attacks only get better, and future research will only improve this over time.”

“You just need line of sight to a hanging bulb.”


The research team, which was advised by BGU’s Yuval Elovici and Adi Shamir, the coinventor of the ubiquitous RSA encryption system, isn’t the first to show that unexpected sonic phenomena can enable eavesdropping. Researchers have known for years that a laser bounced off a target’s window can allow spies to pick up the sounds inside. Another group of researchers showed in 2014 that the gyroscope of a compromised smartphone can pick up sounds even if the malware can’t access its microphone. The closest previous technique to lamphone is what MIT, Microsoft, and Adobe researchers in 2014 called a “visual microphone”: By analyzing video recorded via telescope of an object in a room that picks up vibrations—a bag of potato chips or a houseplant, for instance—those researchers were able to reconstruct speech and music.

But Nassi points out that the video-based technique, while far more versatile since it doesn’t require a bulb to be visible in the room, requires analysis of the video with software after it’s recorded to convert the subtle vibrations observed in an object into the sounds it picked up. Lamphone, by contrast, enables real-time spying. Since the vibrating object is itself a light source, the electro-optical sensor can pick up vibrations in far simpler visual data.

That could make lamphone significantly more practical for use in espionage than previous techniques, Nassi argues. “When you actually use it in real time you can respond in real time rather than losing the opportunity,” he says.

Still, Nassi says the researchers are publishing their findings not to enable spies or law enforcement, but to make clear to those on both sides of surveillance what’s possible. “We want to raise the awareness of this kind of attack vector,” he says. “We’re not in the game of providing tools.”

As unlikely as being targeted by this technique is, it’s also easy to forestall. Just cover any hanging bulbs, or better yet, close the curtains. And if you’re paranoid enough to be concerned about this sort of spy game, hopefully you’ve already used anti-vibration devices on those windows to prevent eavesdropping with a laser microphone. And swept your house for bugs. And removed the microphones from your phone and computer. After all, in an era when even the light bulbs have ears, a paranoiac’s work is never done.

Source: WIRED

Cybersecurity researcher Mordechai Guri from Israel’s Ben Gurion University of the Negev recently demonstrated a new kind of malware that could be used to covertly steal highly sensitive data from air-gapped and audio-gapped systems using a novel acoustic quirk in power supply units that come with modern computing devices.

Dubbed ‘POWER-SUPPLaY,’ the latest research builds on a series of techniques leveraging electromagnetic, acoustic, thermal, optical covert channels, and even power cables to exfiltrate data from non-networked computers.

“Our developed malware can exploit the computer power supply unit (PSU) to play sounds and use it as an out-of-band, secondary speaker with limited capabilities,” Dr. Guri outlined in a paper published today and shared with The Hacker News.

“The malicious code manipulates the internal switching frequency of the power supply and hence controls the sound waveforms generated from its capacitors and transformers.”

“We show that our technique works with various types of systems: PC workstations and servers, as well as embedded systems and IoT devices that have no audio hardware. Binary data can be modulated and transmitted out via the acoustic signals.”

Using Power Supply as an Out-of-Band Speaker

Air-gapped systems are considered a necessity in environments where sensitive data is involved in an attempt to reduce the risk of data leakage. The devices typically have their audio hardware disabled so as to prevent adversaries from leveraging the built-in speakers and microphones to pilfer information via sonic and ultrasonic waves.

The technique also requires that both the transmitting and receiving machines be located in close physical proximity to one another, and that they are infected with the appropriate malware to establish the communication link, such as through social engineering campaigns that exploit the target device’s vulnerabilities.

POWER-SUPPLaY functions in the same way in that the malware running on a PC can take advantage of its PSU and use it as an out-of-band speaker, thus obviating the need for specialized audio hardware.

“This technique enables playing audio streams from a computer even when audio hardware is disabled, and speakers are not present,” the researcher said. “Binary data can be modulated and transmitted out via the acoustic signals. The acoustic signals can then be intercepted by a nearby receiver (e.g., a smartphone), which demodulates and decodes the data and sends it to the attacker via the Internet.”

Put differently, the air-gap malware regulates the workload of modern CPUs to control their power consumption and the switching frequency of the PSU, emitting an acoustic signal in the range of 0-24 kHz and modulating binary data over it.
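The article gives no exact modulation parameters, so the transmit-and-receive loop below is a simulation using binary frequency-shift keying with assumed tone frequencies and a 20 ms bit slot (which matches the 50 bit/sec figure reported below); on a real transmitter each tone would be induced by square-wave CPU workloads driving the PSU, not synthesized directly:

```python
import numpy as np

def psu_fsk_waveform(bits, f0=2000.0, f1=4000.0, bit_dur=0.02, fs=48000):
    # Emit one of two tones per bit (binary FSK). These frequencies are
    # illustrative assumptions, not the paper's parameters.
    n = int(bit_dur * fs)
    t = np.arange(n) / fs
    return np.concatenate(
        [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

def demodulate(sig, f0=2000.0, f1=4000.0, bit_dur=0.02, fs=48000):
    # Receiver side (e.g. a nearby smartphone): compare each bit slot's
    # correlation energy at the two candidate tone frequencies.
    n = int(bit_dur * fs)
    t = np.arange(n) / fs
    ref0 = np.exp(-2j * np.pi * f0 * t)
    ref1 = np.exp(-2j * np.pi * f1 * t)
    return [1 if abs(sig[i:i + n] @ ref1) > abs(sig[i:i + n] @ ref0) else 0
            for i in range(0, len(sig) - n + 1, n)]
```

At the assumed 20 ms bit slot, the channel carries 1/0.02 = 50 bits per second.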

Air-Gap Bypass and Cross-Device Tracking

The malware in the compromised computer, then, not only amasses sensitive data (files, URLs, keystrokes, encryption keys, etc.), it also transmits data in WAV format using the acoustic sound waves emitted from the computer’s power supply, which is decoded by the receiver — in this case, an app running on an Android smartphone.

According to the researcher, an attacker can exfiltrate data from audio-gapped systems to the nearby phone located 2.5 meters away with a maximal bit rate of 50 bit/sec.

One privacy-breaking consequence of this attack is cross-device tracking, as this technique enables the malware to capture browsing history on the compromised system and broadcast the information to the receiver.

As a countermeasure, the researcher suggests zoning sensitive systems in restricted areas where mobile phones and other electronic equipment are banned. Deploying an intrusion detection system to monitor suspicious CPU behavior, and setting up hardware-based signal detectors and jammers, could also help defend against the proposed covert channel.

With air-gapped nuclear facilities in Iran and India the target of security breaches, the new research is yet another reminder that complex supply chain attacks can be directed against isolated systems.

“The POWER-SUPPLaY code can operate from an ordinary user-mode process and doesn’t need hardware access or root-privileges,” the researcher concluded. “This proposed method doesn’t invoke special system calls or access hardware resources, and hence is highly evasive.”

Source: The Hacker News

“For they assailed you by the trickery they practiced against you because of the affair of Peor and because of the affair of their kinswoman Cozbi, daughter of the Midianite chieftain, who was killed at the time of the plague on account of Peor.”

Numbers 25:18

The new coronavirus, COVID-19, which has infected more than 80,000 people in China and other countries, is harmful not only because it causes potentially fatal disease. As advances in microbiology bring whole genome sequencing of infectious agents to the forefront of disease diagnostics, new cybersecurity risks for public health have materialized.

Next-generation sequencing (NGS) is a new method to identify and characterize pathogens quickly, leading to faster treatment. As DNA sequencing has become cheaper, the next step is to move from the lab into the field and – in the future – even into homes. 

Researchers at Ben-Gurion University of the Negev in Beersheba argue in a first-of-its-kind policy paper, published in the open-access journal Eurosurveillance, that cyber-attacks on NGS-based public health surveillance systems could have harmful effects such as the false detection of significant public health threats. They could also lead to delayed recognition of epidemics. Such incidents could have a major regional or global impact on contemporary global health challenges, as is evident from the recent natural spread of the new coronavirus.

Such a development exposes microbial test results and DNA sequence banks to potential hackers. Therefore, protection of this data from troublemakers in cyberspace must be built as part and parcel of the products themselves and not tacked on as an afterthought, the Israeli scientists declare.

“Computerized medical equipment is an attractive target for malicious cyber activity, as it is among a rapidly shrinking group of industries which combine mission-critical infrastructure and high value data with relatively weak cybersecurity standards,” the researchers wrote.

The team was headed by Prof. Jacob Moran-Gilad and Dr. Yair Motro from the BGU Faculty of Health Sciences’ department of health systems management, with Prof. Lior Rokach, Dr. Yossi Oren and student Iliya Fayans of the department of software and information systems engineering in the Faculty of Engineering Sciences.

The researchers identify a number of potentially vulnerable points in the process: from sample processing and DNA sequencing to bioinformatics software and sequence-based public health surveillance systems, the whole NGS pipeline is currently vulnerable.

But as some attacks are more likely than others and may have different impacts, the team discusses 12 attacks, ranking them by impact: three with major effects, six with moderate effects and three with minor ones.

The researchers also offer a series of recommendations for how NGS devices and surveillance systems should be built. “NGS devices, bioinformatics software and surveillance systems present challenges beyond the normal ones of other information-technology devices. Thus, cyber security must be considered when devices and systems are designed,” concluded Oren, rather than tacked on afterwards, as is often the case with today’s Internet-of-Things devices (networks of internet-connected objects able to collect and exchange data).

Source: Breaking Israel News

It may sound creepy and unreal, but hackers can also exfiltrate sensitive data from your computer by simply changing the brightness of the screen, new cybersecurity research shared with The Hacker News revealed.

In recent years, several cybersecurity researchers demonstrated innovative ways to covertly exfiltrate data from a physically isolated air-gapped computer that can’t connect wirelessly or physically with other computers or network devices.

These clever ideas rely on exploiting little-noticed emissions of a computer’s components, such as light, sound, heat, radio frequencies, or ultrasonic waves, and even the current fluctuations in the power lines.

For instance, potential attackers could sabotage supply chains to infect an air-gapped computer, but they can’t always count on an insider to unknowingly carry a USB with the data back out of a targeted facility.

When it comes to high-value targets, these unusual techniques, which may sound theoretical and useless to many, could play an important role in exfiltrating sensitive data from an infected but air-gapped computer.

How Does the Brightness Air-Gapped Attack Work?

In his latest research with fellow academics, Mordechai Guri, the head of the cybersecurity research center at Israel’s Ben Gurion University, devised a new covert optical channel that attackers can use to steal data from air-gapped computers without requiring network connectivity or physical contact with the devices.

“This covert channel is invisible, and it works even while the user is working on the computer. Malware on a compromised computer can obtain sensitive data (e.g., files, images, encryption keys, and passwords), and modulate it within the screen brightness, invisible to users,” the researchers said.

The fundamental idea behind encoding and decoding the data is similar to previous cases: the malware encodes the collected information as a stream of bytes and then modulates it as a ‘1’ and ‘0’ signal.

In this case, the attacker uses small changes in LCD screen brightness, which remain invisible to the naked eye, to covertly modulate binary information in Morse-code-like patterns.

“In LCD screens each pixel presents a combination of RGB colors which produce the required compound color. In the proposed modulation, the RGB color component of each pixel is slightly changed.”

“These changes are invisible, since they are relatively small and occur fast, up to the screen refresh rate. Moreover, the overall color change of the image on the screen is invisible to the user.”

The attacker, on the other hand, can collect this data stream using video recording of the compromised computer’s display, taken by a local surveillance camera, smartphone camera, or a webcam and can then reconstruct exfiltrated information using image processing techniques.

As shown in the video demonstration shared with The Hacker News, the researchers infected an air-gapped computer with specialized malware that intercepts the screen buffer to modulate the data in ASK (amplitude-shift keying) by modifying the brightness of the bitmap according to the current bit (‘1’ or ‘0’).
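A toy end-to-end version of that ASK channel, with an assumed brightness step (a delta of 3 out of 255 on the red channel, an illustrative value rather than the paper’s calibrated one):

```python
import numpy as np

def modulate_frame(frame, bit, delta=3):
    # Transmit one bit per displayed frame: nudge the red channel up by
    # `delta` (out of 255) for a '1', leave the frame untouched for a '0'.
    out = frame.astype(np.int16)
    if bit:
        out[..., 0] = np.clip(out[..., 0] + delta, 0, 255)
    return out.astype(np.uint8)

def demodulate_frames(captured, reference, delta=3):
    # Camera side: compare each captured frame's mean red level against
    # a known baseline frame to decide '1' or '0'.
    base = reference[..., 0].astype(float).mean()
    return [1 if f[..., 0].astype(float).mean() - base > delta / 2 else 0
            for f in captured]
```

The demodulator here assumes the camera can compare against a known baseline frame; a real receiver would have to recover the baseline and frame timing from the video itself.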

You can find detailed technical information on this research in the paper [PDF] titled, ‘BRIGHTNESS: Leaking Sensitive Data from Air-Gapped Workstations via Screen Brightness,’ published yesterday by Mordechai Guri, Dima Bykhovsky and Yuval Elovici.

Air-Gapped Popular Data Exfiltration Techniques

It’s not the first time Ben-Gurion researchers have come up with a covert technique to target air-gapped computers. Their previous research on hacking air-gapped machines includes:

  • PowerHammer attack that exfiltrates data from air-gapped computers through power lines;
  • MOSQUITO technique by which two (or more) air-gapped PCs placed in the same room can covertly exchange data via ultrasonic waves;
  • BeatCoin technique that could let attackers steal private encryption keys from air-gapped cryptocurrency wallets;
  • aIR-Jumper attack that takes sensitive information from air-gapped computers with the help of infrared-equipped CCTV cameras used for night vision;
  • MAGNETO and ODINI techniques that use CPU-generated magnetic fields as a covert channel between air-gapped systems and nearby smartphones;
  • USBee attack that can be used to steal data from air-gapped computers via radio frequency transmissions from USB connectors;
  • DiskFiltration attack that can steal data using sound signals emitted from the hard disk drive (HDD) of the targeted air-gapped computer;
  • BitWhisper that relies on heat exchange between two computer systems to stealthily siphon passwords or security keys;
  • AirHopper that turns a computer’s video card into an FM transmitter to capture keystrokes;
  • Fansmitter technique that uses noise emitted by a computer fan to transmit data; and
  • GSMem attack that relies on cellular frequencies.

Source: The Hacker News