About Us

Welcome to Cyber@BGU, the forefront of cyber security, big data analytics, and AI applied research at Ben Gurion University. Nestled in the heart of Beer Sheva’s Hi-Tech park—Israel’s Cyber Capital—our state-of-the-art R&D center is a beacon for pioneering projects. In collaboration with industrial and governmental allies, we continually strive to drive innovation and tackle the most pressing technological challenges of our era.

Ingenious Hackers Used iPhone 13 To Steal Samsung Galaxy Crypto Key—Here’s How To Stop Them

There’s an old joke that asks, “how many hackers does it take to change a light bulb?” The correct answer is none, because nobody knew the light bulb had even been changed. Joking aside, what if a hacker could use a bulb as part of an exploit? I’m not talking about spying on you using a light bulb, or even a device power LED. But what if I told you hackers have already demonstrated how they can actually steal a cryptographic “secret key” by video-recording power LEDs? What if I told you they have already used an iPhone to steal just such a key from a Samsung Galaxy smartphone? ...

Read More ...

Hackers can steal cryptographic keys by video-recording power LEDs 60 feet away

Key-leaking side channels are a fact of life. Now they can be done by video-recording power LEDs. ...

Read More ...

COVID-bit: New COVert Channel to Exfiltrate Data from Air-Gapped Computers

An unconventional data exfiltration method leverages a previously undocumented covert channel to leak sensitive information from a...

Read More ...

ETHERLED: Air-gapped systems leak data via network card LEDs

Israeli researcher Mordechai Guri has discovered a new method to exfiltrate data from air-gapped systems using the LED indicators ...

Read More ...

New Air-Gap Attack Uses SATA Cable as an Antenna to Transfer Radio Signals

A new method devised to leak information and jump over air-gaps takes advantage of Serial Advanced Technology Attachment (SATA) or...

Read More ...

Researchers defeat facial recognition systems with universal face mask

Can attackers create a face mask that would defeat modern facial recognition (FR) systems? A group of researchers from Ben-Gu...

Read More ...

An Optical Spy Trick Can Turn Any Shiny Object Into a Bug

Anything from a metallic Rubik’s cube to an aluminum trash can inside a room could give away your private conversations. THE MOST PARANOID among us already know the checklist to avoid modern audio eavesdropping: Sweep your home or office for bugs. Put your phone in a Faraday bag—or a fridge. Consider even stripping internal microphones from your devices. Now one group of researchers offers a surprising addition to that list: Remove every lightweight, metallic object from the room that’s visible from a window. At the Black Hat Asia hacker conference in Singapore this May, researchers from Israel’s Ben G...

Read More ...

ETHERNET CABLE TURNED INTO ANTENNA TO EXPLOIT AIR-GAPPED COMPUTERS

Good news, everyone! Security researcher [Mordechai Guri] has given us yet another reason to look askance at our computers and wonder who might be sniffing in our private doings. This time, your suspicious gaze will settle on the lowly Ethernet cable, which he has used to exfiltrate data across an air gap. The exploit requires almost nothing in the way of fancy hardware — he used both an RTL-SDR dongle and a HackRF to receive the exfiltrated data, and didn’t exactly splurge on the receiving antenna, which was just a random chunk of wire. The attack, dubbed “LANtenna”, does require some software running on the target machine, whic...

Read More ...

Researchers Defeated Advanced Facial Recognition Tech Using Makeup

A new study used digitally and physically applied makeup to test the limits of state-of-the-art facial recognition software. Resea...

Read More ...

New “Glowworm attack” recovers audio from devices’ power LEDs

A new class of passive TEMPEST attack converts LED output into intelligible audio. Researchers at Ben-Gurion University of the Negev have demonstrated a novel way to spy on electronic conversations. A new paper released today outlines a novel passive form of the TEMPEST attack called Glowworm, which converts minute fluctuations in the intensity of power LEDs on speakers and USB hubs back into the audio signals that caused those fluctuations. The Cyber@BGU team—consisting of Ben Nassi, Yaron Pirutin, Tomer Gator, Boris Zadov, and Professor Yuval Elovici—analyzed a broad array of widely used consumer devices in...

Read More ...

There’s an old joke that asks, “how many hackers does it take to change a light bulb?” The correct answer is none, because nobody knew the light bulb had even been changed. Joking aside, what if a hacker could use a bulb as part of an exploit? I’m not talking about spying on you using a light bulb, or even a device power LED. But what if I told you hackers have already demonstrated how they can actually steal a cryptographic “secret key” by video-recording power LEDs? What if I told you they have already used an iPhone to steal just such a key from a Samsung Galaxy smartphone?

Power LEDs As A Hacking Tool

The National Security Agency (NSA) has been well aware of a military spying technique known as TEMPEST, which focuses on leaking emanations such as sounds, vibrations and radio or electrical signals. During the Cold War era, such eavesdropping techniques were employed by way of beaming a laser microphone onto a window to listen to the conversations inside.

Fast-forward to 2021, when security researchers used an analysis of optical emanations from LED power indicators of speakers to recover and record sound. Invisible to the naked eye, the minute fluctuations in intensity of the LED could be read by an electro-optical sensor attached to a telescope.

Roll on another couple of years, and now researchers from the Ben-Gurion University of the Negev in Israel have moved the side-channel attack type forward once more by using the video camera of an iPhone 13 Pro Max to steal a cryptographic key from a Samsung Galaxy S8 smartphone.

iPhone 13 Pro Max To Steal Cryptographic Key

In their research paper entitled Video-Based Cryptanalysis: Extracting Cryptographic Keys from Video Footage of a Device’s Power LED, the researchers put forward video-based cryptanalysis as a “new method used to recover secret keys from a device by analyzing video footage of a device’s power LED.”

The researchers, Ben Nassi, Etay Iluz, Or Cohen, Ofek Vayner, Dudi Nassi, Boris Zadov and Yuval Elovici, were able to demonstrate how secret keys could be harvested from non-compromised devices using video recorded by consumer-grade video cameras such as found in an iPhone 13 Pro Max. Video, that is, of device power LEDs.

This is explained in the paper as being possible thanks to the fact that “cryptographic computations performed by the CPU change the power consumption of the device which affects the brightness of the device’s power LED.”

Hackers Put Power LEDs In The Frame

Video footage of a full-frame power LED is shot, and the camera’s rolling shutter is then used to increase the sampling rate to 60,000 measurements per second in the case of the iPhone 13 Pro Max. This then allows the video frames to be analyzed “in the RGB space” and associated RGB values derived to “recover the secret key by inducing the power consumption of the device from the RGB values.”

In the case of the Samsung Galaxy S8 target device, which holds a 378-bit Supersingular Isogeny Key Encapsulation (SIKE) secret key, a side-channel exploit was used. This involved analyzing video of Logitech Z120 USB speakers attached to the S8, or rather the power LED of those speakers which were connected to the same USB hub as was being used to charge the smartphone.

Attack Plausibility Outside Hacker Lab

You’ve probably already started wondering just how worried you need to be by this type of research. The truth is that, outside of the hacking lab, it’s unlikely to see much use, not least because success requires a number of elements that really don’t stand up to real-world conditions.

Ben Nassi is a BlackHat board member and a frequent speaker at hacker conferences including BlackHat, DEFCON and Hack-in-the-Box (HITB), as well as one of the researchers involved with this newly evolved attack methodology. In his research FAQ, Nassi concedes that the ability to exploit those stolen cryptographic keys (a smart-card reader was also targeted) relies on a vulnerability in the cryptographic libraries themselves. The power LEDs are simply the method used to “exploit the vulnerability visually.” Ensuring you are using the most up-to-date libraries would nullify the attack exploiting the Hertzbleed and Minerva vulnerabilities here, Nassi states.

Then there’s the line-of-sight requirement. The video camera needs to be able to “see” the power LED in question. For the smart-card attack, this meant a camera could be as far as 62 feet away, but for the Samsung Galaxy S8 attack, the iPhone had to be in the same room. Not only that, the S8 attack required, erm, 18 days’ worth of video.

If these weren’t mitigating factors enough, if you are really paranoid about your security and think someone could be using such cutting-edge methods to grab your cryptographic keys, then a bit of sticky tape over all power LEDs would do the trick.

Which isn’t to belittle the research; far from it, as it’s just this kind of envelope pushing that ultimately makes us safer from malicious actors. But it needs to be met with a proportionate response. There really isn’t any need to worry about lab-based research. Focus instead on all the bad things that are already out there.

Source: Forbes

Key-leaking side channels are a fact of life. Now they can be done by video-recording power LEDs.

Left: a smart card reader processing the encryption key of an inserted smart card. Right: a surveillance camera video records the reader’s power LED from 60 feet away.


Researchers have devised a novel attack that recovers the secret encryption keys stored in smart cards and smartphones by using cameras in iPhones or commercial surveillance systems to video record power LEDs that show when the card reader or smartphone is turned on.

The attacks enable a new way to exploit two previously disclosed side channels, a class of attack that measures physical effects that leak from a device as it performs a cryptographic operation. By carefully monitoring characteristics such as power consumption, sound, electromagnetic emissions, or the amount of time it takes for an operation to occur, attackers can assemble enough information to recover secret keys that underpin the security and confidentiality of a cryptographic algorithm.

Side-channel exploitation made simple

As Wired reported in 2008, one of the oldest known side channels was in a top-secret encrypted teletype terminal that the US Army and Navy used during World War II to transmit communications that couldn’t be read by German and Japanese spies. To the surprise of the Bell Labs engineers who designed the terminal, it caused readings from a nearby oscilloscope each time an encrypted letter was entered. While the encryption algorithm in the device was sound, the electromagnetic emissions emanating from the device were enough to provide a side channel that leaked the secret key.

Side channels have been a fact of life ever since, with new ones being found regularly. The recently discovered side channels tracked as Minerva and Hertzbleed came to light in 2019 and 2022, respectively. Minerva was able to recover the 256-bit secret key of a US-government-approved smart card by measuring timing patterns in a cryptographic process known as scalar multiplication. Hertzbleed allowed an attacker to recover the private key used by the post-quantum SIKE cryptographic algorithm by measuring the power consumption of the Intel or AMD CPU performing certain operations. Given the use of time measurement in one and power measurement in the other, Minerva is known as a timing side channel, and Hertzbleed can be considered a power side channel.

On Tuesday, academic researchers unveiled new research demonstrating attacks that provide a novel way to exploit these types of side channels. The first attack uses an Internet-connected surveillance camera to take a high-speed video of the power LED on a smart card reader—or of an attached peripheral device—during cryptographic operations. This technique allowed the researchers to pull a 256-bit ECDSA key off the same government-approved smart card used in Minerva. The other allowed the researchers to recover the private SIKE key of a Samsung Galaxy S8 phone by training the camera of an iPhone 13 on the power LED of a USB speaker connected to the handset, in a similar way to how Hertzbleed pulled SIKE keys off Intel and AMD CPUs.

Power LEDs are designed to indicate when a device is turned on. They typically cast a blue or violet light that varies in brightness and color depending on the power consumption of the device they are connected to.

There are limitations to both attacks that make them unfeasible in many (but not all) real-world scenarios (more on that later). Despite this, the published research is groundbreaking because it provides an entirely new way to facilitate side-channel attacks. Not only that, but the new method removes the biggest barrier holding back previously existing methods from exploiting side channels: the need to have instruments such as an oscilloscope, electric probes, or other objects touching or being in proximity to the device being attacked.

In Minerva’s case, the device hosting the smart card reader had to be compromised for researchers to collect precise-enough measurements. Hertzbleed, by contrast, didn’t rely on a compromised device but instead took 18 days of constant interaction with the vulnerable device to recover the private SIKE key. To attack many other side channels, such as the one in the World War II encrypted teletype terminal, attackers must have specialized and often expensive instruments attached or near the targeted device.

The video-based attacks presented on Tuesday reduce or completely eliminate such requirements. All that’s required to steal the private key stored on the smart card is an Internet-connected surveillance camera that can be as far as 62 feet away from the targeted reader. The side-channel attack on the Samsung Galaxy handset can be performed by an iPhone 13 camera that’s already present in the same room.

“One of the most significant things of this paper is the fact that you don’t need to connect the probe, connect a scope, or use a software-defined radio,” Ben Nassi, the lead researcher of the attack, said in an interview. “It’s not intrusive, and you can use common or popular devices such as a smartphone in order to apply the attack. For the case of the Internet-connected video camera, you don’t even need to approach the physical scene in order to apply the attack, which is something you cannot do with a software-defined radio or with connecting probes or things like this.”

The technique has another benefit over more traditional side-channel attacks: precision and accuracy. Attacks such as Minerva and Hertzbleed leak information through networks, which introduces latency and adds noise that must be compensated for by collecting data from large numbers of operations. This limitation is what causes the Minerva attack to require a targeted device to be compromised and the Hertzbleed attack to take 18 days.

Rocking the rolling shutter

To many people’s surprise, a standard video camera recording a power LED provides a means of data collection that is much more efficient for measuring information leaking through a side channel. When a CPU performs different cryptographic operations, a targeted device consumes varying amounts of power. The variations cause changes in brightness and sometimes colors of the power LEDs of the device or of peripherals connected to the device.

To capture the LED variations in sufficient detail, the researchers activate the rolling shutter available in newer cameras. Rolling shutter is a form of image capture akin in some ways to time-lapse photography. It rapidly records a frame line by line in a vertical, horizontal, or rotational fashion. Traditionally, a camera could only take pictures or videos at the speed of its frame rate, which maxed out at 60 to 120 frames per second.

A GIF image illustrating the concept behind rolling shutter capturing a spinning disc.

Activating a rolling shutter raises the effective sampling rate to roughly 60,000 measurements per second. By completely filling the frame with the power LED of the device (or of a connected peripheral) while it performs cryptographic operations, the researchers exploited the rolling shutter to collect enough detail for an attacker to deduce the secret key stored on a smart card, phone, or other device.

“This is possible because the intensity/brightness of the device’s power LED correlates with its power consumption, due to the fact that in many devices, the power LED is connected directly to the power line of the electrical circuit which lacks effective means (e.g., filters, voltage stabilizers) of decoupling the correlation,” the researchers wrote in Tuesday’s paper.

They continued:

We empirically analyze the sensitivity of video cameras and show that they can be used to conduct cryptanalysis because: (1) the limited eight-bit resolution (a discrete space of 256 values) of a single RGB channel of video footage of a device’s power LED is sufficient for detecting differences in the device’s power consumption which are caused by the cryptographic computations, and (2) the video camera’s rolling shutter can be exploited to upsample the sampling rate of the intensity/brightness of the power LED in the video footage to the level needed to perform cryptanalysis, i.e., increasing the number of measurements (sampling rate) of the intensity/brightness of the power LED in video footage by three orders of magnitude from the FPS rate (which provides 60–120 measurements per second) to the rolling shutter rate (which provides 60K measurements per second in the iPhone 13 Pro Max), by zooming the video camera on the power LED of the target device so the view of the LED fills the entire frame of the video footage. By doing so, attackers can use a video camera as a remote invasive alternative to the professional dedicated sensors which are usually used to conduct cryptanalysis (e.g., a scope, software-defined radio).
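As a minimal illustration of the row-by-row idea (a pure-Python toy, not the researchers’ pipeline; the frame representation and values are assumptions): with the LED filling the frame, every scan line was exposed at a slightly different instant, so one frame yields height-many brightness samples instead of one.

```python
def rolling_shutter_trace(frames):
    """frames: list of frames; each frame is a list of rows; each row
    is a list of (r, g, b) pixels.  Each row was exposed at a slightly
    different instant, so its mean green-channel intensity is one
    sample of the LED's brightness over time."""
    trace = []
    for frame in frames:
        for row in frame:
            trace.append(sum(px[1] for px in row) / len(row))
    return trace

# toy example: 2 frames of 4 rows each -> 8 samples; at 60 fps with
# ~1,000-row frames this is how 60-120 samples/s becomes ~60,000
frames = [[[(0, g, 0)] * 3 for g in (10, 20, 30, 40)] for _ in range(2)]
print(rolling_shutter_trace(frames))  # [10.0, 20.0, 30.0, 40.0, 10.0, 20.0, 30.0, 40.0]
```

A real attack would read the rows from actual video frames, but the bookkeeping is the same: rows concatenated in capture order form the upsampled brightness trace.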

The videos displayed below show the video-capture process of a smart card reader and a Samsung Galaxy phone, respectively, as they perform cryptographic operations. To the naked eye, the captured video looks unremarkable.

Zoom-in on power LED of smartcard reader.
Hertzbleed-style attack on Samsung Galaxy.


But by analyzing the video frames for different RGB values in the green channel, an attacker can identify the start and finish of a cryptographic operation.
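A toy version of that segmentation step (the threshold and trace values below are invented for illustration; the real analysis distinguishes far subtler RGB differences):

```python
def find_operations(trace, threshold):
    """Return (start, end) index pairs where the green-channel trace
    stays above threshold -- candidate windows in which a cryptographic
    operation is running."""
    ops, start = [], None
    for i, value in enumerate(trace):
        if value > threshold and start is None:
            start = i                      # operation begins
        elif value <= threshold and start is not None:
            ops.append((start, i))         # operation ends
            start = None
    if start is not None:
        ops.append((start, len(trace)))
    return ops

trace = [1, 1, 5, 6, 5, 1, 1, 7, 8, 1]
print(find_operations(trace, 3))  # [(2, 5), (7, 9)]
```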

Some restrictions apply

Here are the threat models assumed in the research:

A target device is creating a digital signature or performing a similar cryptographic operation. The device has either a standard on/off power LED (type 1) or an indicative power LED (type 2), which either maintains a constant color or changes color in response to triggered cryptographic operations. If the device doesn’t have a type 1 or type 2 power LED, it must be connected to a peripheral device that does. The brightness or color of these power LEDs must correlate with the power consumption of the device.

The attacker is a malicious entity in a position to constantly video-record the power LED of either the device or a peripheral device such as USB speakers while the cryptographic operation is taking place.

In the smart card reader’s case, the attacker acquires video by first hacking a surveillance camera that’s up to 60 feet away from—and has line of sight to—the reader’s power LED. The camera compromise must allow the attacker to control the zoom and rotation of the camera. Given numerous instances of Internet-connected video cameras being actively hacked by researchers, real-world botnet operators, and other threat actors, the assumption in the current attack isn’t an especially tall order.

When the camera is 60 feet away, the room lights must be turned off, but they can be turned on if the surveillance camera is at a distance of about 6 feet. (An attacker can also use an iPhone to record the smart card reader power LED.) The video must be captured for 65 minutes, during which the reader must constantly perform the operation.

For the Samsung Galaxy, the attacker must be able to record the power LED of USB-connected speakers from a fairly close range while the handset performs a SIKE signing operation.

The attack assumes there is an existing side channel that leaks power consumption, timing, or other physical manifestations of the device as it performs a cryptographic operation. The smart cards inserted into the readers used a code library that had yet to be patched against the Minerva attack. A library used by the Samsung Galaxy remained vulnerable against Hertzbleed. It’s likely that at least some side channels discovered in the future would also allow the attack to work.

The threat model significantly limits the scenarios under which the current attack works, so attacks aren’t likely to work against government-approved readers used on military bases or other high-security settings (the researchers didn’t test any of these).

That’s because the card readers themselves are likely hardened, and even if they’re not, smart cards issued to personnel in these settings are rotated every couple of years to ensure they contain the latest security updates. Even if both a reader and smart card are vulnerable, the reader must process the card for a full 65 minutes, something that’s impossible during a standard card swipe at a security checkpoint.

But not all settings are as carefully restricted. All six of the smart card readers found to facilitate the attack are available on Amazon and are compatible with common access cards (known as CACs) used by the US military. Four of the readers are advertised with the words “DOD,” “military,” or both. It isn’t unusual for military or government personnel to use such card readers when logging in to non-classified networks from remote locations.

“In general, as long as the particular manufacturer and model are supported by your OS, the only other prerequisites to access DoD resources are (1) that you have the current root and intermediate DoD CAs installed for your OS to trust both your smart card certificate(s) and the certs of sites/services you’re connecting to and (2) that the resource in question is directly accessible from the public Internet (vs. connecting an internal VPN first),” Matt said in an interview. That account was similar to one given by a former Air Force contractor who said, “I never had specific CAC readers dictated to me. If they worked, we were fine.”

Restrictions in corporations, state or local governments, and other organizations are likely more lenient still.

Another limitation with the attack on the Samsung Galaxy is that the SIKE algorithm was taken out of the running as a post-quantum encryption contender following the discovery of an attack that uses complex mathematics and a single traditional PC to recover the secret key protecting encrypted transactions.

In an emailed statement, Samsung officials wrote: “We can confirm that the hypothetical attack developed by researchers on the Galaxy S8 was reported to us in 2022, reviewed, and deemed low risk as the particular algorithm is not used on our devices. Consumer privacy is of the utmost importance, and we will keep our security protocols to the highest standard for all devices.”

Interesting, important, impressive

Despite the attack’s shortcomings, the results of the research are “definitely interesting and important,” particularly in the wake of the discovery of Hertzbleed and a similar attack known as Platypus, said Daniel Gruss, a researcher at Graz University of Technology who has helped discover several side channels in Intel CPUs. In an email, he wrote:

There is a line of research that becomes increasingly relevant around attacks like Platypus, Hertzbleed, and attacks like this one. The basic issue is that power side-channel attacks are incredibly powerful in what information they can leak. For the past decades, this required physical equipment, making those attacks unrealistic in many real-world settings. Now, with remote software-based attacks or the video-recording-based/air-gapped attack presented in this paper, this really changes a lot.

Dan Boneh, a computer scientist and cryptographer at Stanford University, said in an interview that even taking the limitations into account, “it is still impressive that the attack works at all.”

Gruss also noted the common observation made by many researchers that attacks only get better over time with the discovery of new techniques and vulnerabilities.

Another factor potentially offsetting some of the limitations is the rapid pace of advances being made in cameras, which in a few years may increase the range or shorten the time required for the attack. The researchers wrote:

We also raise concern regarding the real potential of video-based cryptanalysis in our days, given existing improvements in video cameras’ specifications. In our research, we focused on commonly used and popular video cameras to demonstrate video-based cryptanalysis (i.e., 8-bit space for a single RGB channel, Full-HD resolution, and maximum supported shutter speed). However, new versions of smartphones already support video footage of 10-bit resolution (e.g., iPhone 14 Pro Max and Samsung Galaxy S23 Ultra). Moreover, professional video cameras with a resolution of 12-14 bits already exist. Such video cameras may provide much greater sensitivity, which may allow attackers to perform attacks with the ability to detect very subtle changes in the device’s power consumption via the intensity of the power LED. In addition, many Internet-connected security cameras with greater optical-zoom capabilities than the video camera used in our research (25X) already exist (30X, 36X) and are likely already widely deployed. Such security cameras may allow attackers to perform video-based cryptanalysis against target devices from a greater distance than that demonstrated in this paper. Finally, new professional video cameras for photographers currently support a shutter speed of 1/180,000 (e.g., Fujifilm X-H2). The use of such video cameras may allow attackers to obtain measurements at a higher sampling rate which may expose other devices to the risk of video-based cryptanalysis.

The research is the result of a collaboration between The Urban Tech Hub at Cornell Tech and The Cyber Security Research Center at the Ben-Gurion University of the Negev. Besides Nassi, team members included Etay Iluz, Or Cohen, Ofek Vayner, Dudi Nassi, Boris Zadov, and Yuval Elovici.

The researchers recommend several countermeasures that manufacturers can take to harden devices against video-based cryptanalysis. Chief among them is avoiding the use of indicative power LEDs by integrating a capacitor that functions as a “low pass filter.” Another option is to integrate an operational amplifier between the power line and the power LED.
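A capacitor across the LED works as a first-order RC low-pass filter. As a rough, illustrative calculation (the component values below are assumptions, not figures from the paper), the cutoff frequency is 1/(2πRC):

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """Cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# e.g. a 1 kOhm series resistor and a 10 uF capacitor across the LED
fc = rc_cutoff_hz(1_000, 10e-6)
print(round(fc, 1))  # 15.9 Hz: kHz-scale power fluctuations from the
                     # CPU no longer modulate the LED, while the steady
                     # on/off indication still gets through
```

The point of the sketch is only the trade-off: the cutoff just needs to sit well below the frequencies at which cryptographic computation modulates power draw, while leaving the LED’s basic on/off function intact.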

It’s not clear if or when manufacturers of affected devices might add such countermeasures. For now, people who are unsure about the vulnerability of their devices should consider placing opaque tape on power LEDs or using other means to block them from view.

Source: ARS TECHNICA

An unconventional data exfiltration method leverages a previously undocumented covert channel to leak sensitive information from air-gapped systems.

“The information emanates from the air-gapped computer over the air to a distance of 2 m and more and can be picked up by a nearby insider or spy with a mobile phone or laptop,” Dr. Mordechai Guri, the head of R&D at the Cyber Security Research Center at the Ben-Gurion University of the Negev in Israel and the head of the Offensive-Defensive Cyber Research Lab, said in a new paper shared with The Hacker News.

The mechanism, dubbed COVID-bit, leverages malware planted on the machine to generate electromagnetic radiation in the 0-60 kHz frequency band that’s subsequently transmitted and picked up by a stealthy receiving device in close physical proximity.

This, in turn, is made possible by exploiting the dynamic power consumption of modern computers and manipulating the momentary loads on CPU cores.

COVID-bit is the latest technique devised by Dr. Guri this year, after SATAn, GAIROSCOPE, and ETHERLED, which are designed to jump over air-gaps and harvest confidential data.

Air-gapped networks, despite their high level of isolation, can be compromised by various strategies such as infected USB drives, supply chain attacks, and even rogue insiders.

Exfiltrating the data after breaching the network, however, is a challenge due to the lack of internet connectivity, necessitating that attackers concoct special methods to deliver the information.

COVID-bit is one such covert channel that the malware uses to transmit information, taking advantage of the electromagnetic emissions from a component called the switched-mode power supply (SMPS) and using a mechanism called frequency-shift keying (FSK) to encode the binary data.

“By regulating the workload of the CPU, it is possible to govern its power consumption and hence control the momentary switching frequency of the SMPS,” Dr. Guri explains.

“The electromagnetic radiation generated by this intentional process can be received from a distance using appropriate antennas” that cost as low as $1 and can be connected to a phone’s 3.5 mm audio jack to capture the low-frequency signals at a bandwidth of 1,000 bps.

The emanations are then demodulated to extract the data. The attack is also evasive in that the malicious code doesn’t require elevated privileges and can be executed from within a virtual machine.
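The FSK scheme can be illustrated with a toy model. The two frequencies below are assumed values inside the 0-60 kHz band, not figures from the paper, and the functions are hypothetical names for illustration:

```python
FREQ_0, FREQ_1 = 24_000, 26_000  # Hz used for bit 0 / bit 1 (assumed)

def fsk_encode(bits):
    """Map each bit to the SMPS switching frequency the malware
    would induce by modulating CPU load (frequency-shift keying)."""
    return [FREQ_1 if b else FREQ_0 for b in bits]

def fsk_decode(freqs):
    """Receiver side: after measuring the dominant frequency per
    symbol, pick whichever of the two expected tones is closer."""
    midpoint = (FREQ_0 + FREQ_1) / 2
    return [1 if f > midpoint else 0 for f in freqs]

message = [1, 0, 1, 1, 0, 0, 1]
assert fsk_decode(fsk_encode(message)) == message
```

In the actual attack, the “encoder” is CPU-load modulation and the “decoder” runs on a phone or laptop demodulating the received low-frequency signal; the mapping of bits to tones is the same idea.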

An evaluation of the data transmissions reveals that keystrokes can be exfiltrated in near real-time, with IP and MAC addresses taking anywhere from under 0.1 seconds to 16 seconds, depending on the bitrate.
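Those figures follow directly from the size of the data and the channel bitrate. A quick back-of-the-envelope check (the bit counts for MAC and IPv4 addresses are standard; the bitrates are assumed sample points, not the paper’s exact measurements):

```python
def exfil_seconds(n_bits, bitrate_bps):
    """Time to push n_bits through the covert channel at a given bitrate."""
    return n_bits / bitrate_bps

# A MAC address is 48 bits and an IPv4 address is 32 bits.
print(exfil_seconds(48, 1000))  # 0.048 s at the 1,000 bps upper bound
print(exfil_seconds(48, 5))     # 9.6 s at a conservative 5 bps
```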

Countermeasures against the proposed covert channel include carrying out dynamic opcode analysis to flag threats, initiating random workloads on the CPU processors when anomalous activity is detected, and monitoring or jamming signals in the 0-60 kHz spectrum.

Source: The Hacker News

Israeli researcher Mordechai Guri has discovered a new method to exfiltrate data from air-gapped systems using the LED indicators on network cards. Dubbed ‘ETHERLED’, the method turns the blinking lights into Morse code signals that can be decoded by an attacker.

Capturing the signals requires a camera with a direct line of sight to LED lights on the air-gapped computer’s card. These can be translated into binary data to steal information.

ETHERLED attack diagram (arxiv.org)

Air-gapped systems are computers typically found in highly-sensitive environments (e.g. critical infrastructure, weapon control units) that are isolated from the public internet for security reasons.

However, even air-gapped systems use a network card to communicate within their isolated networks. If an intruder infects them with specially crafted malware, it could replace the card driver with a version that modifies the LED color and blinking frequency to send waves of encoded data, Mordechai Guri has found.

The ETHERLED method can work with other peripherals or hardware that use LEDs as status or operational indicators like routers, network-attached storage (NAS) devices, printers, scanners, and various other connected devices.

Compared to previously disclosed data exfiltration methods based on optical emanation that take control of LEDs in keyboards and modems, ETHERLED is a more covert approach and less likely to raise suspicion.

ETHERLED details

The attack begins by planting malware on the target computer that contains a modified version of the network card’s firmware, allowing the attacker to control the LED blinking frequency, duration, and color.

Code to control LED indicators (arxiv.org)

Alternatively, the malware can directly attack the driver for the network interface controller (NIC) to change connectivity status or to modulate the LEDs required for generating the signals.

The three potential attack methods (arxiv.org)

The researcher found that the malicious driver can exploit documented or undocumented hardware functionality to fiddle with network connection speeds and to enable or disable the Ethernet interface, resulting in light blinks and color changes.

Network card indicators lighting up to convey signals (arxiv.org)

Guri’s tests show that each data frame begins with a ‘1010’ sequence to mark the start of the frame, followed by a 64-bit payload.

Signal contents (arxiv.org)

For data exfiltration through single status LEDs, Morse code dots and dashes lasting between 100 ms and 300 ms were generated, separated by indicator deactivation spaces between 100 ms and 700 ms.

The bitrate of the Morse code can be increased by up to ten times (10 ms dots, 30 ms dashes, and 10–70 ms spaces) when using the driver/firmware attack method.
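Interpreting the figures above, the frame layout can be sketched in a few lines. This is an illustrative reconstruction from the description in the article, not the researchers’ implementation; the payload contents and constant names are assumptions.

```python
def build_frame(payload: bytes) -> str:
    """Bitstring for one frame: the '1010' preamble described above,
    followed by a 64-bit payload (illustrative reconstruction only)."""
    assert len(payload) == 8, "payload must be exactly 64 bits"
    return "1010" + "".join(f"{b:08b}" for b in payload)

# Timing figures quoted above for the driver/firmware variant,
# read as milliseconds of LED on/off time per Morse symbol:
DOT_MS, DASH_MS = 10, 30
SPACE_MS_RANGE = (10, 70)

frame = build_frame(b"secret42")  # 4 preamble bits + 64 payload bits
```

Each such frame would then be blinked out on the NIC’s status LEDs at the timings above.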

To capture the signals remotely, threat actors can use anything from smartphone cameras (up to 30 meters), drones (up to 50 meters), hacked webcams (10 meters), and hacked surveillance cameras (30 meters), to telescopes or cameras with telephoto or superzoom lenses (over 100 meters).

The time needed to leak secrets through ETHERLED depends on the attack method: between 1 second and 1.5 minutes for passwords, 2.5 seconds to 4.2 minutes for private Bitcoin keys, and 42 seconds to an hour for 4096-bit RSA keys.

Times required to transmit secrets (arxiv.org)

Other exfiltration channels

Guri also published a paper on ‘GAIROSCOPE’, an attack on air-gapped systems that relies on generating resonance frequencies on the target system, which are captured by a nearby (up to 6 meters) smartphone’s gyroscope sensor.

In July, the same researcher presented the ‘SATAn’ attack, which uses SATA cables inside computers as antennas, generating data-carrying electromagnetic waves that can be captured by nearby (up to 1.2 meters) laptops.

The complete collection of Dr. Mordechai Guri’s air-gap covert channel methods can be found in a dedicated section on the Ben-Gurion University of the Negev website.

Source: bleepingcomputer.com

A new method devised to leak information and jump over air-gaps takes advantage of Serial Advanced Technology Attachment (SATA) or Serial ATA cables as a communication medium, adding to a long list of electromagnetic, magnetic, electric, optical, and acoustic methods already demonstrated to plunder data.

“Although air-gap computers have no wireless connectivity, we show that attackers can use the SATA cable as a wireless antenna to transfer radio signals at the 6GHz frequency band,” Dr. Mordechai Guri, the head of R&D at the Cyber Security Research Center at Ben-Gurion University of the Negev in Israel, wrote in a paper published last week.

The technique, dubbed SATAn, takes advantage of the prevalence of the computer bus interface, making it “highly available to attackers in a wide range of computer systems and IT environments.”

Put simply, the goal is to use the SATA cable as a covert channel to emanate electromagnetic signals and transfer a brief amount of sensitive information from highly secured, air-gapped computers wirelessly to a nearby receiver more than 1m away.

An air-gapped network is one that’s physically isolated from any other networks in order to increase its security. Air-gapping is seen as an essential mechanism to safeguard high-value systems that are of huge interest to espionage-motivated threat actors.

That said, attacks targeting critical mission-control systems have grown in number and sophistication in recent years, as observed recently in the case of Industroyer 2 and PIPEDREAM (aka INCONTROLLER).

Dr. Guri is no stranger to coming up with novel techniques to extract sensitive data from offline networks, with the researcher concocting four different approaches since the start of 2020 that leverage various side-channels to surreptitiously siphon information.

These include BRIGHTNESS (LCD screen brightness), POWER-SUPPLaY (power supply unit), AIR-FI (Wi-Fi signals), and LANtenna (Ethernet cables). The latest approach is no different, in that it takes advantage of the Serial ATA cable to achieve the same goals.

Serial ATA is a bus interface and an Integrated Drive Electronics (IDE) standard that’s used to transfer data at higher rates to mass storage devices. One of its chief uses is to connect hard disk drives (HDD), solid-state drives (SSD), and optical drives (CD/DVD) to the computer’s motherboard.

Unlike breaching a traditional network by means of spear-phishing or watering holes, compromising an air-gapped network requires more complex strategies such as a supply chain attack, using removable media (e.g., USBStealer and USBFerry), or rogue insiders to plant malware.

For an adversary whose aim is to steal confidential information, financial data, and intellectual property, the initial penetration is only the start of the attack chain that’s followed by reconnaissance, data gathering, and data exfiltration through workstations that contain active SATA interfaces.

In the final data reception phase, the transmitted data is captured through a hidden receiver or relies on a malicious insider in an organization to carry a radio receiver near the air-gapped system. “The receiver monitors the 6GHz spectrum for a potential transmission, demodulates the data, decodes it, and sends it to the attacker,” Dr. Guri explained.
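The transmit side of such an attack can be sketched with ordinary file I/O, since disk reads and writes are what drive traffic over the SATA link. This is a hedged illustration of the general idea, not Dr. Guri’s implementation; the file path, buffer size, timing, and on-off keying scheme are all assumptions.

```python
import os
import time

def satan_transmit(bits, path="/tmp/cov.bin", bit_time=0.05):
    """Illustrative on-off keying over the SATA link: bursts of synced
    disk writes during '1' bits generate bus activity (and hence
    emission in the band the receiver monitors); the drive sits idle
    during '0' bits. All parameters are illustrative assumptions."""
    buf = os.urandom(1 << 20)                 # 1 MiB, defeats caching
    for bit in bits:
        end = time.monotonic() + bit_time
        if bit == "1":
            while time.monotonic() < end:
                with open(path, "wb") as f:
                    f.write(buf)
                    f.flush()
                    os.fsync(f.fileno())      # force traffic onto SATA
        else:
            time.sleep(bit_time)
    os.remove(path)                           # leave no artifact behind
```

Note that the countermeasure mentioned below of issuing random read/write operations works precisely because it drowns out this kind of deliberate modulation.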

As countermeasures, it’s recommended to take steps to prevent the threat actor from gaining an initial foothold, to use an external radio frequency (RF) monitoring system to detect anomalies in the 6GHz frequency band emanating from the air-gapped system, or, alternatively, to pollute the transmission with random read and write operations when suspicious covert channel activity is detected.

Source: The Hacker News

Can attackers create a face mask that would defeat modern facial recognition (FR) systems? A group of researchers from Ben-Gurion University of the Negev and Tel Aviv University has proven that it can be done.

“We validated our adversarial mask’s effectiveness in real-world experiments (CCTV use case) by printing the adversarial pattern on a fabric face mask. In these experiments, the FR system was only able to identify 3.34% of the participants wearing the mask (compared to a minimum of 83.34% with other evaluated masks),” they noted.

A mask that works against many facial recognition models

The COVID-19 pandemic has made wearing face masks a habitual practice, and it initially hampered many facial recognition systems in use around the world. With time, though, the technology evolved and adapted to accurately identify individuals wearing medical and other masks.

But as we learned time and time again, if there is a good enough incentive, adversaries will always find new ways to achieve their intended goal.

In this particular case, the researchers took over the adversarial role and decided to find out whether they could create a specific pattern/mask that would work against modern deep learning-based FR models.

Their attempt was successful: they used a gradient-based optimization process to create a universal perturbation (and mask) that would falsely classify each wearer – no matter whether male or female – as an unknown identity, and would do so even when faced with different FR models.
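To make the gradient-based optimization concrete, here is a deliberately toy sketch: the “model” is an elementwise embedding stand-in rather than a real deep FR network, and every name and parameter is an illustrative assumption. One descent step pushes each wearer’s perturbed embedding away from their clean one, so wearers stop matching themselves, then clips the pattern so it stays subtle and printable:

```python
def perturbation_step(delta, faces, w, lr=0.1, eps=0.2):
    """One gradient-descent step toward a universal perturbation.

    Toy stand-in for the paper's method: the "model" embeds a face
    elementwise (emb_i = w_i * x_i) instead of using a deep network,
    and the step lowers the similarity emb(x) . emb(x + delta) for
    every face in the batch. All parameters are assumptions.
    """
    n = len(delta)
    grad = [0.0] * n
    for x in faces:
        for i in range(n):
            # d(similarity)/d(delta_i) = w_i^2 * x_i for this toy model
            grad[i] += (w[i] ** 2) * x[i]
    stepped = [d - lr * g / len(faces) for d, g in zip(delta, grad)]
    # Clip so the printed pattern stays subtle / printable
    return [max(-eps, min(eps, s)) for s in stepped]
```

The real attack iterates steps like this over many faces and FR models until the perturbation generalizes across wearers.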

The mask works as intended whether printed on paper or on fabric. Even more importantly, it will not raise suspicion in our post-COVID world and can easily be removed when the adversary needs to blend into real-world scenarios.

Possible countermeasures

While their mask works well, theirs is not the only “version” possible.

“The main goal of a universal perturbation is to fit any person wearing it, i.e., there is a single pattern. Having said that, the perturbation depends on the FR model it was used to attack, which means different patterns will be crafted depending on the different victim models,” Alon Zolfi, the PhD student who led the research, told Help Net Security.

If randomization is added to the process, the resulting patterns can also be slightly different.

“Tailor made masks (fitting a single person) could also be crafted and result in different adversarial patterns to widen the diversity,” he noted.

Facial recognition models can be trained to recognize people wearing this and similar adversarial masks, the researchers pointed out. Alternatively, during the inference phase, every masked face image could be preprocessed so that it looks like the person is wearing a standard mask (e.g., a blue surgical mask), because current FR models work well with those.

At the moment, FR systems rely on the entire facial area to answer whether two faces belong to the same person, and Zolfi believes there are three ways to make them “see through” a masked face image.

The first is adversarial learning, i.e., training FR models with facial images that contain adversarial patterns (whether universal or tailor-made).

The second is training FR models to make a prediction based only on the upper area of the face – but this approach has been shown to degrade the performance of the models even on unmasked facial images and is currently unsatisfactory, he noted.

Thirdly, FR models could be trained to generate the lower facial area based on the upper facial area.

“There is a popular line of work called generative adversarial network (GAN) that is used to generate what we think of as ‘inputs’ (in this case, given some input we want it to output an image of the lower facial area). This is a ‘heavy’ approach because it requires completely different model architectures, training procedures and larger physical resources during inference,” he concluded.

Source: Help Net Security

SIMON MCGILL/GETTY IMAGES

Anything from a metallic Rubik’s cube to an aluminum trash can inside a room could give away your private conversations.

THE MOST PARANOID among us already know the checklist to avoid modern audio eavesdropping: Sweep your home or office for bugs. Put your phone in a Faraday bag—or a fridge. Consider even stripping internal microphones from your devices. Now one group of researchers offers a surprising addition to that list: Remove every lightweight, metallic object from the room that’s visible from a window.

At the Black Hat Asia hacker conference in Singapore this May, researchers from Israel’s Ben Gurion University of the Negev plan to present a new surveillance technique designed to allow anyone with off-the-shelf equipment to eavesdrop on conversations if they can merely find a line of sight through a window to any of a wide variety of reflective objects in a given room. By pointing an optical sensor attached to a telescope at one of those shiny objects—the researchers tested their technique with everything from an aluminum trash can to a metallic Rubik’s cube—they could detect visible vibrations on an object’s surface that allowed them to derive sounds and thus listen to speech inside the room. Unlike older experiments that similarly watched for minute vibrations to remotely listen in on a target, this new technique lets researchers pick up lower-volume conversations, works with a far greater range of objects, and enables real-time snooping rather than after-the-fact reconstruction of a room’s audio.

“We can recover speech from lightweight, shiny objects placed in proximity to an individual who is speaking by analyzing the light reflected from them,” says Ben Nassi, the Ben Gurion professor who carried out the research along with Ras Swissa, Boris Zadov, and Yuval Elovici. “And the beauty of it is that we can do it in real time, which for espionage allows you to act on the information revealed in the content of the conversation.”

The researchers’ trick takes advantage of the fact that sound waves from speech create changes in air pressure that can imperceptibly vibrate objects in a room. In their experimental setup, they attached a photodiode, a sensor that converts light into voltage, to a telescope; the longer-range its lenses and the more light they allow to hit the sensor, the better. That photodiode was then connected to an analog-to-digital converter and a standard PC, which translated the sensor’s voltage output to data that represents the real-time fluctuations of the light reflecting from whatever object the telescope points at. The researchers could then correlate those tiny light changes to the object’s vibration in a room where someone is speaking, allowing them to reconstruct the nearby person’s speech.

The researchers showed that in some cases, using a high-end analog-to-digital converter, they could recover audible speech with their technique when a speaker is about 10 inches from a shiny metallic Rubik’s cube and speaking at 75 decibels, the volume of a loud conversation. With a powerful enough telescope, their method worked from a range of as much as 115 feet. Aside from the Rubik’s cube, they tested the trick with half a dozen objects: a silvery bird figurine, a small polished metal trash can, a less-shiny aluminum iced-coffee can, an aluminum smartphone stand, and even thin metal venetian blinds.

The recovered sound was clearest when using objects like the smartphone stand or trash can, and least clear with the venetian blinds—but still clear enough in some cases to make out every word. Nassi points out that the ability to capture sounds from window coverings is particularly ironic. “This is an object designed to increase privacy in a room,” Nassi says. “However, if you’re close enough to the venetian blinds, they can be exploited as diaphragms, and we can recover sound from them.”

The Ben Gurion researchers named their technique the Little Seal Bug in an homage to a notorious Cold War espionage incident known as the Great Seal Bug: In 1945, the USSR gave a gift of a wooden seal placard displaying the US coat of arms to the embassy in Moscow, which was discovered years later to contain an RFID spy bug that was undetectable to bug sweepers of that time. Nassi suggests that the Little Seal Bug technique could similarly work when a spy sends a seemingly innocuous gift of a metallic trophy or figurine to someone, which the eavesdropper can then exploit as an ultra-stealthy listening device. But Nassi argues it’s just as likely that a target has a suitable lightweight shiny object on their desk already, in view of a window and any optical snooper.

Nassi’s team isn’t the first to suggest that long-range, optical spying can pick up vocal conversations. In 2014, a team of MIT, Adobe, and Microsoft researchers created what they called the Visual Microphone, showing it was possible to analyze a video of a houseplant’s leaves or an empty potato chip bag inside a room to similarly detect vibrations and reconstruct sound. But Nassi says his researchers’ work can pick up lower-volume sounds and requires far less processing than the video analysis used by the Visual Microphone team. The Ben Gurion team found that using a photodiode was more effective and more efficient than using a camera, allowing easier long-range listening with a new range of objects and offering real-time results.

“This definitely takes a step toward something that’s more useful for espionage,” says Abe Davis, one of the former MIT researchers who worked on the Visual Microphone and is now at Cornell. He says he has always suspected that using a different sort of camera, purpose-built for this sort of optical eavesdropping, would advance the technique. “It’s like we invented the shotgun, and this work is like, ‘We improve on the shotgun, we give you a rifle,'” Davis says.

It’s still far from clear how practical the method will be in a real-world setting, says Thomas Ristenpart, another Cornell computer scientist who has long studied side-channel attacks—techniques like the Little Seal Bug that can extract secrets from unexpected side effects of communications. He points out that even the 75-decibel words the Israeli researchers detected in their tests would be relatively loud, and background noise from an air conditioner, music, or other speakers in the room might interfere with the technique. “But as a proof of concept, it’s still interesting,” Ristenpart says.

Ben Gurion’s Ben Nassi argues, though, that the technique has proven to work well enough that an intelligence agency with a budget in the millions of dollars rather than mere thousands his team spent could likely hone their spy method into a practical and powerful tool. In fact, he says, they may have already. “This is something that could have been exploited many years ago—and probably was exploited for many years,” says Nassi. “The things we’re revealing to the public probably have already been used by clandestine agencies for a long time.”

All of which means that anyone with secrets to keep would be wise to sweep their desk for shiny objects that might serve as inadvertent spy bugs. Or lower the window shades—just not the venetian blinds.

Source: WIRED

Good news, everyone! Security researcher [Mordechai Guri] has given us yet another reason to look askance at our computers and wonder who might be sniffing in our private doings.

This time, your suspicious gaze will settle on the lowly Ethernet cable, which he has used to exfiltrate data across an air gap. The exploit requires almost nothing in the way of fancy hardware — he used both an RTL-SDR dongle and a HackRF to receive the exfiltrated data, and didn’t exactly splurge on the receiving antenna, which was just a random chunk of wire. The attack, dubbed “LANtenna”, does require some software running on the target machine, which modulates the desired data and transmits it over the Ethernet cable using one of two methods: by toggling the speed of the network connection, or by sending raw UDP packets. Either way, an RF signal is radiated by the Ethernet cable, which was easily received and decoded over a distance of at least two meters. The bit rate is low — only a few bits per second — but that may be all a malicious actor needs to achieve their goal.
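A minimal sketch of the raw-UDP variant described above — bursts of packets radiate from the Ethernet cable during “1” bits, silence marks “0” bits. This assumes nothing beyond the standard library; the destination address, port, payload size, and bit timing are placeholders, not the parameters from the paper.

```python
import socket
import time

def lantenna_send(bits, dest=("127.0.0.1", 9999), bit_time=0.1):
    """Illustrative on-off keying for the raw-UDP LANtenna variant:
    stream packets during '1' bits so the Ethernet cable radiates,
    stay silent during '0' bits. Destination, payload size, and
    timing are placeholders, not the paper's parameters."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * 1024
    for bit in bits:
        end = time.monotonic() + bit_time
        if bit == "1":
            while time.monotonic() < end:
                sock.sendto(payload, dest)  # traffic burst -> RF emission
        else:
            time.sleep(bit_time)            # silence -> no emission
    sock.close()
```

The receiver side (the SDR dongle) simply watches the radiated band for the presence or absence of energy in each bit window.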

To be sure, this exploit is quite contrived and fairly optimized for demonstration purposes. But it’s a pretty effective demonstration, and along with the previously demonstrated hard drive activity lights, power supply fans, and even networked security cameras, it adds another seemingly innocuous element to the list of potential vectors for side-channel attacks.

[via The Register]

Source: hackaday.com

A new study used digitally and physically applied makeup to test the limits of state-of-the-art facial recognition software.

YURI KADOBNOV / GETTY IMAGES

Researchers have found a new and surprisingly simple method for bypassing facial recognition software using makeup patterns. 

A new study from Ben-Gurion University of the Negev found that software-generated makeup patterns can be used to consistently bypass state-of-the-art facial recognition software, with digitally and physically applied makeup fooling some systems with a success rate as high as 98 percent.

In their experiment, the researchers defined their 20 participants as blacklisted individuals so their identification would be flagged by the system. They then used a selfie app called YouCam Makeup to digitally apply makeup to the facial images according to a heatmap that targets the most identifiable regions of the face. A makeup artist then emulated the digital makeup on the participants using natural-looking makeup in order to test the target model’s ability to identify them in a realistic situation.

“​​I was surprised by the results of this study,” Nitzan Guettan, a doctoral student and lead author of the study, told Motherboard. “[The makeup artist] didn’t do too much tricks, just see the makeup in the image and then she tried to copy it into the physical world. It’s not a perfect copy there. There are differences but it still worked.”

The researchers tested the attack method in a simulated real-world scenario in which participants wearing the makeup walked through a hallway to see whether they would be detected by a facial recognition system. The hallway was equipped with two live cameras that streamed to the MTCNN face detector while evaluating the system’s ability to identify the participant.

“Our attacker assumes a black-box scenario, meaning that the attacker cannot access the target FR model, its architecture, or any of its parameters,” the paper explains. “Therefore, [the] attacker’s only option is to alter his/her face before being captured by the cameras that feeds the input to the target FR model.” 

The experiment saw 100 percent success in the digital experiments on both the FaceNet model and the LResNet model, according to the paper. In the physical experiments, the participants were detected in 47.6 percent of the frames if they weren’t wearing any makeup and 33.7 percent of the frames if they wore randomly applied makeup. Using the researchers’ method of applying makeup to the highly identifiable parts of the attacker’s face, they were only recognized in 1.2 percent of the frames. 

The researchers are not the first to demonstrate how makeup can be used to fool facial recognition systems. In 2010, artist Adam Harvey’s CV Dazzle project presented a host of makeup looks designed to thwart algorithms, inspired by “dazzle” camouflage used by naval vessels in World War I.

Various studies have shown how facial recognition systems can be bypassed digitally, such as by creating “master faces” that could impersonate others. The paper references a study where a printable sticker was attached to a hat to bypass a facial recognition system, and another where eyeglass frames were printed.

While all of these methods might hide someone from facial recognition algorithms, they have the side effect of making you very visible to other humans—especially if attempted somewhere with high security, like an airport.

In the researchers’ experiment, they addressed this by having the makeup artist only use conventional makeup techniques and neutral color palettes to achieve a natural look. Considering its success in the study, the researchers say this method could technically be replicated by anyone using store-bought makeup.

Perhaps unsurprisingly, Guettan says she generally does not trust facial recognition technology in its current state. “I don’t even use it on my iPhone,” she told Motherboard. “There are a lot of problems with this domain of facial recognition. But I think the technology is becoming better and better.”

Source: vice.com

A new class of passive TEMPEST attack converts LED output into intelligible audio.

Researchers at Ben-Gurion University of the Negev have demonstrated a novel way to spy on electronic conversations. A new paper released today outlines a novel passive form of the TEMPEST attack called Glowworm, which converts minute fluctuations in the intensity of power LEDs on speakers and USB hubs back into the audio signals that caused those fluctuations.

The Cyber@BGU team—consisting of Ben Nassi, Yaron Pirutin, Tomer Gator, Boris Zadov, and Professor Yuval Elovici—analyzed a broad array of widely used consumer devices including smart speakers, simple PC speakers, and USB hubs. The team found that the devices’ power indicator LEDs were generally influenced perceptibly by audio signals fed through the attached speakers.

Although the fluctuations in LED signal strength generally aren’t perceptible to the naked eye, they’re strong enough to be read with a photodiode coupled to a simple optical telescope. The slight flickering of power LED output, due to changes in voltage as the speakers consume electrical current, is converted into an electrical signal by the photodiode; the electrical signal can then be run through a simple analog-to-digital converter (ADC) and played back directly.
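As a rough sketch of that final playback step, digitized photodiode samples can be written out as audio with nothing but the standard library. This is an illustration, not the researchers’ pipeline — a real recovery would add the filtering and equalization the paper describes — and the synthetic 440 Hz tone merely stands in for real ADC output:

```python
import math
import struct
import wave

def samples_to_wav(samples, path, rate=44100):
    """Write normalized ADC samples to a mono 16-bit WAV file so the
    recovered LED fluctuation can be played back (illustrative sketch;
    a real pipeline would filter and equalize the signal first)."""
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(path, "wb") as w:
        w.setnchannels(1)          # mono
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(32767 * s / peak)) for s in samples))

# A synthetic 440 Hz tone stands in for real ADC output (0.1 s of audio):
tone = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(4410)]
samples_to_wav(tone, "recovered.wav")
```

With real photodiode data in place of the tone, the WAV file would contain whatever audio modulated the LED.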

A novel passive approach

With sufficient knowledge of electronics, the idea that a device’s supposedly solidly lit LEDs will “leak” information about what it’s doing is straightforward. But to the best of our knowledge, the Cyber@BGU team is the first to both publish the idea and prove that it works empirically.

The strongest features of the Glowworm attack are its novelty and its passivity. Since the approach requires absolutely no active signaling, it would be immune to any sort of electronic countermeasure sweep. And for the moment, a potential target seems unlikely to either expect or deliberately defend against Glowworm—although that might change once the team’s paper is presented later this year at the CCS 21 security conference.

The attack’s complete passivity distinguishes it from similar approaches such as a laser microphone, which picks up audio from the vibrations on a window pane. Defenders can potentially spot that sort of active attack using smoke or vapor—particularly if they know the likely frequency ranges an attacker might use.

Glowworm requires no unexpected signal leakage or intrusion even while actively in use, unlike “The Thing.” The Thing was a Soviet gift to the US Ambassador in Moscow, which both required “illumination” and broadcast a clear signal while illuminated. It was a carved wooden copy of the US Great Seal, and it contained a resonator that, if lit up with a radio signal at a certain frequency (“illuminating” it), would then broadcast a clear audio signal via radio. The actual device was completely passive; it worked a lot like modern RFID chips (the things that squawk when you leave the electronics store with purchases the clerk forgot to mark as purchased).

Accidental defense

Despite Glowworm’s ability to spy on targets without revealing itself, it’s not something most people will need to worry much about. Unlike the listening devices we mentioned in the section above, Glowworm doesn’t interact with actual audio at all—only with a side effect of electronic devices that produce audio.

This means that, for example, a Glowworm attack used successfully to spy on a conference call would not capture the audio of those actually in the room—only of the remote participants whose voices are played over the conference room audio system.

The need for a clean line of sight is another issue that means that most targets will be defended from Glowworm entirely by accident. Getting a clean line of sight to a windowpane for a laser microphone is one thing—but getting a clean line of sight to the power LEDs on a computer speaker is another entirely.

Humans generally prefer to face windows themselves for the view and have the LEDs on devices face them. This leaves the LEDs obscured from a potential Glowworm attack. Defenses against simple lip-reading—like curtains or drapes—are also effective hedges against Glowworm, even if the targets don’t actually know Glowworm might be a problem.

Finally, there’s currently no real risk of a Glowworm “replay” attack using video that includes shots of vulnerable LEDs. A close-range 4K, 60 fps video might just barely capture the drop in a dubstep banger—but it won’t usefully recover human speech, which centers between 85 Hz and 255 Hz for vowel sounds and 2 kHz and 4 kHz for consonants.

Turning out the lights

Although Glowworm is practically limited by its need for clear line of sight to the LEDs, it works at significant distance. The researchers recovered intelligible audio at 35 meters—and in the case of adjoining office buildings with mostly glass facades, it would be quite difficult to detect.

For potential targets, the simplest fix is very simple indeed—just make sure that none of your devices has a window-facing LED. Particularly paranoid defenders can also mitigate the attack by placing opaque tape over any LED indicators that might be influenced by audio playback.

On the manufacturer’s side, defeating Glowworm leakage would also be relatively uncomplicated—rather than directly coupling a device’s LEDs to the power line, the LED might be coupled via an opamp or GPIO port of an integrated microcontroller. Alternatively (and perhaps more cheaply), relatively low-powered devices could damp power supply fluctuations by connecting a capacitor in parallel to the LED, acting as a low-pass filter.
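The capacitor suggestion amounts to a first-order RC low-pass filter across the LED. A quick back-of-the-envelope check — the component values below are illustrative, not taken from the paper:

```python
import math

def lowpass_cutoff_hz(r_ohms, c_farads):
    """First-order RC low-pass cutoff: f_c = 1 / (2 * pi * R * C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Illustrative values (not from the paper): a 1 kilo-ohm series
# resistance with a 10 uF capacitor damps LED fluctuations above
# roughly 16 Hz -- far below the speech band.
print(round(lowpass_cutoff_hz(1e3, 10e-6), 1))  # prints 15.9
```

Since intelligible speech sits well above that cutoff, even a modest filter like this would starve the attack of its signal.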

For those interested in further details of both Glowworm and its effective mitigation, we recommend visiting the researchers’ website, which includes a link to the full 16-page white paper.

Source: Ars Technica

Latest Publications

2023/7/20

CAN-LOC: spoofing detection and physical intrusion localization on an in-vehicle CAN bus based on deep features of voltage signals

Efrat Levy, Asaf Shabtai, Bogdan Groza, Pal-Stefan Murvay, Yuval Elovici

IEEE Transactions on Information Forensics and Security, 2023

The Controller Area Network (CAN), which is used for communication between in-vehicle devices, has been shown to be vulnerable to spoofing attacks. Voltage-based spoofing detection (VBS-D) mechanisms are considered state-of-the-art solutions, complementing cryptography-based authentication whose security is limited due to the CAN protocol’s limited message size. Unfortunately, VBS-D mechanisms are vulnerable to poisoning performed by a malicious device connected to the CAN bus, specifically designed to poison the deployed VBS-D mechanism as it adapts to environmental changes that take place when the vehicle is moving. In this paper, we harden VBS-D mechanisms using a deep learning-based mechanism which runs immediately, when the vehicle starts; this mechanism utilizes physical side-channels to detect and locate physical intrusions, even when the malicious devices connected to the …

Link
2023/6/14

X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail

Omer Hofman, Amit Giloni, Yarin Hayun, Ikuya Morikawa, Toshiya Shimizu, Yuval Elovici, Asaf Shabtai

arXiv preprint arXiv:2306.08422, 2023

Object detection models, which are widely used in various domains (such as retail), have been shown to be vulnerable to adversarial attacks. Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks. We present X-Detect, a novel adversarial patch detector that can: i) detect adversarial samples in real time, allowing the defender to take preventive action; ii) provide explanations for the alerts raised to support the defender’s decision-making process, and iii) handle unfamiliar threats in the form of new attacks. Given a new scene, X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques to determine whether an alert needs to be raised. X-Detect was evaluated in both the physical and digital space using five different attack scenarios (including adaptive attacks) and the COCO dataset and our new Superstore dataset. The physical evaluation was performed using a smart shopping cart setup in real-world settings and included 17 adversarial patch attacks recorded in 1,700 adversarial videos. The results showed that X-Detect outperforms the state-of-the-art methods in distinguishing between benign and adversarial scenes for all attack scenarios while maintaining a 0% FPR (no false alarms) and providing actionable explanations for the alerts raised. A demo is available.

Link
2023/6/4

Discussion Paper: The Threat of Real Time Deepfakes

Guy Frankovits, Yisroel Mirsky

arXiv preprint arXiv:2306.02487, 2023

Generative deep learning models are able to create realistic audio and video. This technology has been used to impersonate the faces and voices of individuals. These “deepfakes” are being used to spread misinformation, enable scams, perform fraud, and blackmail the innocent. The technology continues to advance, and today attackers have the ability to generate deepfakes in real time. This new capability poses a significant threat to society as attackers begin to exploit it in advanced social engineering attacks. In this paper, we discuss the implications of this emerging threat, identify the challenges of preventing these attacks, and suggest a better direction for researching stronger defences.

Link
2023/5/3

IPatch: a remote adversarial patch

Yisroel Mirsky

Cybersecurity 6 (1), 18, 2023

Applications such as autonomous vehicles and medical screening use deep learning models to localize and identify hundreds of objects in a single frame. In the past, it has been shown how an attacker can fool these models by placing an adversarial patch within a scene. However, these patches must be placed in the target location and do not explicitly alter the semantics elsewhere in the image. In this paper, we introduce a new type of adversarial patch which alters a model’s perception of an image’s semantics. These patches can be placed anywhere within an image to change the classification or semantics of locations far from the patch. We call this new class of adversarial examples ‘remote adversarial patches’ (RAP). We implement our own RAP, called IPatch, and perform an in-depth analysis of image segmentation RAP attacks using five state-of-the-art architectures with eight …
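What distinguishes a *remote* adversarial patch is where the attack objective is measured: the patch is pasted at one location, but the loss is computed over a different, far-away region of the model's output. The sketch below illustrates only that objective structure; the 2-D-list "image", the toy misclassification loss, and the region convention (row/column half-open bounds) are all illustrative assumptions, not IPatch's actual optimization.

```python
def apply_patch(image, patch, top, left):
    """Paste a patch (2-D list) into a copy of the image at (top, left);
    the original image is left unmodified."""
    out = [row[:] for row in image]
    for i, prow in enumerate(patch):
        for j, value in enumerate(prow):
            out[top + i][left + j] = value
    return out

def remote_loss(output, target_region, target_label):
    """Count output cells in the remote target region that still disagree
    with the attacker's desired label; the attacker minimizes this while
    varying only the patch pixels, which lie outside target_region."""
    r0, r1, c0, c1 = target_region
    return sum(
        1
        for r in range(r0, r1)
        for c in range(c0, c1)
        if output[r][c] != target_label
    )
```

In a real attack the patch pixels would be optimized by gradient descent through the segmentation model so that `remote_loss` drops to zero even though the patch never overlaps the target region.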

Link

Photo Gallery