Fansmitter: Acoustic Data Exfiltration from (Speakerless) Air-Gapped Computers

Because computers may contain or interact with sensitive information, they are often air-gapped and in this way kept isolated and ...

DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise

By Security Researcher Mordec...

A Clever Radio Trick Can Tell If A Drone Is Watching You

AS FLYING, CAMERA-WIELDING machines get ever cheaper and more ubiquitous, inventors of anti-drone technologies are marketing every possible idea for protection from hovering eyes in the sky: Drone-spotting radar. Drone-snagging shotgun shells. Anti-drone lasers, falcons, even drone-downing drones. Now one group of Israeli researchers has developed a new technique for that drone-control arsenal—one that can not only detect that a drone is nearby, but determine with surprising precision if it’s spying on you, your home, or your high-security facility.

Researchers at Ben Gurion University in Beer Sheva, Israel, have built a proof-of-concept system for counter-surveillance against spy drones that demonstrates a clever, if not exactly simple, way to determine whether a certain person or object is under aerial surveillance. They first generate a recognizable pattern on whatever subject—a window, say—someone might want to guard from potential surveillance. Then they remotely intercept a drone’s radio signals to look for that pattern in the streaming video the drone sends back to its operator. If they spot it, they can determine that the drone is looking at their subject.

In other words, they can see what the drone sees, pulling out their recognizable pattern from the radio signal, even without breaking the drone’s encrypted video.

“This is the first method to tell what is being captured in a drone’s [first-person-view] channel” despite that encryption, says Ben Nassi, one of the Ben Gurion researchers who wrote a paper on the technique, along with a group that includes legendary cryptographer and co-inventor of the RSA encryption algorithm Adi Shamir. “You can observe without any doubt that someone is watching. If you can control the stimulus and intercept the traffic as well, you can fully understand whether a specific object is being streamed.”

The researchers’ technique takes advantage of an efficiency feature streaming video has used for years, known as “delta frames.” Instead of encoding video as a series of raw images, it’s compressed into a series of changes from the previous image in the video. That means when a streaming video shows a still object, it transmits fewer bytes of data than when it shows one that moves or changes color.
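The effect is easy to reproduce in miniature. The sketch below is illustrative only, using zlib as a toy stand-in for a real inter-frame video codec: each frame is compressed as a delta from the previous one, and a static scene produces far smaller frames than a changing one.

```python
import os
import zlib

def frame_bytes(frames):
    """Compress each frame as a delta (XOR) from the previous frame and
    return the per-frame compressed byte counts. A toy stand-in for real
    inter-frame ("delta frame") video coding."""
    sizes = []
    prev = bytes(len(frames[0]))
    for frame in frames:
        delta = bytes(a ^ b for a, b in zip(frame, prev))
        sizes.append(len(zlib.compress(delta)))
        prev = frame
    return sizes

# A "still" scene: identical frames, so every delta is all zeros.
still = [bytes([7]) * 4096] * 5
# A "changing" scene: every frame differs completely from the last.
changing = [os.urandom(4096) for _ in range(5)]

still_sizes = frame_bytes(still)        # tiny: deltas compress to almost nothing
changing_sizes = frame_bytes(changing)  # large: random deltas barely compress
```

Even though the payload never changes, the traffic volume does, and that difference is what leaks through the encryption.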

That compression feature can reveal key information about the content of the video to someone who’s intercepting the streaming data, security researchers have shown in recent research, even when the data is encrypted. Researchers at West Point, Cornell Tech, and Tel Aviv University, for instance, used that feature as part of a technique to figure out what movie someone was watching on Netflix, despite Netflix’s use of HTTPS encryption.

The encrypted video streamed by a drone back to its operator is vulnerable to the same kind of analysis, the Ben Gurion researchers say. In their tests, they used a “smart film” to toggle the opacity of several panes of a house’s windows while a DJI Mavic quadcopter watched it from the sky, changing the panes from opaque to transparent and back again in an on-off pattern. Then they showed that with just a parabolic antenna and a laptop, they could intercept the drone’s radio signals to its operator and find that same pattern in the drone’s encrypted data stream to show that the drone must have been looking at the house.

 

By changing the opacity of a “smart film” material over a target house’s window panes, the researchers could produce a recognizable pattern in the encrypted video communications of a drone watching that house.

 

In another test, they put blinking LED lights on a test subject’s shirt, and then were able to pull out the binary code for “SOS” from an encrypted video focused on the person, showing that they could even potentially “watermark” a drone’s video feed to prove that it spied on a specific person or building.
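The detection step in both experiments can be sketched as a correlation between the known on/off stimulus and the per-interval byte counts pulled from the intercepted stream. The numbers below are hypothetical, and this is only an illustration of the idea, not the researchers' code:

```python
def pattern_score(byte_counts, stimulus):
    """Pearson correlation between the known stimulus (+1 = film transparent /
    LED on, -1 = opaque / LED off, one value per interval) and the intercepted
    per-interval traffic volume. A score near 1 means the drone's video
    stream is modulated by our stimulus, i.e. the drone is watching."""
    n = len(stimulus)
    mean_b = sum(byte_counts) / n
    mean_s = sum(stimulus) / n
    cov = sum((b - mean_b) * (s - mean_s) for b, s in zip(byte_counts, stimulus))
    sd_b = sum((b - mean_b) ** 2 for b in byte_counts) ** 0.5
    sd_s = sum((s - mean_s) ** 2 for s in stimulus) ** 0.5
    return cov / (sd_b * sd_s)

stimulus  = [+1, -1, +1, -1, +1, -1, +1, -1]  # smart-film toggling schedule
# Hypothetical per-interval byte counts from the drone's FPV channel:
watching  = [9100, 2100, 8800, 2400, 9300, 2000, 8900, 2200]  # tracks stimulus
elsewhere = [5100, 4900, 5300, 5200, 4800, 5000, 5100, 4900]  # does not

watching_score = pattern_score(watching, stimulus)    # close to 1.0
elsewhere_score = pattern_score(elsewhere, stimulus)  # close to 0.0
```

The same scoring works whether the stimulus is a flickering window pane or an LED "watermark" on a person's shirt.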


All of that may seem like an elaborate setup to catch a spy drone in the act, when it could far more easily be spotted with a decent pair of binoculars. But Nassi argues that the technique works at ranges where it’s difficult to spot a drone in the sky at all, let alone determine precisely where its camera is pointed. They tested their method from a range of about 150 feet, but he says that with a more expensive antenna, a range of more than a mile is possible. And while radar or other radio techniques can identify a drone’s presence at that range, he says only the Ben Gurion researchers’ trick can actually determine where it’s looking. “To really understand what’s being captured, you have to use our method,” Nassi says.

Rigging your house—or body—with blinking LEDs or smart film panels would ask a lot of the average drone-wary civilian, notes Peter Singer, an author and fellow at the New America Foundation who focuses on military and security technology. But Singer suggests the technique could benefit high-security facilities trying to hide themselves from flying snoops. “It might have less implications for personal privacy than for corporate or government security,” Singer says.

DJI didn’t respond to WIRED’s request for comment. Nor did Parrot, whose drones Nassi says would also be susceptible to their technique.

If the Ben Gurion researchers’ technique were widely adopted, determined drone spies would no doubt find ways to circumvent the trick. The researchers note themselves that drone-piloting spies could potentially defeat their technique by, for instance, using two cameras: one for navigation with first-person streaming, and one for surveillance that stores its video locally. But Nassi argues that countermeasure, or others that “pad” video stream data to better disguise it, would come at a cost of real-time visibility or resolution for the drone operator.

The spy-versus-spy game of aerial drone surveillance is no doubt just getting started. But for the moment, at least, the Israeli researchers’ work could give spying targets an unexpected new way to watch the watchers—through their own airborne eyes.

 

Source: Wired

Desktop Scanners Can Be Hijacked to Perpetrate Cyberattacks, According to BGU and Weizmann Institute Researchers

A typical office scanner can be infiltrated and a company’s network compromised using different light sources, according to a new paper by researchers from BGU and the Weizmann Institute of Science.

“In this research, we demonstrated how to use a laser or smart bulb to establish a covert channel between an outside attacker and malware installed on a networked computer,” says Ben Nassi, a graduate student in BGU’s Department of Software and Information Systems Engineering as well as a researcher at BGU’s Cyber Security Research Center (CSRC).  “A scanner with the lid left open is sensitive to changes in the surrounding light and might be used as a back door into a company’s network.”

The researchers conducted several demonstrations in which they transmitted a message into computers connected to a flatbed scanner. Using direct laser light from as far as 900 meters (more than half a mile) away, as well as from a laser mounted on a drone outside their office building, the researchers successfully sent a message that triggered malware through the scanner.

 

In another demonstration, the researchers used a Galaxy 4 smartphone to hijack a smart lightbulb (using radio signals) in the same room as the scanner. Using a program they wrote, they manipulated the smart bulb to emit pulsating light that delivered the triggering message in a matter of seconds.
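On the receiving side, such a channel reduces to sampling the brightness the open-lidded scanner captures and thresholding it into bits. The sketch below is a hypothetical illustration of that decoding step, not the researchers' implementation; the sample values and the one-byte command are invented for the example:

```python
def decode_light_pulses(brightness, threshold=None):
    """Recover bits from a sequence of brightness samples taken while the
    scanner lid is open: a bright interval reads as 1, a dark one as 0."""
    if threshold is None:
        threshold = (max(brightness) + min(brightness)) / 2
    return [1 if b > threshold else 0 for b in brightness]

def bits_to_text(bits):
    """Pack decoded bits into ASCII characters, 8 bits at a time."""
    chars = []
    for i in range(0, len(bits) - 7, 8):
        chars.append(chr(int("".join(map(str, bits[i:i + 8])), 2)))
    return "".join(chars)

# Hypothetical brightness readings encoding the ASCII byte 0b01100101 ("e"),
# with some sensor noise in the values.
samples = [12, 240, 235, 10, 8, 230, 14, 238]
bits = decode_light_pulses(samples)
message = bits_to_text(bits)
```

Whether the light source is a distant laser or a hijacked smart bulb, the malware on the scanner-connected PC only needs this thresholding logic to receive its commands.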

To mitigate this vulnerability, the researchers recommend organizations connect a scanner to the network through a proxy server — a computer that acts as an intermediary — which would prevent establishing a covert channel. This might be considered an extreme solution, however, since it also limits printing and faxing remotely on all-in-one devices.

“We believe this study will increase the awareness to this threat and result in secured protocols for scanning that will prevent an attacker from establishing such a covert channel through an external light source, smart bulb, TV, or other IoT (Internet of Things) device,” Nassi says.

Prof. Adi Shamir of the Department of Applied Mathematics at the Weizmann Institute conceived of the project to identify new network vulnerabilities by establishing a clandestine channel in a computer network.

Ben Nassi’s Ph.D. research advisor is Prof. Yuval Elovici​, a member of the BGU Department of Software and Information Systems Engineering and director of the Deutsche Telekom Innovation ​Laboratories at BGU. Elovici is also director of the CSRC.​​

Source Link

Cameras Can Steal Data from Computer Hard Drive LED Lights

Researchers at BGU’s Cyber Security Research Center have demonstrated that data can be stolen from an isolated, “air-gapped” computer by reading the pulses of light from its hard drive’s LED with various types of cameras and light sensors.

In the new paper, the researchers demonstrated how the data can be received by a camera mounted on a quadcopter drone flown outside a window with line-of-sight to the transmitting computer. A video of the demonstration is also available.

Air-gapped computers are isolated — separated both logically and physically from public networks — ostensibly so that they cannot be hacked over the Internet or within company networks. These computers typically contain an organization’s most sensitive and confidential information.

Led by Dr. Mordechai Guri (pictured above), Head of R&D at the Cyber Security Research Center, the research team utilized the hard-drive (HDD) activity LED lights that are found on most desktop PCs and laptops. The researchers found that once malware is on a computer, it can indirectly control the HDD LED, turning it on and off rapidly (thousands of flickers per second) — a rate that exceeds the human visual perception capabilities. As a result, highly sensitive information can be encoded and leaked over the fast LED signals, which are received and recorded by remote cameras or light sensors.
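The encoding amounts to simple on-off keying. The following sketch simulates it end to end; in the real attack the malware drives the LED with bursts of disk activity and the receiver is a camera or light sensor, whereas here the "timeline" is just a list of light levels:

```python
def led_timeline(bits, samples_per_bit=4):
    """On-off-keyed LED schedule: each bit holds the LED on (1) or off (0)
    for a fixed window. A stand-in for driving the HDD LED with bursts of
    disk reads."""
    return [bit for bit in bits for _ in range(samples_per_bit)]

def receiver(samples, samples_per_bit=4):
    """Camera-side decoder: average the observed light level over each bit
    window and threshold it back into a bit."""
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        window = samples[i:i + samples_per_bit]
        bits.append(1 if sum(window) / len(window) > 0.5 else 0)
    return bits

secret = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = receiver(led_timeline(secret))  # == secret
```

Because the LED can flicker thousands of times per second, many such bit windows fit into an interval a human observer perceives as ordinary drive activity.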

“Our method compared to other LED exfiltration is unique, because it is also covert,” Dr. Guri says. “The hard drive LED flickers frequently, and therefore the user won’t be suspicious about changes in its activity.”

Dr. Guri and the Cyber Security Research Center have conducted a number of studies to demonstrate how malware can infiltrate air-gapped computers and transmit data. Previously, they showed that computer speakers, fans, FM waves and heat can all be used to exfiltrate data.

In addition to Dr. Guri, the other BGU researchers include Boris Zadov, who received his M.Sc. degree from the Department of Electrical and Computer Engineering and Prof. Yuval Elovici, director of the Cyber Security Research Center. Prof. Elovici is also a member of the University’s Department of Software and Information Systems Engineering and Director of Deutsche Telekom Laboratories at BGU.

Link to original

Global entities come shopping for Israeli cybersecurity

At Tel Aviv confab, prime minister announces new National Center for Cyber Education to keep Israel’s young generations at the top of the cyber game.

As computer devices and Internet of Things (IoT) connectivity continue to break new boundaries and change our lifestyles, new cybersecurity technologies to defend our tech-savvy lives are crucial.

“Not many years ago, computers were far away. Then they came to our desktops, then to our laptops, and then to our pockets; now they’re in our clothes and, for some, in our body — medical devices. All this needs to be defended,” Erez Kreiner, CEO of Cyber-Rider and former director of Israel’s National Cyber Security Authority, told a press gathering at this week’s Cybertech 2017 conference in Tel Aviv.

He noted that Israel is the place to find many of the best cybersecurity products.

Last year saw 65 startups created in Israel’s cyber space, according to Start-Up Nation Central, a nonprofit organization. Altogether, the country boasts about 450 companies specializing in cyber, according to a Reuters report.

Israel’s venture-capital funding in the cyber sector, according to Start-Up Nation Central, is a record $581 million, second only to the United States.

YL Ventures’ report showing the hottest types of cybersecurity solutions to attract investment in 2016 included mobile security, vulnerability and risk management, network security, SCADA security and incident response.

“We’re still at the beginning for the cyber arena. We still need the security solution for smart homes, we still don’t have security solutions for autonomous cars, or for connected medical devices or MRI machines, or for connected kitchen appliances. Every technology that will be introduced to our lives in the coming years will need a cyber solution,” says Kreiner.

Indeed, our digital society makes us vulnerable to external threats of cyber terror, cybercrime and identity theft.

Control systems, online banking, networks, databases and electronic devices are all susceptible to attack.

“In the cyber arena, I’d say we’re in the September 10th zone,” says Kreiner. “We know very bad things can happen. So we invest in cybersecurity but still we’re very much on the edge.”

In search of Israeli innovation

Cybertech 2017, held for its fourth year at the Israel Trade Fairs & Convention Center, attracted over 10,000 visitors, investors, entrepreneurs and cyber companies. Cybertech is the second largest conference and exhibition of cyber technologies in the world.

Visitors come seeking the latest in cybersecurity. After all, Israelis came up with the concept of firewall security before hackers even started attacking personal computers.


Gil Shwed, founder and CEO of Check Point Software Technologies, a pioneer of firewall security, speaking at Cybertech 2017. Photo by Gilad Kavalerchik

“There are a lot of global innovators in cybersecurity. But if I were to put a bet on it, I would bet on Israel,” Esti Peshin, director of Cyber Programs at Israel Aerospace Industries, tells ISRAEL21c about where the best new technologies will come from.

Calls for collaboration echoed around the Trade Fairs hall.

Former Mossad senior officer Haim Tomer says “every country has felt the effects of cyber attacks.”

“What you see today is going to get a lot worse in the future if we don’t band together,” Prime Minister Benjamin Netanyahu told conference attendees.

“Terrorist organizations use the same tools we use – against us,” said Netanyahu. “The Internet of Things can be used by these terrorist organizations for dangerous purposes. Unless we work together and cooperate, the future can be very menacing. In this context, Israel, the US and other countries should cooperate at the government level as well as among the industries.”

Nanyang Technological University (NTU) of Singapore and Ben-Gurion University of the Negev (BGU) announced a new collaboration to develop technologies for tackling advanced persistent threats (APTs).

“BGU and NTU recognize the grave necessity of stopping APTs, which are some of the hardest cyber attacks to detect, and have allocated significant funding over two years to develop early detection methods,” said BGU Prof. Dan Blumberg. “Cyber security is a global threat which has become a research topic of increasing interest at BGU and we are pleased to be collaborating with our partners in Singapore to stem the tide.”


Yuval Elovici of Ben-Gurion University’s Cyber Security Research Center speaking with the press at Cybertech 2017. Photo by Viva Sarah Press

Yuval Elovici, head of BGU Cyber Security Research Center, told journalists that the research and patented technology developed at the university are used to create new prevention and detection tools.

Elovici gave an example of how smartwatches can be hacked, and when worn into a secure environment, end up compromising the organization.

“The vulnerabilities are great,” says Elovici, noting his research team is now creating a solution to alert organizations to new devices that enter their secure space. “We’re developing mechanisms so that we can continue to live with IoT and still keep safe.”

At the BGU exhibit area, two prominent examples of research-to-startup success include Morphisec, which is now opening a US office, and Double Octopus, which recently announced a $6 million investment round. Both companies developed cyber security prevention and detection tools based on patented technology originating out of Ben-Gurion University of the Negev.

Israel’s decision some 20 years ago to put cyber at the top of the agenda was crucial to the country’s standing as a world cybersecurity leader today. To further that vision and to keep Israel’s new generation at the top of the cyber game, Netanyahu announced the creation of a National Center for Cyber Education.

The new center will have a $6 million budget over the next five years, to “increase the number and raise the level of young Israelis for their future integration into the Israeli security services, industry and the academic world.”

Link to original

Clever Attack Uses the Sound of a Computer’s Fan to Steal Data

IN THE PAST two years a group of researchers in Israel has become highly adept at stealing data from air-gapped computers—those machines prized by hackers that, for security reasons, are never connected to the internet or to other internet-connected machines, making it difficult to extract data from them.

Mordechai Guri, manager of research and development at the Cyber Security Research Center at Ben-Gurion University, and colleagues at the lab, have previously designed three attacks that use various methods for extracting data from air-gapped machines—methods involving radio waves, electromagnetic waves and the GSM network, and even the heat emitted by computers.

Now the lab’s team has found yet another way to undermine air-gapped systems using little more than the sound emitted by the cooling fans inside computers. Although the technique can only be used to steal a limited amount of data, it’s sufficient to siphon encryption keys and lists of usernames and passwords, as well as small amounts of keylogging histories and documents, from more than two dozen feet away. The researchers, who have described the technical details of the attack in a paper (.pdf), have so far been able to siphon encryption keys and passwords at a rate of 15 to 20 bits per minute—up to 1,200 bits per hour—but are working on methods to accelerate the data extraction.

“We found that if we use two fans concurrently [in the same machine], the CPU and chassis fans, we can double the transmission rates,” says Guri, who conducted the research with colleagues Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici, director of the Telekom Innovation Laboratories at Ben-Gurion University. “And we are working on more techniques to accelerate it and make it much faster.”

The Air-Gap Myth

Air-gapped systems are used in classified military networks, financial institutions and industrial control system environments such as factories and critical infrastructure to protect sensitive data and networks. But such machines aren’t impenetrable. To steal data from them an attacker generally needs physical access to the system—using either removable media like a USB flash drive or a firewire cable connecting the air-gapped system to another computer. But attackers can also exploit near-physical access, using one of the covert methods the Ben-Gurion researchers and others have devised in the past.


One of these methods involves using sound waves to steal data. For this reason, many high-security environments not only require sensitive systems be air-gapped, they also require that external and internal speakers on the systems be removed or disabled to create an “audio gap”. But by using a computer’s cooling fans, which also produce sound, the researchers found they were able to bypass even this protection to steal data.

Most computers contain two or more fans—including a CPU fan, a chassis fan, a power supply fan, and a graphics card fan. While operating, the fans generate an acoustic tone known as blade pass frequency that gets louder with speed. The attack involves increasing the speed or frequency of one or more of these fans to transmit the digits of an encryption key or password to a nearby smartphone or computer, with different speeds representing the binary ones and zeroes of the data the attackers want to extract—for their test, the researchers used 1,000 RPM to represent 1, and 1,600 RPM to represent 0.
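That scheme amounts to binary frequency-shift keying over fan speed. The sketch below simulates it using the two RPM values from the researchers' test; the blade count is an assumption made for illustration, since the blade pass frequency depends on how many blades the fan has:

```python
BLADES = 7  # typical CPU-fan blade count (an assumption for illustration)
RPM_FOR = {1: 1000, 0: 1600}  # the speeds used in the researchers' test

def blade_pass_hz(rpm, blades=BLADES):
    """Fundamental acoustic tone of a spinning fan:
    rotations per second times number of blades."""
    return rpm / 60.0 * blades

def modulate(bits):
    """Transmitter (malware) side: map each bit to a fan speed and hence to
    the acoustic tone the fan will emit."""
    return [blade_pass_hz(RPM_FOR[b]) for b in bits]

def demodulate(tones):
    """Receiver (smartphone) side: classify each observed tone against the
    midpoint between the two known speeds."""
    cutoff = blade_pass_hz((1000 + 1600) / 2)
    return [1 if t < cutoff else 0 for t in tones]

key_bits = [1, 1, 0, 1, 0, 0, 0, 1]
recovered_bits = demodulate(modulate(key_bits))  # == key_bits
```

At 1,000 RPM a seven-bladed fan hums at roughly 117 Hz and at 1,600 RPM at roughly 187 Hz, which is why a microphone within a few meters can tell the two symbols apart.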

The attack, like all previous ones the researchers have devised for air-gapped machines, requires the targeted machine first be infected with malware—in this case, the researchers used proof-of-concept malware they created called Fansmitter, which manipulates the speed of a computer’s fans. Getting such malware onto air-gapped machines isn’t an insurmountable problem; real-world attacks like Stuxnet and Agent.btz have shown how sensitive air-gapped machines can be infected via USB drives.

To receive the sound signals emitted from the target machine, an attacker would also need to infect the smartphone of someone working near the machine using malware designed to detect and decode the sound signals as they’re transmitted and then send them to the attacker via SMS, Wi-Fi, or mobile data transfers. The receiver needs to be within eight meters or 26 feet of the targeted machine, so in secure environments where workers aren’t allowed to bring their smartphones, an attacker could instead infect an internet-connected machine that sits in the vicinity of the targeted machine.

Normally, fans operate at between a few hundred and a few thousand RPM. To prevent workers in a room from noticing fluctuations in the fan noise, an attacker could use lower frequencies to transmit the data, or use what’s known as close frequencies, frequencies that differ by only 100 Hz or so, to signify the binary 1s and 0s. In both cases, the fluctuating speed would simply blend in with the natural background noise of a room.

“The human ear can barely notice [this],” Guri says.

The receiver, however, is much more sensitive and can even pick up the fan signals in a room filled with other noise, like voices and music.

The beauty of the attack is that it will also work with systems that have no acoustic hardware or speakers by design, such as servers, printers, internet of things devices, and industrial control systems.

The attack will even work on multiple infected machines transmitting at once. Guri says the receiver would be able to distinguish signals coming from fans in multiple infected computers simultaneously because the malware on those machines would transmit the signals on different frequencies.

There are methods to mitigate fan attacks—for example, by using software to detect changes in fan speed or hardware devices that monitor sound waves—but the researchers say they can produce false alerts and have other drawbacks.

Guri says they are “trying to challenge this assumption that air-gapped systems are secure,” and are working on still more ways to attack air-gapped machines. They expect to have more research done by the end of the year.

 

Source

A Bug in Chrome Makes It Easy to Pirate Movies

FOR YEARS HOLLYWOOD has waged a war on piracy, using digital rights management technologies to fight bootleggers who illegally copy movies and distribute them. For just as long, hackers have found ways to bypass these protections. Now two security researchers have found a new way, using a vulnerability in the system Google uses to stream media through its Chrome browser. They say people could exploit the flaw to save illegal copies of movies they stream on Chrome using sites like Netflix or Amazon Prime.

David Livshits from the Cyber Security Research Center at Ben-Gurion University in Israel and Alexandra Mikityuk with Telekom Innovation Laboratories in Berlin, Germany, alerted Google to the problem on May 24th, but Google has yet to issue a patch. The vulnerability exists in the way Google implements the Widevine EME/CDM technology that Chrome uses to stream encrypted video. The researchers created a proof-of-concept executable file that easily exploits the vulnerability, and produced a brief video to demonstrate it in action.

DRM Hole
The problem is with the implementation of a digital rights management system called Widevine, which Google owns but did not create. It uses encrypted media extensions (EME) to allow the content decryption module (CDM) in your browser to communicate with the content protection systems of Netflix and other streaming services to deliver their encrypted movies to you. EME handles the key or license exchange between the protection systems of content providers and the CDM component in your browser. When you choose a protected movie to play, the CDM sends a license request to the provider through the EME interface and receives a license in return, which allows the CDM to decrypt the video and send it to your browser player to stream the decrypted content.

A good DRM system should protect that decrypted data and only let you stream the content in your browser, but Google’s system lets you copy it as it streams. The point at which you can hijack the decrypted movie is right after the CDM decrypts the film and is passing it to the player for streaming.

The researchers say the bug is very simple but won’t reveal details about it until at least 90 days after their disclosure to Google, since they don’t want to provide anyone who doesn’t already know about the vulnerability with information that would allow them to steal movies. Ninety days is the minimum that Google’s own security researchers on its Project Zero team give vendors to fix vulnerabilities they uncover before they disclose the bugs publicly.

Livshits and Mikityuk believe the issue can be fixed easily with a Chrome patch. But if Google wants to fix the issue and also mitigate against future vulnerabilities that might be uncovered in its Widevine DRM system, it would need to design the CDM so that it runs inside what’s called a Trusted Execution Environment or TEE. The TEE would act like a protective tunnel so that the decrypted content is written to a protected memory space, preventing someone from hijacking the content as it’s going to the player.

Asked about the vulnerability, a Google spokesman told WIRED that they’re examining the issue closely, but he also downplayed the bug, saying the problem is not exclusive to Chrome and could apply to any browser created from Chromium, the open-source code from which Chrome is derived.

“Chrome has long been an open-source project and developers have been able to create their own versions of the browser that, for example, may use a different CDM or include modified CDM rendering paths,” the spokesman wrote WIRED in an email.

What he meant is that the hijacking problem has long been known and that even if Google were to add code that forces the CDM to operate in a different way, other browsers that developers might compile from the Chromium code base could eliminate this code, leaving streaming content just as vulnerable and therefore not solving the problem of content hijacking.

The lab researchers say Google’s response is baffling: just because other developers could produce a different browser that doesn’t incorporate more secure measures doesn’t mean Google shouldn’t fix the problem in its own Chrome browser.

“[A] vulnerability in the product of Google which is distributed by Google, and users and [movie] studios expect to be secure, should be highly prioritized and fixed to prevent theft of protected content,” says Dudu Mimran, CTO of the lab in Israel where one of the researchers works.

Livshits and Mikityuk found the bug about eight months ago. It’s apparently existed ever since Google embedded the Widevine technology in its browser, though it’s not clear when that occurred. “The way the vulnerability works, it makes sense that it existed from the early days,” says Mimran. The tech giant acquired Widevine in 2010 to secure Chrome streams and premium YouTube channels. Widevine is also embedded in more than 2 billion devices that play protected content, according to its web site.

Firefox and Opera also use the Widevine CDM, though the researchers haven’t examined those browsers yet; they limited their research to the desktop version of Chrome. Neither Safari nor Internet Explorer uses Widevine. Safari uses Apple’s FairPlay CDM, while Microsoft’s Internet Explorer and Edge browsers use Microsoft’s PlayReady CDM; the researchers haven’t examined those CDMs either.

It’s not the first time flaws have been uncovered in a digital rights management system. In 2001 Russian programmer Dmitry Sklyarov discovered vulnerabilities in the encryption system Adobe used for protecting electronic books produced with Adobe Acrobat. That same year a group of researchers found flaws in the digital watermarking technology created by the Secure Digital Music Initiative, a consortium of recording companies and consumer electronics firms, to thwart piracy.

But the Chrome vulnerability is different in that it involves a third-party system that streamers are trusting to protect their valuable content.

“The simplicity of stealing protected content with our approach poses a serious risk for Hollywood [studios] which rely on such technologies to protect their assets,” Livshits says. Though the researchers have no way of knowing if this hole has been used in the real world, it shows that the battle to fight piracy continues on ever shifting territory.

 

Source

Great. Now Even Your Headphones Can Spy on You

CAUTIOUS COMPUTER USERS put a piece of tape over their webcam. Truly paranoid ones worry about their devices’ microphones—some even crack open their computers and phones to disable or remove those audio components so they can’t be hijacked by hackers. Now one group of Israeli researchers has taken that game of spy-versus-spy paranoia a step further, with malware that converts your headphones into makeshift microphones that can slyly record your conversations.

Researchers at Israel’s Ben Gurion University have created a piece of proof-of-concept code they call “Speake(a)r,” designed to demonstrate how determined hackers could find a way to surreptitiously hijack a computer to record audio even when the device’s microphones have been entirely removed or disabled. The experimental malware instead repurposes the speakers in earbuds or headphones to use them as microphones, converting the vibrations in air into electromagnetic signals to clearly capture audio from across a room.

“People don’t think about this privacy vulnerability,” says Mordechai Guri, the research lead of Ben Gurion’s Cyber Security Research Labs. “Even if you remove your computer’s microphone, if you use headphones you can be recorded.”

It’s no surprise that earbuds can function as microphones in a pinch, as dozens of how-to YouTube videos demonstrate. Just as the speakers in headphones turn electromagnetic signals into sound waves through a membrane’s vibrations, those membranes can also work in reverse, picking up sound vibrations and converting them back to electromagnetic signals. (Plug a pair of mic-less headphones into an audio input jack on your computer to try it.)

But the Ben Gurion researchers took that hack a step further. Their malware uses a little-known feature of RealTek audio codec chips to silently “retask” the computer’s output channel as an input channel, allowing the malware to record audio even when the headphones remain plugged into an output-only jack and don’t even have a microphone channel on their plug. The researchers say the RealTek chips are so common that the attack works on practically any desktop computer, whether it runs Windows or MacOS, and most laptops, too. RealTek didn’t immediately respond to WIRED’s request for comment on the Ben Gurion researchers’ work. “This is the real vulnerability,” says Guri. “It’s what makes almost every computer today vulnerable to this type of attack.”
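To make the retasking idea concrete, here is a purely illustrative toy model in Python; nothing below is RealTek’s actual register interface, and the class and method names are invented for the sketch. The point it captures is that on the vulnerable codecs the jack’s role is a mutable software setting rather than a fixed hardware property:

```python
# Toy model of audio-jack "retasking" (illustrative only; not RealTek's
# real interface). A pin's direction is just a software-settable field,
# so an output-only headphone jack can be silently flipped to an input.

class CodecPin:
    def __init__(self, role):
        self.role = role  # "output" or "input"

    def retask(self, new_role):
        # On vulnerable codecs this corresponds to a silent register
        # write; the user sees no indication the jack changed roles.
        self.role = new_role

    def sample(self, membrane_signal):
        """Read the voltage a vibrating headphone membrane induces."""
        if self.role != "input":
            raise RuntimeError("pin is not configured as an input")
        return membrane_signal  # stand-in for a real ADC read

jack = CodecPin("output")                 # a normal headphone jack
jack.retask("input")                      # malware flips it to a mic
print(jack.sample([0.01, -0.02, 0.03]))   # captured "audio" samples
```

The defense implication follows directly from the model: as long as the role is writable in software, no amount of physically removing microphones changes what the jack can be made to do.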

To be fair, the eavesdropping attack should only matter to those who have already gone a few steps down the rabbit-hole of obsessive counter-intelligence measures. But in the modern age of cybersecurity, fears of having your computer’s mic surreptitiously activated by stealthy malware are increasingly mainstream: Guri points to the photo that revealed earlier this year that Mark Zuckerberg had put tape over his laptop’s microphone. In a video for Vice News, Edward Snowden demonstrated how to remove the internal mic from a smartphone. Even the NSA’s information assurance division suggests “hardening” PCs by disabling their microphones, and repair-oriented site iFixit’s Kyle Wiens showed MacWorld in July how to physically disable a Macbook mic. None of those techniques—short of disabling all audio input and output from a computer—would defeat this new malware. (Guri says his team has so far focused on using the vulnerability in RealTek chips to attack PCs, though. They have yet to determine which other audio codec chips and smartphones might be vulnerable to the attack, but believe other chips and devices are likely also susceptible.)

In their tests, the researchers tried the audio hack with a pair of Sennheiser headphones. They found that they could record from as far as 20 feet away—and even compress the resulting recording and send it over the internet, as a hacker would—and still distinguish the words spoken by a male voice. “It’s very effective,” says Guri. “Your headphones do make a good, quality microphone.”

There’s no simple software patch for the eavesdropping attack, Guri says. The property of RealTek’s audio codec chips that allows a program to switch an output channel to an input isn’t an accidental bug so much as a dangerous feature, Guri says, and one that can’t be easily fixed without redesigning and replacing the chip in future computers.

Until then, paranoiacs take note: If determined hackers are out to bug your conversations, all your careful microphone removal surgery isn’t quite enough—you’ll also need to unplug that pair of cheap listening devices hanging around your neck.

 

Source

Because computers may contain or interact with sensitive information, they are often air-gapped and in this way kept isolated and disconnected from the Internet. In recent years the ability of malware to communicate over an air-gap by transmitting sonic and ultrasonic signals from a computer speaker to a nearby receiver has been shown. In order to eliminate such acoustic channels, current best practice recommends the elimination of speakers (internal or external) in secure computers, thereby creating a so-called ‘audio-gap’.
In this work, we present Fansmitter, a malware that can acoustically exfiltrate data from air-gapped computers, even when audio hardware and speakers are not present. Our method utilizes the noise emitted by the CPU and chassis fans, which are present in virtually every computer today. We show that software can regulate the internal fans’ speed in order to control the acoustic waveform emitted from a computer. Binary data can be modulated and transmitted over these audio signals to a remote microphone (e.g., on a nearby mobile phone). We present Fansmitter’s design considerations, including acoustic signature analysis, data modulation, and data transmission. We also evaluate the acoustic channel, present our results, and discuss countermeasures. Using our method we successfully transmitted data from an air-gapped computer without audio hardware to a smartphone receiver in the same room. We demonstrated the effective transmission of encryption keys and passwords from a distance of zero to eight meters, at a bit rate of up to 900 bits/hour. We show that our method can also be used to leak data from different types of IT equipment, embedded systems, and IoT devices that have no audio hardware but contain fans of various types and sizes.
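The abstract’s modulation idea can be sketched in a few lines: key each bit onto one of two fan-speed levels, hold it for a symbol slot, and let the receiver threshold the observed speed. The RPM levels and dwell time below are hypothetical placeholders, not the paper’s parameters; only the 900 bits/hour peak rate comes from the abstract.

```python
# Illustrative sketch of Fansmitter-style binary frequency keying over
# fan speed (not the authors' code): bit 1 -> high RPM, bit 0 -> low
# RPM; the receiver thresholds the acoustic estimate of fan speed.

LOW_RPM, HIGH_RPM = 1000, 1600   # hypothetical fan-speed levels
SYMBOL_SECONDS = 4               # hypothetical dwell time per bit

def modulate(bits):
    """Map each bit to a target fan speed held for one symbol slot."""
    return [HIGH_RPM if b == "1" else LOW_RPM for b in bits]

def demodulate(rpm_trace):
    """Decide each bit by thresholding halfway between the levels."""
    mid = (LOW_RPM + HIGH_RPM) / 2
    return "".join("1" if r > mid else "0" for r in rpm_trace)

def hours_to_send(n_bits, bits_per_hour=900):
    """Transfer time at the paper's reported peak rate."""
    return n_bits / bits_per_hour

key = "1011001110001011"
assert demodulate(modulate(key)) == key
# A 4096-bit key at 900 bits/hour takes about 4.6 hours to leak:
print(round(hours_to_send(4096), 1))  # -> 4.6
```

The arithmetic explains why the channel suits small secrets: at 900 bits/hour, passwords and encryption keys leak in hours, while bulk data is impractical.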

DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise. By security researchers Mordechai Guri, Yosef Solewicz, Andrey Daidakulov and Yuval Elovici

Link: https://arxiv.org/abs/1608.03431

Air-gapped computers are disconnected from the Internet physically and logically. This measure is taken in order to prevent the leakage of sensitive data from secured networks. In the past, it has been shown that malware can exfiltrate data from air-gapped computers by transmitting ultrasonic signals via the computer’s speakers. However, such acoustic communication relies on the availability of speakers on a computer.

 

In this work, we present ‘DiskFiltration,’ a covert channel which facilitates the leakage of data from an air-gapped computer via acoustic signals emitted from its hard disk drive (HDD). Our method is unique in that, unlike other acoustic covert channels, it doesn’t require the presence of speakers or audio hardware in the air-gapped computer. A malware installed on a compromised machine can generate acoustic emissions at specific audio frequencies by controlling the movements of the HDD’s actuator arm. Digital information can be modulated over the acoustic signals and then be picked up by a nearby receiver (e.g., smartphone, smartwatch, laptop, etc.). We examine the HDD anatomy and analyze its acoustical characteristics. We also present signal generation and detection, and data modulation and demodulation algorithms. Based on our proposed method, we developed a transmitter on a personal computer and a receiver on a smartphone, and we provide the design and implementation details. We also evaluate our covert channel on various types of internal and external HDDs in different computer chassis and at various distances. With DiskFiltration we were able to covertly transmit data (e.g., passwords, encryption keys, and keylogging data) from air-gapped computers to a smartphone at an effective bit rate of 180 bits/minute (10,800 bits/hour) and a distance of up to two meters (six feet).
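Where Fansmitter keys bits onto two fan speeds, the DiskFiltration channel can be pictured as on-off keying: a “1” is a burst of actuator-arm seek noise during a symbol slot, a “0” is silence. The sketch below is an illustration of that scheme, not the authors’ implementation; the energy levels and threshold are invented, and only the 180 bits/minute rate comes from the abstract.

```python
# Illustrative on-off-keying sketch of a DiskFiltration-style channel
# (not the authors' code): the receiver compares the acoustic energy
# measured in each symbol slot against a fixed decision threshold.

NOISE_ENERGY, IDLE_ENERGY = 0.8, 0.05  # hypothetical normalized levels
THRESHOLD = 0.4                        # hypothetical decision boundary

def transmit(bits):
    """A '1' drives the actuator arm (noise); a '0' keeps it idle."""
    return [NOISE_ENERGY if b == "1" else IDLE_ENERGY for b in bits]

def receive(energy_per_slot):
    """Decide each bit by thresholding the measured acoustic energy."""
    return "".join("1" if e > THRESHOLD else "0" for e in energy_per_slot)

def seconds_to_send(n_bits, bits_per_minute=180):
    """Transfer time at the paper's reported effective rate."""
    return 60.0 * n_bits / bits_per_minute

msg = "0100100001101001"   # "Hi" in ASCII, two bytes
assert receive(transmit(msg)) == msg
print(seconds_to_send(len(msg)))  # ~5.3 seconds for 16 bits
```

At 180 bits/minute the channel is roughly twelve times faster than Fansmitter’s 900 bits/hour, which matches the abstracts’ figures: a short password leaks in seconds rather than minutes.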

About Us

Cyber@BGU is an umbrella organization at Ben Gurion University and home to various cyber security, big data analytics and applied AI research activities. Residing in a newly established R&D center at the new Hi-Tech park of Beer Sheva (Israel's Cyber Capital), Cyber@BGU serves as a platform for the most innovative and technologically challenging projects with various industrial and governmental partners.

Latest Publications

Learning Software Behavior for Automated Diagnosis

Ori Bar-Ilan, Roni Stern and Meir Kalech

The Twenty Seventh International Workshop on Principles of Diagnosis (DX-17), 2017


Software diagnosis algorithms aim to identify the faulty software components that caused a failure. A key challenge for existing software diagnosis algorithms is how to prioritize the diagnoses they output. To do so, previous work proposed a method for estimating the likelihood that each diagnosis is correct. Computing these diagnosis likelihoods is nontrivial. We propose to do this by learning a behavior model of the software components and using it to identify abnormally behaving components. In this work we show the potential of such an approach by performing an empirical evaluation on a synthetic behavior model of the components. The results show that even an imperfect behavior model is useful in improving diagnosis accuracy and minimizing wasted troubleshooting effort.
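The prioritization idea in the abstract can be illustrated with a toy ranking function; this is not the paper’s algorithm, and the component names and abnormality scores below are invented. A learned behavior model assigns each component an abnormality score for the failing run, and candidate diagnoses (sets of suspected components) are ordered by how abnormal their members looked:

```python
# Toy illustration of diagnosis prioritization: score each candidate
# diagnosis by the product of its members' abnormality scores from a
# (hypothetical) learned behavior model, then rank high-to-low.

def rank_diagnoses(abnormality, diagnoses):
    """Order candidate diagnoses by the product of their members'
    abnormality scores; higher product = more likely faulty."""
    def score(diag):
        p = 1.0
        for comp in diag:
            p *= abnormality[comp]
        return p
    return sorted(diagnoses, key=score, reverse=True)

# Hypothetical scores produced while observing one failing test run:
abnormality = {"parser": 0.9, "cache": 0.2, "logger": 0.05}
candidates = [{"logger"}, {"parser"}, {"cache", "logger"}]
print(rank_diagnoses(abnormality, candidates)[0])  # -> {'parser'}
```

Ranking by such a score is what lets an imperfect behavior model still cut wasted troubleshooting: the engineer inspects the most abnormal components first.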

Download
PDF
In collaboration with IDC Herzliya + Technion and UCLA

Group-Based Secure Computation: Optimizing Rounds, Communication, and Computation

Elette Boyle, Niv Gilboa and Yuval Ishai

Advances in Cryptology - EUROCRYPT 2017, pages 163-193, 2017


A recent work of Boyle et al. (Crypto 2016) suggests that “group-based” cryptographic protocols, namely ones that only rely on a cryptographically hard (Abelian) group, can be surprisingly powerful. In particular, they present succinct two-party protocols for securely computing branching programs and NC1 circuits under the DDH assumption, providing the first alternative to fully homomorphic encryption. In this work we further explore the power of group-based secure computation protocols, improving both their asymptotic and concrete efficiency. We obtain the following results.

– Black-box use of group. We modify the succinct protocols of Boyle et al. so that they only make a black-box use of the underlying group, eliminating an expensive non-black-box setup phase.

– Round complexity. For any constant number of parties, we obtain 2-round MPC protocols based on a PKI setup under the DDH assumption. Prior to our work, such protocols were only known using fully homomorphic encryption or indistinguishability obfuscation.

– Communication complexity. Under DDH, we present a secure 2-party protocol for any NC1 or log-space computation with n input bits and m output bits using n + (1 + o(1))m + poly(λ) bits of communication, where λ is a security parameter. In particular, our protocol can generate n instances of bit-oblivious-transfer using (4 + o(1)) · n bits of communication. This gives the first constant-rate OT protocol under DDH.

– Computation complexity. We present several techniques for improving the computational cost of the share conversion procedure of Boyle et al., improving the concrete efficiency of group-based protocols by several orders of magnitude.
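The stated communication bounds are easy to read back-of-the-envelope. The sketch below drops the o(1) and poly(λ) additive terms, so these are asymptotic approximations rather than exact protocol costs:

```python
# Rough reading of the abstract's communication bounds (asymptotic
# approximation: the o(1) and poly(lambda) terms are dropped).

def two_party_cost(n_in, m_out):
    """n + (1 + o(1))m + poly(lambda) bits, with small terms dropped."""
    return n_in + m_out

def ot_cost(n_instances):
    """(4 + o(1)) * n bits for n bit-oblivious-transfers: the cost per
    OT instance is a constant, which is what 'constant-rate' means."""
    return 4 * n_instances

print(two_party_cost(1024, 256))  # ~1280 bits for a 1024-in/256-out circuit
print(ot_cost(1000))              # ~4000 bits, i.e. ~4 bits per OT
```

The second function is the headline: earlier DDH-based OT protocols needed communication that grew with the security parameter per instance, whereas here each bit-OT costs roughly four bits.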

Download
PDF
In collaboration with Technion and UCLA

Ad Hoc PSM Protocols: Secure Computation Without Coordination

Amos Beimel, Yuval Ishai, Eyal Kushilevitz

Advances in Cryptology - EUROCRYPT 2017, pages 580-608, 2017


We study the notion of ad hoc secure computation, recently introduced by Beimel et al. (ITCS 2016), in the context of the Private Simultaneous Messages (PSM) model of Feige et al. (STOC 1994). In ad hoc secure computation we have n parties that may potentially participate in a protocol but, at the actual time of execution, only k of them, whose identity is not known in advance, actually participate. This situation is particularly challenging in the PSM setting, where protocols are non-interactive (a single message from each participating party to a special output party) and where the parties rely on pre-distributed, correlated randomness (that in the ad-hoc setting will have to take into account all possible sets of participants).

We present several different constructions of ad hoc PSM protocols from standard PSM protocols. These constructions imply, in particular, that efficient information-theoretic ad hoc PSM protocols exist for NC1 and different classes of log-space computation, and efficient computationally-secure ad hoc PSM protocols for polynomial-time computable functions can be based on a one-way function. As an application, we obtain an information-theoretic implementation of order-revealing encryption whose security holds for two messages.

We also consider the case where the actual number of participating parties t may be larger than the minimal k for which the protocol is designed to work. In this case, it is unavoidable that the output party learns the output corresponding to each subset of k out of the t participants. Therefore, a “best possible security” notion, requiring that this will be the only information that the output party learns, is needed. We present connections between this notion and the previously studied notion of t-robust PSM (also known as “non-interactive MPC”). We show that constructions in this setting for even simple functions (like AND or threshold) can be translated into non-trivial instances of program obfuscation (such as point function obfuscation and fuzzy point function obfuscation, respectively). We view these results as a negative indication that protocols with “best possible security” are impossible to realize efficiently in the information-theoretic setting or require strong assumptions in the computational setting.

Download
PDF
In collaboration with IBM

Supervised Detection of Infected Machines Using Anti-virus Induced Labels

Tomer Cohen, Danny Hendler and Dennis Potashnik

CSCML 2017, pages 211-220


Traditional antivirus software relies on signatures to uniquely identify malicious files. Malware writers, on the other hand, have responded by developing obfuscation techniques with the goal of evading content-based detection. A consequence of this arms race is that numerous new malware instances are generated every day, thus limiting the effectiveness of static detection approaches. For effective and timely malware detection, signature-based mechanisms must be augmented with detection approaches that are harder to evade.

We introduce a novel detector that uses the information gathered by IBM’s QRadar SIEM (Security Information and Event Management) system and leverages anti-virus reports for automatically generating a labelled training set for identifying malware. Using this training set, our detector is able to automatically detect complex and dynamic patterns of suspicious machine behavior and issue high-quality security alerts. We believe that our approach can be used for providing a detection scheme that complements signature-based detection and is harder to circumvent.
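The labeling idea can be sketched with a minimal stand-in pipeline. None of this is IBM’s or BGU’s detector: the feature names are invented, and a simple nearest-centroid classifier over per-machine event counts stands in for the real SIEM-driven model. Anti-virus verdicts serve as weak labels for the training set:

```python
# Minimal sketch of anti-virus-induced labeling (illustrative only):
# machines flagged by any AV engine become positive training examples,
# and a nearest-centroid model over behavioral features is fit to them.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def fit(features, av_labels):
    """av_labels: 1 if anti-virus flagged the machine, else 0."""
    pos = [f for f, y in zip(features, av_labels) if y == 1]
    neg = [f for f, y in zip(features, av_labels) if y == 0]
    return centroid(pos), centroid(neg)

def predict(model, x):
    """Assign x to the class of the nearer centroid."""
    pos_c, neg_c = model
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return 1 if dist(pos_c) < dist(neg_c) else 0

# Hypothetical features: [failed logins, outbound conns, new processes]
X = [[2, 5, 3], [40, 90, 55], [1, 4, 2], [35, 80, 60]]
y = [0, 1, 0, 1]          # labels induced from anti-virus reports
model = fit(X, y)
print(predict(model, [38, 85, 50]))  # -> 1 (behaves like infected hosts)
```

The point of the design is the one the abstract makes: once labels come for free from AV reports, the classifier learns behavioral patterns that obfuscated malware cannot trivially evade the way it evades content signatures.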

Download
PDF