How a $300 projector can fool Tesla’s Autopilot

Semi-autonomous driving systems don’t understand projected images.

Six months ago, Ben Nassi, a PhD student at Ben-Gurion University advised by Professor Yuval Elovici, carried off a set of successful spoofing attacks against a Mobileye 630 Pro Driver Assist System using inexpensive drones and battery-powered projectors. Since then, he has expanded the technique to experiment—also successfully—with confusing a Tesla Model X and will be presenting his findings at the Cybertech Israel conference in Tel Aviv.

The spoofing attacks largely rely on the difference between human and AI image recognition. For the most part, the images Nassi and his team projected to troll the Tesla would not fool a typical human driver—in fact, some of the spoofing attacks were nearly steganographic, relying on the differences in perception not only to make spoofing attempts successful but also to hide them from human observers.
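Tesla's perception stack is proprietary, so the exact mechanism isn't public, but a minimal, hypothetical sketch of a classic camera-only lane detector illustrates why projected light can be so convincing: to a pixel-based pipeline, a bright projected stripe produces the same high-contrast edges as real paint.

```python
# Hypothetical sketch only: Tesla's real perception stack is proprietary and far
# more sophisticated. This classic Hough-transform lane detector just shows why
# a camera-only pipeline has no built-in way to tell painted lane markings from
# bright projected ones: both arrive as the same high-contrast edges in the image.
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    """Return line segments a naive lane-keeping system would try to follow."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    h, w = edges.shape
    # Keep only a triangular region of interest roughly covering the road ahead.
    roi_mask = np.zeros_like(edges)
    cv2.fillPoly(roi_mask,
                 [np.array([(0, h), (w, h), (w // 2, h // 2)], dtype=np.int32)],
                 255)
    roi = cv2.bitwise_and(edges, roi_mask)
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(seg[0]) for seg in lines]

# A frame containing projected "phantom" lane lines yields the same kind of
# segments as a frame containing real paint; the pixels alone can't tell them apart.
```

Any distinction between paint and projected light has to be engineered in deliberately, which is the kind of defensive design discussed below.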

Nassi created a video outlining what he sees as the danger of these spoofing attacks, which he called “Phantom of the ADAS,” and a small website offering the video, an abstract outlining his work, and the full reference paper itself. We don’t necessarily agree with the spin Nassi puts on his work—for the most part, it looks to us like the Tesla responds pretty reasonably and well to these deliberate attempts to confuse its sensors. We do think this kind of work is important, however, as it demonstrates the need for defensive design of semi-autonomous driving systems.
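To make that point concrete, here is one hypothetical illustration of what defensive design could look like. It is not Tesla's actual logic, just a sketch of a plausibility gate that refuses to act on a detection unless it persists across several consecutive camera frames and is corroborated by an independent sensor such as radar; a phantom projected for a split second, or visible only to the camera, fails both tests.

```python
# Hypothetical defensive-design sketch, not Tesla's actual logic: act on a
# detection only if it persists for several consecutive frames AND is confirmed
# by a second, independent sensor. Briefly projected phantoms fail both checks.
from collections import deque

class DetectionGate:
    def __init__(self, min_consecutive_frames: int = 5):
        self.history = deque(maxlen=min_consecutive_frames)

    def update(self, camera_sees_object: bool, radar_confirms: bool) -> bool:
        """Return True only when the detection is both persistent and corroborated."""
        self.history.append(camera_sees_object and radar_confirms)
        return len(self.history) == self.history.maxlen and all(self.history)

gate = DetectionGate(min_consecutive_frames=5)
# Two frames of a camera-only phantom, then five frames of a real, radar-confirmed object:
frames = [(True, False)] * 2 + [(True, True)] * 5
decisions = [gate.update(camera, radar) for camera, radar in frames]
print(decisions)  # [False, False, False, False, False, False, True]
```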

Because of drone regulations in the country where the experiments took place, Nassi and his team spoofed the Model X with a human assistant holding a projector. But the spoof could just as easily have been carried out by drone, as his earlier attacks on the Mobileye driver-assistance system were.

From a security perspective, the interesting angle here is that the attacker never has to be at the scene of the attack and doesn’t need to leave any evidence behind—and the attacker doesn’t need much technical expertise. A teenager with a $400 drone and a battery-powered projector could reasonably pull this off with no more know-how than “hey, it’d be hilarious to troll cars down at the highway, right?” The equipment doesn’t need to be expensive or fancy—Nassi’s team used several $200-$300 projectors successfully, one of which was rated for only 854×480 resolution and 100 lumens.

https://youtu.be/1cSw4fXYqWI

Of course, nobody should be letting a Tesla drive itself unsupervised in the first place—Autopilot is a Level 2 driver-assistance system, not the controller for a fully autonomous vehicle. Although Tesla did not respond to requests for comment on the record, the company’s press kit describes Autopilot very clearly:

Autopilot is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time. While Autopilot is designed to become more capable over time, in its current form, it is not a self-driving system, it does not turn a Tesla into an autonomous vehicle, and it does not allow the driver to abdicate responsibility. When used properly, Autopilot reduces a driver’s overall workload, and the redundancy of eight external cameras, radar and 12 ultrasonic sensors provides an additional layer of safety that two eyes alone would not have.

Even the name “Autopilot” itself isn’t as inappropriate as many people assume—at least, not if one understands the reality of modern aviation and maritime autopilot systems in the first place. Wikipedia references the FAA’s Advanced Avionics Handbook when it defines autopilots as “systems that do not replace human operators, [but] instead assist them in controlling the vehicle.” On the first page of the Advanced Avionics Handbook’s chapter on automated flight control, it states: “In addition to learning how to use the autopilot, you must also learn when to use it and when not to use it.”

Within these constraints, even the worst of the responses demonstrated in Nassi’s video—that of the Model X swerving to follow fake lane markers on the road—doesn’t seem so bad. In fact, that clip demonstrates exactly what should happen: the owner of the Model X—concerned about what the heck his or her expensive car might do—hit the brakes and took control manually after Autopilot went in an unsafe direction.

The problem is that far too many drivers seem to believe they don’t really need to pay attention. A 2019 survey found that nearly half of the drivers polled believed it was safe to take their hands off the wheel while Autopilot was engaged, and six percent even thought it was OK to take a nap. More recently, Sen. Edward Markey (D-Mass.) called on Tesla to improve the clarity of its marketing and documentation, and Democratic presidential candidate Andrew Yang went hands-free in a campaign ad—just as Elon Musk did before him, in a 2018 60 Minutes segment.

The time may have come to consider legislation about drones and projectors specifically, in much the same way laser pointers were regulated after they became popular and cheap. Some of the techniques used in the spoofing attacks carried out here could also confuse human drivers. And although human drivers are at least theoretically available, alert, and ready to take over for any confused AI system today, that won’t be the case forever. It would be a good idea to start work on regulations prohibiting spoofing of vehicle sensors before we no longer have humans backing them up.

Source: Ars Technica