Tesla Autopilot Hack Raises New Self-Driving Car Concerns

Tesla's Autopilot could be misled by rogue markings on the road, security researchers have found, though the EV-maker insists there's no cause for concern if the driver-assistance technology is used correctly. Although the research focuses on Tesla's systems, it raises broader questions about the resilience of the technologies that will underpin truly autonomous cars, once such vehicles become available.

The research is the handiwork of Tencent Keen Security Lab, which had previously demonstrated an exploit – since fixed by Tesla – through which Autopilot could be remotely compromised. In this new project, the team explored ways in which the cars' computer vision systems can be fooled, with varying degrees of risk as a result.

In one exploit, the automatic windshield wipers of a Tesla Model S 75 running Autopilot hardware version 2.5 (and 2018.6.1 software) could be persuaded to activate using a so-called "adversarial" image. Tesla uses a neural network model to trigger the wipers, identifying raindrops building up on the glass. By crafting a special image designed to exploit the network's training, the researchers could trick that system into reacting to rain that wasn't there.
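
Neither Tesla nor Tencent spells out the exact perturbation in this summary, but the general technique is well documented: nudge an input image along the gradient of the model's loss so the classifier flips its answer, while keeping the change nearly invisible. The sketch below is a minimal fast-gradient-sign (FGSM) example in PyTorch, using a tiny stand-in network and made-up "dry"/"rain" labels purely for illustration; it is not Tesla's wiper model.

```python
# Minimal sketch of a fast-gradient-sign (FGSM) adversarial image, the general
# technique behind "adversarial" inputs. The tiny CNN and the "dry"/"rain"
# labels here are stand-ins, NOT Tesla's or Tencent's actual model.
import torch
import torch.nn as nn

# Stand-in binary classifier: 0 = dry windshield, 1 = raindrops present.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # a "dry" windshield frame
target = torch.tensor([1])                            # class we want to force: "rain"

# Step the pixels toward the target class by following the negative gradient
# of the loss; the perturbation is tiny (epsilon) yet can flip the prediction.
loss = nn.functional.cross_entropy(model(image), target)
loss.backward()
epsilon = 0.03
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```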

To be fair, and as Tesla itself points out, some of the exploits may exist in theory but present little in the way of real-world risk. Fooling the automatic wipers, for example, requires putting a TV right in front of the car. That's unlikely to be a situation Tesla drivers find themselves in.

Another hack, meanwhile, which gained access to the car's steering system, was addressed in a security update, Tesla says. Released in two stages, in 2017 and 2018, the update apparently patched the "primary vulnerability" the researchers exploited. "In the many years that we have had cars on the road, we have never seen a single customer ever affected by any of the research in this report," the automaker adds.

As cars get smarter, the driver's job evolves

Tesla also commented on what's arguably the most real-world-applicable exploit, in which lane recognition was fooled by specific markings on the ground. There, patches of tape placed on the road were sufficient to confuse the neural network, either by masking an existing lane marking completely, or by introducing a fake lane where none existed.
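
Tesla's lane-recognition stack is a proprietary neural network, so the sketch below is not its actual pipeline; it's a classic edge-plus-Hough lane finder in OpenCV that illustrates the underlying weakness: anything on the asphalt with lane-like contrast and geometry, a strip of tape included, registers as a candidate lane line. The file name and thresholds are placeholders.

```python
# Illustrative only: a classic edge-plus-Hough lane finder (not Tesla's neural
# network). The point is that anything with lane-like contrast on the road
# surface, including a few patches of tape, shows up as a candidate lane line.
import cv2
import numpy as np

def find_lane_candidates(frame_bgr: np.ndarray) -> np.ndarray:
    """Return line segments (x1, y1, x2, y2) that look like lane markings."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower, road-facing part of the frame.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    road_edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform: any sufficiently long, straight,
    # high-contrast streak is reported -- real paint or stray tape alike.
    lines = cv2.HoughLinesP(road_edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=40, maxLineGap=20)
    return lines if lines is not None else np.empty((0, 1, 4), dtype=np.int32)

# Usage sketch: feed a dashcam frame and count how many "lanes" are found.
frame = cv2.imread("dashcam_frame.jpg")  # hypothetical input image
if frame is not None:
    print(len(find_lane_candidates(frame)), "lane-like segments detected")
```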

"In this demonstration the researchers adjusted the physical environment (e.g. placing tape on the road or altering lane lines) around the vehicle to make the car behave differently when Autopilot is in use," Tesla said in response to the lane-recognition research. "This is not a real-world concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should be prepared to do so at all times."

Tesla's response to the lane-recognition exploit once again puts the reality of Autopilot, and how some mischaracterize it, under the microscope. Despite the automaker's ambitious promises that, in the not-too-distant future, your Tesla will be able to drive you entirely autonomously, Autopilot today remains resolutely a Level 2 system. That means driver assistance: the person behind the wheel is still responsible for making sure the car is driven safely, whatever technologies are also involved.

Just how attentive any driver making use of features like adaptive cruise control and lane-keeping is, compared to someone operating their vehicle entirely manually, remains an unanswered question. What seems clear, though, is that the potential hazards such drivers must be attuned to are evolving. The version of reality the car's systems see, and base their decisions on, can differ significantly from the version the human operator sees, whether the difference is tape on the road or something else.

Which sensors are essential for autonomous driving?

The research – and specifically its commentary on computer vision – highlights another lingering question around autonomous driving. Tesla is on record as insisting that the current array of sensors on its cars should be sufficient for Level 4 or Level 5 self-driving. At that point, the human behind the wheel should be able to leave safe operation of the car entirely to the vehicle's systems.

Notably, though, no current Tesla has LIDAR, a laser-based range sensor. Instead, the automaker relies on cameras, radar, and ultrasonic sensors. Experts in the segment are divided over whether that will be sufficient for truly autonomous driving, and this research adds to existing concerns that computer vision alone may not be enough.

"Tesla uses a pure computer vision solution for lane recognition, and we found in this attack experiment that the vehicle driving decision is only based on computer vision lane recognition results," the Tencent researchers conclude. "Our experiments proved that this architecture has security risks and reverse lane recognition is one of the necessary functions for autonomous driving in non-closed roads. In the scene we build, if the vehicle knows that the fake lane is pointing to the reverse lane, it should ignore this fake lane and then it could avoid a traffic accident."

Of course, with truly self-driving cars likely to be years away from reaching the market, Tesla – and other automakers – still have time to work on the issue. All the same, the EV-maker is unique in selling vehicles with a paid full self-driving option, even though it isn't active today. That makes it all the more vital that Tesla settles the computer vision question, lest it frustrate early adopters who have already paid for a feature they've been promised.
