Attacks on AV sensors are popular at regional conferences and at the fountain of knowledge (aka DEFCON). The attacks are always interesting (e.g., SQL injection to gain access, or fake object points injected into LiDAR). Earlier this year a new attack emerged: MadRadar, designed to mask real objects from a vehicle's radar or to create fake/phantom objects in its view. Specifically, the attack can induce false positives (phantom objects), false negatives (masked objects), and translation attacks (shifting a real object's apparent position).
The attack, created by a team at Duke University, is agnostic to the target and may be used against any vehicle's radar system, which makes it exceptionally dangerous. In one demonstration, the radar was tricked into detecting a vehicle driving towards it when that vehicle was actually driving away from the targeted vehicle.
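For intuition on that direction flip, consider the standard FMCW velocity model: a radar infers radial velocity from the phase advance between consecutive chirps, so an attacker who injects a signal with the negated phase step makes a receding target appear to approach. This is a toy sketch of that underlying math, not MadRadar's actual implementation; the 77 GHz carrier and 50 µs chirp time are illustrative assumptions.

```python
import math

# Wavelength of an assumed 77 GHz automotive radar (m).
LAMBDA = 3.0e8 / 77e9

def chirp_phase_step(velocity_mps: float, chirp_time_s: float) -> float:
    """Chirp-to-chirp phase advance (rad) produced by a target with this radial velocity."""
    return 4.0 * math.pi * velocity_mps * chirp_time_s / LAMBDA

def apparent_velocity(phase_step_rad: float, chirp_time_s: float) -> float:
    """Velocity the victim radar infers from a measured phase step (inverse of the above)."""
    return phase_step_rad * LAMBDA / (4.0 * math.pi * chirp_time_s)

tc = 50e-6                               # assumed chirp time (s)
step = chirp_phase_step(-10.0, tc)       # real target receding at 10 m/s
spoofed = apparent_velocity(-step, tc)   # attacker negates the phase step
# The victim now infers +10 m/s: a vehicle approaching instead of receding.
```

The point of the sketch is only that the spoofed quantity is a phase relationship, so reversing an object's apparent direction requires no change to its range signature.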
Other demonstrations created a vehicle where there was none. The attack is also flexible: by listening to the target radar's transmitted signals, the tool learns its bandwidth, chirp time, and frame time, and adjusts itself to different types of radar accordingly.
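Once those parameters are known, placing a phantom at a chosen range is basic FMCW arithmetic: the spoofer replays a signal with the round-trip delay a real object at that range would produce, and the victim's mixer then measures the matching beat frequency. A minimal sketch under assumed parameters (1 GHz sweep over a 50 µs chirp; none of these numbers come from the MadRadar work itself):

```python
# Toy FMCW spoofing arithmetic: given an estimated bandwidth and chirp time,
# compute the replay delay that makes a phantom appear at a chosen range.
C = 3.0e8  # speed of light, m/s

def chirp_slope(bandwidth_hz: float, chirp_time_s: float) -> float:
    """Slope S of the FMCW frequency ramp (Hz/s)."""
    return bandwidth_hz / chirp_time_s

def phantom_delay(target_range_m: float) -> float:
    """Round-trip delay (s) a real object at this range would produce."""
    return 2.0 * target_range_m / C

def beat_frequency(bandwidth_hz: float, chirp_time_s: float, target_range_m: float) -> float:
    """Beat frequency (Hz) the victim radar measures for that delay."""
    return chirp_slope(bandwidth_hz, chirp_time_s) * phantom_delay(target_range_m)

bw, tc = 1.0e9, 50e-6                 # assumed radar parameters
d = phantom_delay(60.0)               # delay to fake a car 60 m ahead
fb = beat_frequency(bw, tc, 60.0)     # beat frequency the victim will see (8 MHz here)
```

This is why learning bandwidth and chirp time matters: the same replay delay produces a different apparent range on every radar model, so an attack that does not estimate these parameters first would place its phantoms unpredictably.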
This would be a viable attack against the vehicle functions that depend on radar, such as adaptive cruise control. With vehicles depending on sensors for autonomous driving, this is especially problematic. It may also affect park assist, blind spot detection, and rear collision warning. While this comes from researchers rather than attackers in the wild, it remains a viable attack that could complicate AV operations.