# Autonomous Vehicles, HD Maps, and Their Security

Autonomous vehicles are a hot research topic. Many researchers and companies are investigating deep learning models that control the vehicle based on data coming from cameras, other sensors, GPS, and so on.

![](https://i.imgur.com/xPbvQCa.png)
(ref: https://arxiv.org/abs/2104.01789)

Of course, security also matters: a malicious attacker must not be able to cause an accident. From a security perspective, the cameras and sensors mounted on the vehicle are an attack surface, and so are the cloud services the vehicle connects to.

## HD map

These vehicles often connect to a cloud service to download an HD map (high-definition map). An HD map includes information such as road lanes, signs, and obstacles. Vehicles can use this data to plan a route and to enhance their perception of the surrounding environment. Therefore, even if a vehicle's sensors are hit by adversarial attacks such as an optical adversarial attack (https://arxiv.org/abs/2108.06247v2), it may be theoretically possible to detect and mitigate those attacks with the HD map.

But what about the security of the HD map itself? Vehicles upload the perception data they collect from the real environment to the cloud server via Vehicle-to-Everything (V2X) services so that the HD map stays up to date (ref: https://arxiv.org/abs/2104.01789). This means attackers can also download the map and upload fake data by masquerading as a vehicle. They might be able to monitor each vehicle's movements, or steer route planning in a different direction by uploading fake data. Even if connecting to the cloud services directly is hard, an attacker who can intercept the network connection between some vehicles and the cloud service might be able to read, edit, or block the data in transit (a small code sketch at the end of this post shows one way to protect these uploads from tampering).

## Attacks against maps

In the paper ["Exploiting Social Navigation"](https://arxiv.org/abs/1410.0151), the authors present a Sybil attack against the social navigation application Waze. They created 15 bot driver accounts. Since reports from higher-level drivers carry more influence on the data, they first ran a driving emulation for several hours to train the accounts. After the training, they sent fake driving data claiming their bots were driving at 70 kph and then slowed down. The attack succeeded: Waze concluded there was a traffic jam, so they could influence the routes recommended to users.

![](https://i.imgur.com/rKRgnPF.png)
(ref: https://arxiv.org/abs/1410.0151)

In the same paper, they were also able to track other users and report fake obstacles. The authors describe several ways to mitigate these attacks. One is verifying drivers via carrier data: they discuss verifying a user through a social account, a phone number, or the location of the cellular antenna. Another mitigation is verifying drivers by their behavior, for example with CAPTCHAs, network traffic analysis, or analysis of their report patterns (see the report-pattern sketch at the end of this post). However, they note that performing this kind of verification is complicated and requires constant maintenance.

## Conclusion

Autonomous vehicles rely on their sensors, and researchers have devised several ways to attack those vehicles through their sensors. We think HD maps can be used to mitigate such attacks. However, HD maps are typically generated from real-environment data collected by vehicles and hosted on a cloud service, so we have to make sure that data cannot be manipulated by malicious attackers.
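To make the concern about tampering with V2X uploads more concrete, here is a minimal sketch of signing a perception upload before it leaves the vehicle, so an on-path attacker cannot silently edit it. This is only an illustration under stated assumptions: it assumes each vehicle is provisioned with an Ed25519 key pair whose public key is registered with the cloud service, and the payload fields and `sign_perception_upload` helper are hypothetical rather than part of any real V2X stack.

```python
# Minimal sketch: integrity-protecting a perception upload with Ed25519.
# Assumes each vehicle holds a provisioned key pair whose public key is
# registered with the cloud service; payload fields are hypothetical.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vehicle_key = Ed25519PrivateKey.generate()  # in practice: loaded from a secure element

def sign_perception_upload(payload: dict) -> dict:
    """Serialize the perception payload deterministically and attach a signature."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    signature = vehicle_key.sign(body)
    return {"body": body.decode("utf-8"), "signature": signature.hex()}

upload = sign_perception_upload({
    "vehicle_id": "veh-0042",           # hypothetical identifier
    "timestamp": 1700000000,
    "lane_observations": [{"lane_id": 7, "offset_m": 0.3}],
})
# The cloud service verifies the signature against the registered public key
# before merging the observation into the HD map, so an attacker who edits the
# body in transit invalidates the signature.
```

Note that a signature only protects integrity in transit; it does not stop a Sybil attacker who controls legitimate-looking accounts from uploading signed fake data, which is exactly the problem the Waze paper studies.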
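As one toy example of the report-pattern analysis mentioned among the Waze mitigations, the following sketch flags an account whose consecutive speed reports imply a physically impossible deceleration. The report format and threshold are my own assumptions, not values from the paper, and a patient attacker who slows down gradually (like the traffic-jam attack above) would not be caught by this check alone.

```python
# Minimal sketch of a report-pattern check: flag an account whose reported
# speeds drop faster than any real vehicle could brake.
# Thresholds and the report format are assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class SpeedReport:
    timestamp_s: float   # when the report was sent
    speed_kph: float     # speed the account claims to be driving

MAX_DECEL_MPS2 = 9.0     # roughly the limit of hard braking on dry asphalt

def is_implausible(reports: list[SpeedReport]) -> bool:
    """Return True if any pair of consecutive reports implies impossible braking."""
    for prev, cur in zip(reports, reports[1:]):
        dt = cur.timestamp_s - prev.timestamp_s
        if dt <= 0:
            return True  # out-of-order or duplicated timestamps are suspicious too
        decel = (prev.speed_kph - cur.speed_kph) / 3.6 / dt  # kph -> m/s, per second
        if decel > MAX_DECEL_MPS2:
            return True
    return False

# A bot that claims 70 kph and then reports a standstill one second later
# exceeds any plausible deceleration and would be flagged for review.
print(is_implausible([SpeedReport(0.0, 70.0), SpeedReport(1.0, 0.0)]))  # True
```

In practice this kind of check would be one signal among many (account age, report clustering, cross-checks against other drivers), which is why the authors point out that behavioral verification is complicated and needs constant maintenance.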