The European Union Agency for Cybersecurity (ENISA) and the European Commission's Joint Research Centre (JRC) have published a report on the cybersecurity challenges of using artificial intelligence in automated driving. The report notes that the use of AI in cars can create new risks, and it suggests ways to reduce them.
Self-driving cars aim to make transportation safer for everyone by eliminating the leading cause of accidents: human error. But these products can also pose new risks to drivers, passengers, and pedestrians.
Autonomous cars use artificial intelligence systems and machine-learning techniques to control the vehicle, collecting, analyzing, and transmitting data in order to make decisions on the road. But like all computer systems, these systems are vulnerable to attack, and a successful attack can endanger human lives.
JRC Director General Stephen Quest said in a press release about the report and its findings:
European policymakers need to ensure that the benefits of automated driving systems are not achieved at the expense of safety. This report seeks to improve our understanding of the AI techniques used in self-driving cars and their cybersecurity risks, so that measures can be taken to secure the AI used in automated driving systems.
The artificial intelligence systems in self-driving cars work continuously to detect traffic signs, road markings, and other vehicles, and to estimate their speed and direction of travel. Beyond unintentional hazards such as sudden software errors, these systems can be attacked deliberately in a variety of ways. For example, painting marks on a road or placing a sticker on a traffic sign can prevent the AI from recognizing its surroundings correctly and push it toward dangerous decisions.
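The sticker attack described above is an instance of an adversarial example: a small, deliberately chosen perturbation that flips a classifier's output. The sketch below illustrates the idea on a deliberately toy linear "sign classifier" with made-up weights (not any real perception model from the report), using the sign of the model's gradient to choose the perturbation, as in the well-known fast gradient sign method:

```python
import numpy as np

# Toy linear "sign classifier": score = w . x + b; positive => "stop sign".
# The weights are hypothetical and only illustrate the attack mechanics.
rng = np.random.default_rng(0)
w = rng.normal(size=64)  # weights over a 64-"pixel" input
b = 0.0

def predict(x):
    """Return 1 for 'stop sign', 0 otherwise."""
    return 1 if w @ x + b > 0 else 0

# A clean input that the model confidently labels as a stop sign.
x_clean = w / np.linalg.norm(w)

# Adversarial perturbation: for a linear model, the gradient of the
# score with respect to x is simply w, so stepping each input value
# slightly against sign(w) lowers the score as fast as possible per
# unit of per-pixel change -- analogous to a small sticker on the sign.
epsilon = 0.2
x_adv = x_clean - epsilon * np.sign(w)

print(predict(x_clean), predict(x_adv))  # the small change flips the label
```

Even though each input value moves by at most 0.2, the changes all push the score in the same direction, so the classification flips. Real attacks on image classifiers exploit the same effect in much higher dimensions, which is why physically small stickers can be enough.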
To improve the security of the AI in self-driving cars, the report recommends that the relevant AI components be evaluated regularly throughout the product's lifetime. These safety issues need to be addressed before self-driving cars can be deployed widely on public roads.