
A new way to train AI systems will help protect them from hackers

One of the biggest unsolved problems in deep learning is its vulnerability to so-called "adversarial attacks". Adding information that is random or hidden from the human eye to the input of an AI system can cause it to malfunction. Most adversarial research focuses on image recognition systems, but deep-learning image reconstruction systems are also vulnerable. This poses a serious danger in healthcare, where such systems are often used to reconstruct medical images, such as CT or MRI scans, from x-ray data. For example, a targeted attack could cause the system to show a tumor where there is none.
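The core idea of such attacks can be illustrated with a minimal sketch in the style of the fast gradient sign method. This is not the attack used in any of the studies mentioned here; the toy linear "model" and all names are purely illustrative, showing only how a tiny input perturbation can produce a disproportionately large change in the output.

```python
import numpy as np

# Toy linear "reconstruction" model y = W @ x (illustrative only).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 16))
x = rng.standard_normal(16)

# Gradient of the output magnitude with respect to the input,
# for this linear model.
grad = W.T @ (W @ x)

eps = 0.05  # perturbation budget: small enough to be near-invisible
x_adv = x + eps * np.sign(grad)

# The perturbation is tiny in the input space (bounded by eps)...
print(np.max(np.abs(x_adv - x)))
# ...but the shift in the model's output is much larger.
print(np.linalg.norm(W @ x_adv - W @ x))
```

The sign of the gradient aligns every component of the perturbation with the direction that moves the output fastest, which is why even a budget of 0.05 per pixel can noticeably distort a reconstruction.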

Specialists from the University of Illinois at Urbana-Champaign have proposed a new method for training deep neural networks that reduces such errors and improves the reliability of these systems in critical situations.

The method pits the neural network responsible for reconstructing images against a network that generates adversarial examples, similar to GAN algorithms. Over repeated cycles, the adversarial network tries to trick the reconstruction network into generating elements that are not part of the original data. In turn, the reconstruction network is continually adjusted so as not to be fooled, thereby increasing its reliability.
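The alternation described above can be sketched as a min-max training loop: an attacker step perturbs the input to maximize reconstruction error, and a defender step updates the reconstructor on the perturbed input. The following is a minimal numpy sketch under a toy linear reconstructor, not the researchers' actual implementation; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 16
X = rng.standard_normal((n, d))        # clean "measurements"
Y = X.copy()                           # target: faithful reconstruction
W = 0.1 * rng.standard_normal((d, d))  # toy linear reconstructor

def mse(W):
    # Mean squared reconstruction error on the clean data.
    return float(np.mean((X @ W.T - Y) ** 2))

eps, lr = 0.1, 0.05  # perturbation budget and learning rate (illustrative)
mse_before = mse(W)
for step in range(200):
    # Attacker: one-shot sign perturbation that increases
    # the reconstruction error.
    grad_X = (X @ W.T - Y) @ W
    X_adv = X + eps * np.sign(grad_X)
    # Defender: gradient step on the adversarially perturbed batch.
    grad_W = (X_adv @ W.T - Y).T @ X_adv / n
    W -= lr * grad_W
mse_after = mse(W)
```

Because the defender only ever sees worst-case inputs within the eps budget, the trained reconstructor cannot rely on features that a small perturbation could fabricate, which is the robustness property the method aims for.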

The researchers tested their method on a number of popular image datasets. Although the network they trained proved more effective at restoring the original data than other "fault-tolerant" systems, it still needs improvement.

Experts have repeatedly managed to trick AI with such adversarial attacks. Recall that last February, security researchers from McAfee Advanced Threat Research demonstrated a cyberattack on several Tesla vehicles that caused them to erroneously accelerate from 56 km/h to 136 km/h. The specialists were able to trick the Mobileye EyeQ3 car camera system by slightly altering a speed-limit sign on the side of the road so that the driver would not suspect anything.

Source: https://www.securitylab.ru
