Cybersecurity Threats Loom Over Endpoint AI Systems

With endpoint AI (or TinyML) still in its infancy and slowly being adopted by industry, more companies are incorporating AI into their systems for predictive maintenance in factories or keyword spotting in consumer electronics. But adding an AI component to an IoT system brings new security considerations.

IoT has matured to the point where products can be released into the field with peace of mind: certifications provide assurance that your IP can be secured through a variety of techniques, such as isolated security engines, secure cryptographic key storage, and Arm TrustZone. Such assurances can be found on microcontrollers (MCUs) designed with scalable hardware-based security features. The addition of AI, however, introduces new threats that reach into these otherwise secure areas, most notably adversarial attacks.

Adversarial attacks exploit the complexity of deep learning models and the statistical mathematics underlying them, finding weaknesses that can be triggered in the field to leak parts of the model or its training data, or to produce unexpected outputs. The problem is compounded by the black-box nature of deep neural networks (DNNs): their decision-making is not transparent, buried in the "hidden layers," so customers are reluctant to risk their systems by adding an AI feature, which slows AI proliferation to the endpoint.
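To make the idea concrete, the sketch below shows a minimal fast gradient sign method (FGSM) style perturbation in PyTorch, one common way such attacks are generated. The model, image, and label tensors are placeholder assumptions for illustration, not artifacts of any specific system discussed here.

```python
# Minimal FGSM-style sketch: a small, nearly imperceptible perturbation
# nudges the input in the direction that most increases the loss, which
# can be enough to flip a classifier's prediction.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarial copy of `image` (placeholder model/tensors assumed)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient to maximally increase the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even with a small epsilon, the perturbed image can look unchanged to a human while the model's output shifts entirely, which is exactly why there is no single "bug" to point at.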

Adversarial attacks differ from conventional cyberattacks: when a traditional security threat is found, analysts can patch the offending bug in the source code and document it extensively. Since there is no specific line of code to fix in a DNN, remediation becomes understandably difficult.

Notable examples of adversarial attacks span many applications. In one, a team of researchers led by Kevin Eykholt taped stickers onto stop signs, causing an AI application to classify them as speed limit signs. Such misclassification can lead to traffic accidents and further public distrust of AI-enabled systems.

The researchers achieved 100% misclassification in a lab setting and 84.8% in field tests, proving the stickers were highly effective. The algorithms fooled were based on convolutional neural networks (CNNs), so the attack extends to other use cases built on CNNs, such as object detection and keyword spotting.
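As a rough illustration of how such field results might be quantified, the hypothetical sketch below estimates the misclassification rate of a CNN classifier when a fixed sticker-like patch is pasted onto each test image. The `model`, `test_loader`, `patch`, and patch location are all assumed placeholders, not the setup used by the researchers.

```python
# Hypothetical sketch: measure how often a CNN misclassifies images once a
# fixed adversarial patch (a "sticker") is pasted at a chosen location.
import torch

def misclassification_rate(model, test_loader, patch, x=0, y=0):
    model.eval()
    fooled, total = 0, 0
    patch_h, patch_w = patch.shape[-2:]
    with torch.no_grad():
        for images, labels in test_loader:
            images = images.clone()
            # Overwrite a region of every image with the patch.
            images[:, :, y:y + patch_h, x:x + patch_w] = patch
            predictions = model(images).argmax(dim=1)
            fooled += (predictions != labels).sum().item()
            total += labels.numel()
    return fooled / total
```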
