Cybersecurity Risks in ML

As machine learning continues to develop and grow, it is reshaping industries and the world; AI pioneers Demis Hassabis and John Jumper have even received a Nobel Prize. It is therefore critical to understand the security vulnerabilities of ML systems. ML applications are game-changing, but their deployment also opens new vectors of attack. Here are some of the most pressing threats machine learning systems face.

Model Extraction Attacks

In a model extraction attack, an adversary effectively “steals” a machine learning model simply by posing queries and observing the responses, then fitting a substitute model to the collected input/output pairs. This defeats confidentiality: not just of the model itself, but of its parameters and, indirectly, its training data.

  • A real world example: Consider a spam filter. After extracting an approximation of the filter, an attacker can design spam emails that bypass detection.
  • Data leakage risk: These attacks can also leak sensitive attributes of the training data, such as specific features or individual data points.
  • At-risk platforms: Model extraction is most common against ML-as-a-service platforms, which expose their models through public APIs.
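The query-and-observe loop above can be sketched in a few lines. This is a toy illustration, not a real attack: the “victim” model, the random query strategy, and the surrogate model are all hypothetical choices made for this example.

```python
# Toy sketch of model extraction: the attacker only sees the victim's
# predictions, yet can train a surrogate that closely imitates it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# The "victim" model, which the attacker can only reach via predict().
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
victim = LogisticRegression().fit(X, y)

# Attacker: issue random queries and record the victim's answers...
queries = rng.normal(size=(2000, 5))
answers = victim.predict(queries)

# ...then fit a local surrogate on the stolen input/output pairs.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(queries, answers)

# How often does the surrogate agree with the victim on fresh inputs?
fresh = rng.normal(size=(500, 5))
agreement = (surrogate.predict(fresh) == victim.predict(fresh)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

Note that the attacker never touches the victim's parameters or training data; a few thousand queries are enough to clone most of its behavior, which is exactly why rate limiting and query auditing matter for public prediction APIs.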

Data Poisoning Attacks

In a data poisoning attack, an adversary “poisons” the training data set by injecting incorrect, malicious samples, so that the model learns the wrong behavior or acquires a bias in its output.

  • A real world example: In healthcare, poisoning a medical diagnosis model could lead a provider to misdiagnose a patient.
  • Online learning risk: These attacks are especially hard to defend against and mitigate when they target models that are continually updated with new data.
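A minimal sketch of the idea, under illustrative assumptions: a synthetic dataset, a logistic regression model, and an attacker who injects a cluster of deliberately mislabeled points into the training set.

```python
# Hypothetical data poisoning sketch: injected mislabeled samples
# degrade the accuracy of the model trained on the tainted data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Baseline: model trained on clean data.
clean = LogisticRegression().fit(X_tr, y_tr)
clean_acc = clean.score(X_te, y_te)

# Attacker injects 200 points placed deep inside class 1's region of
# feature space, but deliberately labeled as class 0.
rng = np.random.default_rng(1)
direction = clean.coef_.ravel() / np.linalg.norm(clean.coef_)
X_bad = 6 * direction + 0.3 * rng.normal(size=(200, 10))
y_bad = np.zeros(200, dtype=int)

poisoned = LogisticRegression().fit(
    np.vstack([X_tr, X_bad]), np.concatenate([y_tr, y_bad])
)
poisoned_acc = poisoned.score(X_te, y_te)
print(f"clean: {clean_acc:.2f}, poisoned: {poisoned_acc:.2f}")
```

The poisoned model's accuracy on the untouched test set drops relative to the clean baseline, because the fabricated samples drag the decision boundary away from where the real data sits. In an online learning setting the attacker would feed such samples in gradually, which is what makes continual-learning systems so hard to protect.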

Evasion Attacks (Adversarial Examples)

Evasion attacks, widely known as adversarial examples, compromise model performance by crafting inputs that cause a machine learning model to make wrong predictions.

  • A real world example: An image of a stop sign can be altered so subtly that it still looks like a stop sign to a human, yet a self-driving car’s classifier reads it as a ‘yield’ sign.
  • State-of-the-art threat: Evasion attacks have proven both successful and easy to mount, even against deep learning models considered state of the art. Robust defenses remain an ongoing challenge.
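To make the idea concrete, here is a gradient-sign evasion sketch in the spirit of the fast gradient sign method (FGSM), applied to a linear classifier where the input gradient is simply the weight vector. The model and dataset are illustrative assumptions, not anyone's production system.

```python
# Sketch of an evasion attack on a linear classifier: step each input
# feature in the sign of the gradient until the prediction flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=2)
model = LogisticRegression().fit(X, y)
w, b = model.coef_.ravel(), model.intercept_[0]

# Pick an input and record the model's current prediction.
x = X[0]
orig = model.predict(x.reshape(1, -1))[0]

# Choose epsilon just large enough to cross the decision boundary,
# then perturb every feature by epsilon in the gradient-sign direction.
margin = abs(w @ x + b)
eps = 1.1 * margin / np.abs(w).sum()
direction = np.sign(w) if orig == 0 else -np.sign(w)
x_adv = x + eps * direction

adv = model.predict(x_adv.reshape(1, -1))[0]
print(f"original prediction: {orig}, adversarial prediction: {adv}")
```

Because the model is linear, a per-feature perturbation of size `eps` shifts the decision score by exactly `eps * ||w||_1`, so the flip is guaranteed; against deep networks the same sign-of-gradient trick is only approximate, yet in practice it still succeeds with perturbations far too small for a human to notice.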

These are just a few examples of the vulnerabilities machine learning systems face today. The next update will discuss privacy risks.

#cybersecurity #ML #AI


More articles by Jayesh D.
