Cybersecurity Risks in ML
As machine learning continues to develop and grow, it is reshaping industries and the world; AI pioneers such as Demis Hassabis and John Jumper have even become Nobel laureates. It is therefore critical to understand the security vulnerabilities of ML systems. ML applications are game-changing, but their development introduces new attack vectors. Here are some of the most pressing threats machine learning systems face.
Model Extraction Attacks
In a model extraction attack, an adversary effectively “steals” a machine learning model simply by posing queries and observing the responses. This defeats confidentiality: a sufficiently capable attacker can reconstruct the model's behavior and parameters, and in some cases infer properties of its training data.
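To make the query-and-observe idea concrete, here is a minimal sketch in Python. The “victim” is a hypothetical black-box scoring endpoint backed by a secret linear model (the names `victim_api`, `secret_w`, and `stolen_w` are illustrative, not from any real service); the attacker recovers an equivalent model purely from query/response pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim: a secret linear scoring model, exposed only as an API.
secret_w = np.array([2.0, -1.0, 0.5])

def victim_api(x):
    """Black-box endpoint: the attacker sees only the returned score."""
    return x @ secret_w

# Attacker sends crafted queries and records the responses.
queries = rng.normal(size=(50, 3))
responses = np.array([victim_api(q) for q in queries])

# A least-squares fit over the query/response pairs recovers a surrogate
# model that matches the secret weights.
stolen_w, *_ = np.linalg.lstsq(queries, responses, rcond=None)

print(stolen_w)  # ~[2.0, -1.0, 0.5]
```

Real extraction attacks face noisier outputs (e.g. class labels instead of raw scores) and rate limits, but the principle is the same: enough query/response pairs pin down the model.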
Data Poisoning Attacks
In a data poisoning attack, an adversary “poisons” the training data set by injecting mislabeled or malicious samples, so that the trained model learns the wrong behavior or a bias in its output.
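A toy sketch of the effect, using only numpy: a simple nearest-centroid classifier is trained twice, once on clean data and once after an attacker injects mislabeled points. The data, class locations, and amount of poison are all made-up assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean training data: two well-separated 2-D classes.
X = np.vstack([rng.normal(loc=-2.0, size=(100, 2)),
               rng.normal(loc=+2.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_centroids(X, y):
    """'Train' a nearest-centroid classifier: one mean per class."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    dists = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return (np.argmin(dists, axis=1) == y).mean()

clean_model = train_centroids(X, y)
acc_clean = accuracy(clean_model, X, y)

# Poisoning: inject points deep in class-1 territory, labeled as class 0,
# dragging the class-0 centroid across the decision boundary.
X_poison = np.vstack([X, rng.normal(loc=4.0, size=(300, 2))])
y_poison = np.concatenate([y, np.zeros(300, dtype=int)])

poisoned_model = train_centroids(X_poison, y_poison)
acc_poisoned = accuracy(poisoned_model, X, y)

print(acc_clean, acc_poisoned)  # accuracy collapses on the same clean data
```

The same mechanism scales up: poisoned samples shift whatever statistics the learner estimates, whether centroids here or gradients in a deep network.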
Evasion Attacks (Adversarial Examples)
Evasion attacks, widely known as adversarial examples, compromise model performance at inference time: the attacker crafts inputs with small, often imperceptible perturbations that cause the model to make wrong predictions.
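A minimal sketch of the idea, assuming a white-box setting where the attacker knows the model: an FGSM-style perturbation against a toy logistic regression (the weights and input here are invented for illustration). For a linear model the gradient of the score with respect to the input is just the weight vector, so a small step against the sign of the weights flips a confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed pre-trained logistic regression (white-box: attacker knows w, b).
w = np.array([1.5, -2.0, 1.0])
b = 0.1

x = np.array([1.0, -1.0, 0.5])   # clean input, confidently class 1
p_clean = sigmoid(x @ w + b)

# FGSM-style step: for a linear score, the input gradient is w, so moving
# each feature by eps against sign(w) maximally lowers the class-1 score
# under an L-infinity budget.
eps = 1.5
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(x_adv @ w + b)

print(p_clean, p_adv)  # confidence collapses below 0.5: the label flips
```

Against deep networks the gradient is computed by backpropagation rather than read off directly, and eps is kept small enough that the perturbation is imperceptible, but the attack structure is the same.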
These are just a few examples of the vulnerabilities machine learning systems face today. The next update will discuss privacy risks.
#cybersecurity #ML #AI
Link to the new article on privacy risks in ML: https://www.garudax.id/pulse/privacy-threats-machine-learning-jayesh-daga-nj5mf