Security in AI
Since the hype around ChatGPT began, I have seen a lot of questions arise around security in AI. That such a system can and will be abused to support malicious activities is inevitable; it is the nature of humans (or at least of a subset of humans). But AI offers a lot of upsides we should leverage.
The question for ordinary security people like me is rather: how can we understand, assess and mitigate the risks?
At Microsoft, we developed Responsible AI principles a while ago and enforce them across the company. Additionally, we have people doing AI red teaming to understand how our AI models could be manipulated.
In this context, there are two blogs that might interest you as well. The first one, Best practices for AI security risk management, links to an AI risk management framework that I feel is worth a look. Especially if you come from "classical" cybersecurity, you will realize that a lot of good practices still apply (obviously). Controls like IAM will not go away: how do you protect the developers, the code, the production environment, and so on?
Nevertheless, there are a few points I feel are important:
Reading the AI Security Risk Management document linked in the blog (it is 20 pages) is definitely worth it.
In a second blog, AI security risk assessment using Counterfit, we talk about Counterfit, a tool we open-sourced to automate security testing of AI models.
I feel that this is a good starting point for security people wanting to learn about AI, or about the risks of AI.
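To make the idea behind automated AI security testing a bit more concrete, here is a minimal, self-contained sketch in Python. It is not Counterfit's actual API; the toy classifier and the probe_for_flips helper are hypothetical names invented for illustration. The sketch only shows the core idea such tools build on: systematically perturbing inputs and watching whether the model's prediction flips.

```python
# Illustrative sketch only: a toy "model" and a brute-force perturbation
# probe. Real tools like Counterfit do this far more thoroughly; none of
# these names come from Counterfit's API.

def toy_classifier(features):
    """A deliberately fragile scoring rule standing in for a real model."""
    score = 0.6 * features[0] + 0.4 * features[1]
    return "malicious" if score > 0.5 else "benign"

def probe_for_flips(model, sample, step=0.1, max_steps=5):
    """Try small per-feature perturbations until the label flips.

    Returns a perturbed copy of the sample that changed the prediction,
    or None if no flip was found within the search budget.
    """
    original = model(sample)
    for i in range(len(sample)):
        for direction in (+1, -1):
            for n in range(1, max_steps + 1):
                candidate = list(sample)
                candidate[i] += direction * step * n
                if model(candidate) != original:
                    return candidate
    return None

sample = [0.7, 0.5]                         # scored 0.62 -> "malicious"
adversarial = probe_for_flips(toy_classifier, sample)
print(toy_classifier(sample))               # malicious
print(adversarial is not None)              # True: a small tweak flips it
```

If even this toy rule can be flipped by a small, automated search, a production model deserves the same scrutiny before and after deployment; that is exactly the kind of assessment the blogs above walk through.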