Security in AI

Since the hype around ChatGPT, I have seen a lot of questions arise around security in AI. That such systems can and will be abused to support malicious activities is bound to happen; it is the nature of humans (at least of a subset of humans). But AI offers a lot of upsides we should leverage.

The question for ordinary security people like me is rather: how can we understand, assess, and mitigate the risks so that the business can leverage the upsides? I have learned a lot about AI in recent years, but in some areas I still feel unsure how to act on it from a risk and resilience perspective.

At Microsoft, we developed our Responsible AI principles a while ago and are enforcing them across the company. Additionally, we have people doing AI red teaming to understand how our AI models could be manipulated.

In this context, there are two blog posts that might interest you as well. The first, Best practices for AI security risk management, links to an AI risk management framework that I find well worth a look. Especially if you come from "classical" cybersecurity, you will realize that a lot of good practices still apply (obviously). Controls such as those in IAM will not go away: how do you protect the developers, the code, the production environment, and so on?

Nevertheless, there are a few points I feel are important:

  • Protect and govern training and test data: Interestingly, there are two facets to this. When I talk to customers, they often focus on the confidentiality of the data. While that matters for AI training and test data as well, the integrity of the data plays an equally important role. This covers not only the alteration of existing data but also the trustworthiness of the data source.
  • Model development: On the one hand, this is the classical discipline of controlling a development process, but there is much more to it. You need a threat model for the AI model itself; you need to think about how a bad actor could abuse it. As always with threat models, you need to understand the underlying technology: you need to understand AI and AI models to do that.
  • Model deployment: The same seems true to me for the security and compliance reviews of AI models. I have been in security for a long time, but I am not sure whether I could ask the right questions in such a review (and I feel that a checklist might not help there). Actually, I am sure that I could not… You need an AI security specialist to do these reviews.

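On the data-integrity point, one concrete control is to treat the approved training and test data as an artifact with a verifiable fingerprint, so that silent alteration becomes detectable. A minimal sketch in Python (the directory layout and function names are my own illustration, not something prescribed by the linked framework):

```python
import hashlib
from pathlib import Path


def build_manifest(data_dir: str) -> dict:
    """Record a SHA-256 digest for every file under the training-data directory."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest


def verify_manifest(manifest: dict) -> list:
    """Return the files whose current digest no longer matches the recorded one."""
    tampered = []
    for name, digest in manifest.items():
        path = Path(name)
        current = hashlib.sha256(path.read_bytes()).hexdigest() if path.is_file() else ""
        if current != digest:
            tampered.append(name)
    return tampered
```

Checking the manifest before every training run catches altered or deleted files; it does not, of course, tell you whether the original source was trustworthy in the first place, which is the second facet mentioned above.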
Reading the AI Security Risk Management framework linked in the blog post (it is 20 pages) is definitely worth it.

The second blog post (AI security risk assessment using Counterfit) covers Counterfit, a tool we open-sourced to automate security testing of AI systems.
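
Tools in this space typically probe a model with adversarial inputs. As an illustration of the kind of weakness such testing looks for (this is a toy sketch, not Counterfit itself), here is a fast-gradient-sign-style perturbation that flips the decision of a simple linear classifier:

```python
import numpy as np

# Toy linear "model": score = w @ x + b, predict class 1 if the score is positive.
w = np.array([1.0, -2.0, 0.5])
b = 0.1


def predict(x):
    return int(w @ x + b > 0)


def fgsm_perturb(x, epsilon):
    """Fast Gradient Sign Method step.

    For a linear score w @ x + b, the gradient with respect to x is just w,
    so nudging every feature by epsilon against the sign of the gradient
    pushes the score toward the decision threshold.
    """
    return x - epsilon * np.sign(w)


x = np.array([0.5, -0.2, 0.3])      # confidently classified as class 1
adv = fgsm_perturb(x, epsilon=0.6)  # a small per-feature change flips the prediction
```

Counterfit automates this kind of probing at scale against real models, building on existing attack libraries rather than hand-crafted perturbations like the one above.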

I feel that this is a good starting point for security people wanting to learn about AI – or about the risks of AI.

This is very interesting. AI security and ethics are both necessary.


Thanks for sharing Roger, great read.


Good to know that Microsoft is enforcing responsible AI principles.


Thanks for sharing this Roger!


Thanks Roger! Interesting and useful.

