Cometh the AI
Dreamstudio AI (C) Tim Strong

AI is all over the news at the moment, so I figured I'd throw in some thoughts of my own. Anyone who has worked with me knows that I like to ponder the 'so what?' - which inevitably means trying to think of the worst set of scenarios that could play out given any new tech!

I've bubbled my current thoughts up into two main areas, which I outline below.

Trust in Authentication Factors needs a re-think.

Shon Harris (RIP) wrote the Information Security handbook that many of us studied to pass our CISSP exams. I remember her brilliant explanation of what a 'factor' is, and how it is based on 'something we know, something we are, or something we have.'

At face value it's easy to think that isn't going to change because of AI. However, we need to consider that all of those factors have to go through some form of digitisation in order to be useful in the online world. As an example, we assume that if we ring someone up and speak to them over the phone, and we hear 'something they are' - their voice in this case - then that person is 'who they say they are.' But that voice isn't really the person's voice. It has been sampled by an analogue-to-digital converter, turned into a bitstream and encoded, sent over many different wires using many different protocols, then decoded at the other end and eventually played back through a digital-to-analogue converter and a speaker to the person receiving the call.

AI is already at the stage where it can mimic someone's voice to the point where it's impossible to tell whether the voice is real or fake. Some great examples are on YouTube in the form of AI-created impersonations of famous singers.

The ability to manipulate audio and text-based streams in real time, in order to convince someone that something is different from reality, is already upon us. Video is just around the corner. (In the time it took me to write and re-draft this article, OpenAI announced Sora!)

If we put these capabilities in the hands of an attacker, it creates some alarming scenarios where what was previously considered to be 'authoritative trust' can be falsified.

Imagine this scenario: a business banker who has known his business client for the last 15 years calls her up to check that the $500k transfer request she just put through (at the same time as a phone number change) is genuine. He calls her, immediately recognises her voice and rushes through the phone authentication steps because she has told him she's in a bit of a rush. The trust placed by a human in the recognition of a voice and a 'human interaction' causes us to ignore policy and wave things through.

Now let's imagine it's a million-dollar request, so this time he needs to be really sure and does a quick FaceTime call. Yep, definitely her, no issues - the approval gets done.

AI is an enabling technology for the real-time generation of completely fake, but entirely convincing, text, audio and video of real people. It will only take the insertion of this technology into the authentication chain to completely fool current 'high security' (e.g. multi-factor) authentication methods.

Admittedly, AI isn't yet at the stage where it can pre-compute the next number generated by something such as Google Authenticator, but the technology it brings allows the complete bypass of the mechanism. A manager might receive a video call from their 'employee', for example: 'Hey boss, I lost my phone and now I can't get at my authenticator app and I can't sign in - any chance you can raise a SNOW ticket and get my new phone approved for me?'

Speed

Attackers have never been constrained by the business day or limited budgets. More and more, we are seeing motivated, organised and well-funded threat actors putting time and effort into using new technologies to gain an advantage. It's arguably easier to use AI to attack something than to defend it, because in the case of an attack the technology can be trained to do one very specific thing very well. Using the scenarios sketched out above, it's easy to see how AI can be used to attack, but much harder to envision how it would be used to defend against such attacks. The speed of adoption is likely to be much faster for attackers than for defenders.

In addition, the speed of a systems-level attack driven by AI would be almost impossible for a human actor to defend against. For AI to be able to truly defend, it needs to be able to command all our technologies as if it were a human operator; we need to implicitly trust it with 'administrator'-type access to all of our tech, so that it can do its thing when needed and act across all of our controls at the speed of an attacking AI. This all seems like something from a William Gibson novel, but the concept of good and bad AIs battling each other in cyberspace was written up a long time ago!

So what can we do?

1) Authentication needs to cater for 'joint-known' rather than 'pre-shared' secrets.

Authentication processes need to be able to tap into authentication sources that aren't easy to pre-compute. Current AI generation takes time, so any form of entropy in an interaction forces a real-time response that, for now, only the real human can give. Instead of prompting for secrets like 'where were you born?' (it's too easy to discover this and pre-render a video with that as the answer), we could prompt for situational or scenario-based interactions, whereby a shared history is built up and someone is tested on knowledge of that history - asking 'can you tell me the approximate amount of the purchase you last made at <retailer of choice>?' as an example. Two or three questions of this kind as part of an interactive authentication sequence would make the number of pre-generated answers required to pass authentication 'not worth the effort'.
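As a sketch of what a 'joint-known' check could look like - assuming a hypothetical shared-history store of recent transactions (the `history` records, field names and helper functions below are all invented for illustration) - the verifier picks an event at random and accepts an approximate answer, since real humans remember amounts roughly:

```python
import random

# Hypothetical shared-history records; in practice these would come from
# the bank's own transaction store, known to both parties.
history = [
    {"merchant": "Acme Grocers", "amount": 84.10},
    {"merchant": "Northside Fuel", "amount": 52.00},
    {"merchant": "BookBarn", "amount": 19.95},
]

def make_challenge(records, rng=random):
    """Pick a random shared-history event; the caller asks for its amount."""
    tx = rng.choice(records)
    return tx["merchant"], tx["amount"]

def check_answer(expected: float, answered: float, tolerance: float = 0.25) -> bool:
    """Accept an approximate amount, within a relative tolerance."""
    return abs(answered - expected) <= tolerance * expected

merchant, amount = make_challenge(history)
assert check_answer(amount, amount)       # exact recall passes
assert not check_answer(100.0, 10.0)      # a wildly wrong answer fails
```

Because the challenge is drawn at random from a pool that grows with every interaction, an attacker would need to pre-generate convincing fake answers for the whole pool - which is the 'not worth the effort' property described above.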

2) Authentication needs to go open-ended

We should start catering for open-ended Q&A scenarios, where there is no 'known answer' stored in advance. Instead, authentication might become an interactive Q&A: "Can you quickly tell me what your mother's maiden name added to your street number would be?" or "If you took the last two digits of your phone number and added them to your postcode, what number would you have?" This builds 'liveness' testing into the authentication process, so that pre-computed answers to the normal, scripted and predictable authentication challenges can't be prepared in advance.
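A minimal sketch of such an open-ended challenge, with the verifier deriving the expected answer on the fly from attributes it already holds - the `profile` fields and helper name below are hypothetical, invented purely for illustration:

```python
# Hypothetical stored profile attributes, assumed for illustration only.
profile = {"street_number": 42, "postcode": 2000, "phone": "0412345678"}

def make_liveness_challenge(prof):
    """Combine two stored attributes into a question with no pre-stored answer.

    The expected answer is computed at challenge time, so there is nothing
    static for an attacker to harvest and pre-render a response to.
    """
    last_two = int(prof["phone"][-2:])   # last two digits of the phone number
    question = "Add the last two digits of your phone number to your postcode."
    return question, last_two + prof["postcode"]

question, expected = make_liveness_challenge(profile)
assert expected == 78 + 2000
```

The key design point is that the answer never exists in any database: it is a fresh computation over facts only the genuine person can combine quickly in real time.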

3) We need to get on the front foot with LLMs and collaborate more!

WormGPT and FraudGPT are the bad guys doing their thing. That's expected, but where is SiemGPT or IoC GPT? I've always believed that cybersecurity needs to start acting as a single entity across all industries and employers; we can't share enough, and we can't leverage each other's capabilities and skillsets enough. When you're on the receiving end of an attack, how nice would it be to have a 'task force' of colleagues suddenly appear to help? If you're the focus of a new, customised AI that's hitting you with a novel attack, sheer weight of numbers in the response would be an awesome countermeasure.

In pandemic planning circles, people are cross-trained and pre-vetted to do each other's jobs should the worst case happen. Wouldn't it be great if there was an inter-working agreement between different employers so that, in an emergency, they could each call on a proportion of the others' workforces to assist?

And a last note

Probably the most striking thing about the coming AI shift is the speed at which it will be upon us. I would love everyone to watch <https://www.youtube.com/watch?v=xoVJKj8lcNQ>, an amazing TED talk about the AI moment in history that we're at. As an industry, there is always much for us to do, but this talk hammers home that it's never been more important than now!

