Security in the Age of Artificial Intelligence
Science fiction initially expected artificial intelligence (AI) to empower robotics, replace manual labor, and render data entry positions obsolete through the enormous processing power of modern machines and quantum computing. Instead, AI has excelled at research, creative writing, and art generation, permanently altering academia, advertising, and the artistic landscape.

If raytracing is the holy grail of graphics processing units (GPUs), then sentience is the holy grail of AI. However, unlike raytracing, which successfully simulates the physics of light, simulating sentience requires an advanced form of AI that programmers have yet to master. Until then, security solutions that address the complexity of large language models trained on vast corpora of text, while maintaining users' safety and functional efficacy, will be essential in the 21st century.

With the emergence of OpenAI’s ChatGPT, Google’s Gemini (formerly Bard), Microsoft Copilot, Meta AI, and xAI’s Grok, the world is witnessing a critical paradigm shift in the way data is collected, analyzed, and delivered. Security professionals must focus on hardening these systems against injection attacks embedded in malicious prompts. Understanding familiar web-based injection attacks, such as SQL injection, Cross-Site Scripting (XSS), and Operating System (OS) Command Injection, can reveal attack and mitigation strategies for this novel environment. Just as parameterized queries and stored procedures mitigate SQL injection, analogous measures can be taken against prompt injection.
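The SQL injection analogy above can be sketched in a few lines. The following is a minimal, self-contained Python illustration using an in-memory SQLite database; the table and input values are hypothetical. It shows why building a query by string concatenation is exploitable, while a parameterized query treats attacker input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Classic injection payload: closes the string literal and adds a tautology.
malicious = "x' OR '1'='1"

# Vulnerable: attacker-controlled text is concatenated into the SQL itself,
# so the OR clause changes the query's logic and every row is returned.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Mitigated: the ? placeholder binds the input as a value, never as SQL,
# so the payload matches no row.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # [('alice', ), ('bob', )] -- the whole table leaks
print(safe)    # []
```

The open problem for prompt injection is that a language model has no equivalent of the `?` placeholder: there is not yet a hard boundary between instructions and data, which is exactly why the analogy is worth studying.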

AI is vulnerable because the humans who design it are fallible, and those humans must be just as passionate about security as they are about ethics, if not more so. Why? Just as the circumvention of ethical barriers is already underway, the circumvention of security features is also in progress. In other words, the clock is ticking. Defenders need to protect every point of entry, while an attacker only needs to breach one, a situation known as the Defender’s Dilemma. As Christopher Hitchens once said, “The barbarians never take a city until someone holds the gates open to them.”

There is genuine concern over these systems' intelligence-gathering capabilities and their potential to violate privacy, even in countries that are not explicitly totalitarian like North Korea. Cyber warfare is a new variant of warfare, and we must adjust to the changing times. People willingly divulge sensitive information to these companies and risk a data breach, even when those responsible are not held accountable. Developers must mitigate that risk for both the company's and the user's sake. It serves us all well when we can safely interact with AI in a way that derives maximum benefit while minimizing risk. While eliminating all risk may not be possible, a sincere effort must be invested in the cause.

AI poses a potentially serious security threat to intellectual property, biometric data, and classified information, to name a few. AI poisoning, a method of subtly distorting training data to degrade a model's ability to replicate art, is already being deployed as an intellectual and moral protest against systems such as OpenAI’s DALL·E. Academic integrity is also subject to the destabilizing force of AI: universities worldwide face a crisis of rampant plagiarism because AI content can be generated in seconds from a simple prompt. It is also a potential security threat for nation-states whose classified and top-secret material has been collected by AI engines in the same way that Google’s spiders sometimes crawl exposed directories.

Another significant field that overlaps with AI is quantum computing, which could compromise traditional cryptographic protocols such as RSA (Rivest–Shamir–Adleman) by using Shor's algorithm to factor their public moduli efficiently. Evaluating the intersection of cryptography and number theory, including current factoring methods, yields practical solutions to these problems. New encryption methods from Post-Quantum Cryptography (PQC), standardized by the National Institute of Standards and Technology (NIST), rest on radically different mathematical problems, including lattice-based, hash-based, code-based, and multivariate-polynomial constructions, and will be essential in the evolving era of quantum computing. Furthermore, AI can optimize Quantum Key Distribution (QKD) by improving alignment, stability, and error correction techniques, significantly reducing the quantum bit error rate (QBER) and enhancing encryption resilience.
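As a toy illustration of why Shor's algorithm threatens RSA, the period-finding step at its heart can be brute-forced classically for tiny numbers. The Python sketch below (function names are hypothetical) factors the toy modulus 15; a quantum computer performs the same period-finding step exponentially faster via the quantum Fourier transform, which is what would break realistically sized RSA keys.

```python
from math import gcd

def order(a, n):
    # Smallest r > 0 with a**r == 1 (mod n). This is the "period" Shor's
    # algorithm finds efficiently on quantum hardware; the brute-force loop
    # here scales exponentially in the bit length of n, which is exactly
    # what keeps RSA safe against classical attackers today.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    # Classical post-processing from Shor's algorithm: an even period r
    # with a**(r//2) != -1 (mod n) yields a nontrivial factor of n.
    if gcd(a, n) != 1:
        return gcd(a, n)   # lucky guess: a shares a factor with n
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None        # unlucky base; retry with a different a
    return gcd(pow(a, r // 2) - 1, n)

# Factor the toy "RSA modulus" 15 = 3 * 5 using base a = 7:
# order(7, 15) = 4, and gcd(7**2 - 1, 15) = gcd(48, 15) = 3.
print(shor_classical(15, 7))
```

The PQC schemes mentioned above sidestep this attack entirely by building on problems, such as finding short vectors in lattices, for which no efficient quantum algorithm is known.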

In the future, many possibilities for synergy between AI, quantum machines, and robotics are on the horizon. Silicon-based lifeforms could host AI that rivals our organic, carbon-based intelligence; they could display emotional states that pass a Turing test; they could solve the Clay Mathematics Institute's Millennium Prize Problems, such as the Riemann Hypothesis; they could revolutionize medicine with innovative treatments and diagnostic power; they could empower investors with profound technical analysis; they could even work alongside cybersecurity professionals to build stronger encryption, intelligently manage firewalls, or perfect the art of coding, as they have already begun to do. Soon, the only humans needed in the workforce may be those who develop and deploy these AI-powered devices, technology so advanced that it will let humans solve problems at a rate that makes the universe's expansion look like a casual walk.

Imagine a bad actor altering the training dataset of an AI engine at a pharmaceutical company developing medication, or in the engineering department of Boeing or a defense contractor. I have more to learn about these datasets, specifically how dynamic they are, but people need to start thinking about this. I assume some already are, though I have not seen it discussed.

There is not much discussion about securing the data an AI engine uses to ensure valid conclusions, that is, about the subversion of AI itself. Any thoughts?
