AI - Bubbles and the "Ultimate" Complex System
"Like Sands through the Hour Glass... So are the Days of our Lives"


My whole life, I’ve been intrigued with pattern spotting. Pattern spotting relates to the ability to see underlying interconnections in “complex systems”. A pattern is starting to emerge around Artificial Intelligence (AI) and I’m fascinated about what it could mean.

First, a confession. I’m one of those people who think AI is going to change the world, dramatically. Think printing press, flight, or the internet. Then think bigger.

AI will be that impactful.

That said, the current hype around AI is somewhere between 90 and 95% pure bullsh*t. You see it everywhere: newspapers, television, certainly on LinkedIn, at conference 'panel discussions' and in the workplace. According to the armchair experts, coding is dead, and agentic AI is about to replace knowledge workers. You know it's getting silly when Ray Dalio's clone is making recommendations on "AI Clones".

If AI proves to achieve what it appears to offer for the future (with the future measured in many years, not a few months), AI is about to become a key enabler of the greatest 'complex system' known to mankind. And, I believe, the underlying patterns with respect to AI are beginning to show themselves.

Before diving in though, it may help to define pattern spotting and describe what a complex system is.

Pattern Spotting and Complex Systems

Simply put, pattern spotting is the ability to recognise underlying trends and patterns in complex systems. That begs the question: what's a complex system?

Whilst I could go down a rabbit hole in trying to define a ‘complex system’, I’ll keep it high-level.

A complex system is made up of many interconnected components (or agents) that interact with each other, resulting in collective behaviour that can't be predicted from the behaviour of the individual components.

A company is a complex system, a community is a complex system, a computer is a complex system, and the internet is a complex system. You get the idea. Basically, the world around us is one, gigantic complex system.

A pattern spotter excels in seeing underlying insights, characteristics and tendencies in complex systems. Pattern spotters may also see interconnections between sub-components that others fail to recognise.

Personality traits of a spotter include insatiable curiosity, resilience, the ability to flip effortlessly between micro-detail and macro-overview, and the ability to survive and thrive (a.k.a. operate) at the edge of organised chaos. Skills of the pattern spotter include finding solutions to difficult, complex challenges (problem solving), big-picture thinking, attention to detail and the ability to hyper-focus for extended periods of time.

For me, the intrigue comes from unpicking how things work. I'm sure my math & engineering background and a career focused on problem solving in increasingly complex systems and situations are equal contributors to the fascination. Anyone who knows me reasonably well would say, "yeah, pattern spotter, that sounds a bit like Jeff".

So, in true ‘pattern spotter’ behaviour, what do I believe I’m seeing?

The Mother of All Bubbles?

It's no secret that billions of investment dollars are pouring into AI. In the first half of 2025, AI-related capital expenditure contributed 1.1% to US GDP growth over that period, outpacing US consumer spending as an economic driver. Investment in data center construction to support AI compute is projected to surpass investment in traditional office space, and over 70% of total equity venture capital investment this year is going to AI-related industries.

The capital market perspective is equally interesting. Nvidia accounts for 26% of the S&P 500's year-to-date advance. Nvidia, Meta, Microsoft and Broadcom together have contributed over six percentage points of the S&P's year-to-date appreciation of 10%, according to DataTrek Research. "If there had been no Gen AI buzz this year, the S&P 500 would likely be up 3% to 4% instead of 10%," wrote Nicholas Colas, co-founder of DataTrek Research.

Data Point #1 – there's a hell of a lot of investment going into AI

There is also circular financing and self-dealing occurring within the AI space. Nvidia invests $100 billion in its customer, OpenAI. OpenAI and its data center partners buy chips from Nvidia. Microsoft, Oracle, and CoreWeave (a data center operator) receive funding from OpenAI and lend to Nvidia. And around it goes. Barron's recently published a note outlining the circular financing happening between the main players. Worth a read.

[Image: Circular Finance]

As is often said, a ‘picture is worth a thousand words’ and the image from the article says it all.

Data Point #2 – leverage is increasing and transparency decreasing

And then there is the hype. Earlier in this piece I mentioned Ray Dalio’s clone.  Like, really? Perhaps his clone can offer a view on AI hype? I'll need to check that and get back to you.

Recently, I went to an “AI” dinner sponsored by a software provider whose product-market fit ambition is to offer AI-powered tools to enhance developer productivity. Besides the fact that the sponsor spent a lot of money on the dinner venue and event (organised by an event group who were also being paid), the major theme throughout the night was FOMO and a lack of clarity as to what the ‘killer application’ is for AI within business. Whilst it was a good event and a positive that like-minded professionals are discussing the innovation potential, there was little to no discussion on benefits realisation, measurement or how the spread of AI will be governed.

With respect to governance, one thing that struck me was the lack of clarity on 'ownership' within organisations. Who owns the AI agents being deployed? The person who deployed the technology? The department that houses that person? Maybe it should be the IT department; hell, it is software, right? Or is it the company itself? Who is accountable in the event of a problem? One participant talked about their organisation having deployed over 500 agents, yet had no clear answer on ownership, accountability or governance.

Data Point #3 – to keep pace with the hype, AI is being deployed with little consideration of the risks

This is starting to feel a bit speculative, as in speculative bubble, and a big one at that, given the trillions of dollars being sloshed around the place.

And I'm in good company here. Scott Galloway's most recent newsletter, 'No Mercy/No Malice', outlines exactly what I'm on about. Interestingly, I started this blog and then stumbled onto his newsletter while researching statistics on AI-related stocks and the contribution of AI investment to GDP growth. Great minds think alike, so the saying goes. Scott's mind seems pretty good; not sure how great mine is, but perhaps it's just that fellow pattern spotters think alike?

Like Sands Through The Hour Glass...

This is starting to feel like previous historical cycles, where the scale of the investment, leverage and market 'self-dealing' adds up to a mega-investment that (so far) is outpacing the economy's ability to absorb and leverage the new innovations.

Sure, executives are using Claude to write board papers and students are using it to ‘cheat’ on their homework, but does anyone think the benefits and payback of these use cases are worth trillions of dollars of investment? (Note: I joke about students using AI to cheat – I believe it will be through the next generation that the truly innovative applications for AI will be first invented. The value is not likely to come from doing the same things we do today.)

Like the railroad investment boom, or more recently the dot-com boom, it's quite possible we're a bit overzealous and, based on all the promise and hype, have grossly underestimated how long it may take to realise the value/efficiency/utility of the innovation we call "AI". It took 30+ years for the investments in railroad networks to start to pay off; the entire cycle (investment -> boom -> bust -> adoption) may take decades.

One way to think about the hype and the AI innovation 'race' is to consider another historical example: the Space Race (circa 1960), when nations competed against one another to put a man on the moon. Many great things resulted from the Space Race besides simply putting a man on the moon. It spurred the development of the semiconductor, advanced aeronautical concepts, and ultimately the microcomputer. The AI boom feels like that pattern. There will be great innovation that comes from AI but, as with the Space Race, who predicted the home computer market and Microsoft in 1962?

It's easy to say it's a bubble; much harder to prove it and/or to predict if/when it will pop. Let's face it, predicting markets, bubbles or when they will pop is a mug's game. There's very likely more room to run here, for the investments to continue and the stocks to keep climbing as nations ramp up investment to compete against each other on AI innovation.

A useful model for thinking about this (and complex systems generally) is the 'sandpile' analogy. A sand pile grows as single grains are dropped onto it, one after another, until it reaches a critical state. The grain that finally triggers the collapse is no larger than any other grain, but at that critical juncture it starts an avalanche and the pile collapses.
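The sandpile analogy has a well-known formal counterpart, the Bak-Tang-Wiesenfeld sandpile model. As a minimal sketch (an illustrative toy only, not anything the article itself specifies, and certainly not a market model), the grid size and threshold below are arbitrary choices: grains land one at a time, any cell holding four or more grains topples and sends one grain to each neighbour, and a single grain can set off a cascade of any size.

```python
# Toy Bak-Tang-Wiesenfeld sandpile: identical grains, wildly unequal avalanches.
import random

SIZE = 20        # grid is SIZE x SIZE; grains falling off the edge are lost
THRESHOLD = 4    # a cell topples once it holds this many grains

def topple(grid):
    """Relax the grid after a drop; return the avalanche size (topplings)."""
    avalanche = 0
    unstable = [(r, c) for r in range(SIZE) for c in range(SIZE)
                if grid[r][c] >= THRESHOLD]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < THRESHOLD:
            continue                     # already relaxed by an earlier topple
        grid[r][c] -= THRESHOLD
        avalanche += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                grid[nr][nc] += 1
                if grid[nr][nc] >= THRESHOLD:
                    unstable.append((nr, nc))
    return avalanche

def drop_grains(n, seed=0):
    """Drop n grains on random cells; return the avalanche size of each drop."""
    random.seed(seed)
    grid = [[0] * SIZE for _ in range(SIZE)]
    return_sizes = []
    for _ in range(n):
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        grid[r][c] += 1
        return_sizes.append(topple(grid))
    return return_sizes

sizes = drop_grains(20_000)
# Most drops trigger nothing; a rare few trigger system-spanning cascades.
print("largest avalanche:", max(sizes), "| drops causing none:", sizes.count(0))
```

The point of the toy is the one the analogy makes: every input is identical, yet once the pile sits at its critical state, the size of the next avalanche is effectively unpredictable.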

How big the sand pile grows, and how/when it will collapse - well, that’s anyone’s guess. Pattern spotting can only take one so far.

It sure will be interesting watching it though.

Jeffery Eberwein is a senior business executive specialising in technology, data and artificial intelligence and their implications for business. The views expressed in this article are the views of the author and not the views of any affiliated or referenced organisation, either directly or indirectly


Jeffery Eberwein: You mentioned no one predicted home computers in 1962. I just went down a rabbit hole to imagine that the AI byproducts might be even more fundamental: What if, over two decades of AI development, systems evolve into sophisticated architectures capable of genuine cognitive partnership. Neural interface technologies transform brain-computer communication into bidirectional cognitive integration. By year 10-12, early adopters begin merging their cognitive processes with AI systems, fully augmenting their conscious experience with artificial intelligence. These integrated individuals experience reality through fundamentally expanded cognitive architecture, processing information across scales and dimensions inaccessible to baseline humans. Their motivational systems transform. Core human drivers like social status, dominance, purpose lose relevance or reshape entirely. By year 15-20, adoption reaches critical mass. Collective human behavior transforms unpredictably. Everything that once motivated human behavior changes.


Yeah, pattern spotter, that sounds just like you, Jeffery Eberwein 😄 It feels a lot like the dot-com bubble all over again. I still remember sitting in Darling Harbour in Sydney, building e-biz sandcastles the waves soon took. The scale's bigger now, but all the same patterns are right there to see.
