Expectations of an explosive growth in edge computing.
(Image: DALL-E 3 – Edge compute and an airing of grievances.)

Expectations of an explosive growth in edge computing.

While there are definite disparities, depending on who is reporting and how, it’s clear that the next five years will see a meteoric rise in the adoption of edge computing. As a natural evolution of my work in NFV, I have followed ETSI’s standardization of multi-access edge computing (MEC) platforms since its mobile-edge inception in late 2014. Although ‘mobile’ became ‘multi’ soon after its introduction, MEC’s close relationship with 5G results in a confluence of the two monikers, even now. The belief was that 5G represented a shift towards the support of incredibly low-latency applications, and everything gained in that regard would be for naught if that traffic were backhauled to a remote data center for packet processing and application computation.

Moving a (standardized or otherwise [1]) edge compute platform to the operator’s edge or the customer premises would enable the deployment of decoupled 5G user plane functions (UPFs) alongside such applications. This allows packets to be steered and processed locally, and then acted on without the detrimental effect of additional round-trip delays. It’s incredibly compelling. So, why did I just talk about it in the past tense? Quite subconsciously, I might add. Well, 5G’s low latencies depended, in no small part, on the adoption of high-band mmWaves, which have not yet materialized in any significant manner because of spectrum availability, coverage cost, and more.

Moreover, with the primary markets being private enterprises employing (fully) shared spectrum, the issue is compounded by the listen-before-talk (LBT) techniques, like CSMA/CA, that one must employ when contending for transmission time. Obviously, any preamble and subsequent backoff utterly blows up the promise of reduced latencies – even with mmWaves.
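To put some rough numbers on that, here’s a back-of-the-envelope sketch of the mean delay CSMA/CA adds before a frame even hits the air. It uses the 802.11 OFDM PHY’s nominal slot and interframe timings – illustrative values, not a channel model:

```python
# Illustrative sketch: mean medium-access delay added by CSMA/CA
# listen-before-talk, using nominal 802.11 OFDM PHY timings.
# Real-world contention under load is worse than these numbers.

SLOT_US = 9                       # slot time (802.11 OFDM PHY)
SIFS_US = 16                      # short interframe space
DIFS_US = SIFS_US + 2 * SLOT_US   # DCF interframe space = 34 us
CW_MIN = 15                       # minimum contention window (slots)
CW_MAX = 1023                     # maximum contention window (slots)

def expected_access_delay_us(retries: int) -> float:
    """Mean delay before transmission after `retries` failed attempts:
    each attempt pays DIFS plus a uniformly drawn backoff whose
    contention window doubles per retry (binary exponential backoff)."""
    delay = 0.0
    cw = CW_MIN
    for _ in range(retries + 1):
        delay += DIFS_US + (cw / 2) * SLOT_US  # mean backoff = CW/2 slots
        cw = min(2 * cw + 1, CW_MAX)           # CW doubles, capped at CWmax
    return delay

for r in range(4):
    print(f"{r} retries -> ~{expected_access_delay_us(r):.0f} us added")
```

Even with these best-case constants, three contention-driven retries push the added access delay past a millisecond – already blowing any sub-millisecond URLLC budget before propagation or processing is counted.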

Now, anyone who knows me knows I’m not one to be overly negative. If the planets and moons align, ultra-reliable low-latency communication (URLLC) with 5G is possible. But the more tangible selling points of 5G in the enterprise (aka private 5G) are more secure communications and better coverage. Plus, there’s the potential for QoS-specific (dedicated) service ‘slices’ without the need to… well… slice. At this point, however, you are betting against Wi-Fi, which never gets you good odds.[2]

With latency questions off the table, you may be left wondering why MEC has you bolting a bunch of unnecessary compute and storage capabilities onto your mobile core. That’s not to say some local compute isn’t still warranted – for reducing backhaul, say, or for steering traffic directly onto specific SD-WAN connections – but 5G proponents must compete with the ubiquity and cost-effectiveness of Wi-Fi.

Expectations of an explosive growth blah blah blah… take 2.

OK – I feel this post has gone off track – like I’m venting to my psychiatrist. I’m confidently expecting LinkedIn to send me a bill, at this point. Let’s get this back on track – paragraph one, sentence two: depending on how it’s sliced and diced, analysts forecast the market size at anywhere from half a billion US$ (MEC) to 15B US$ (edge compute) in 2024, growing to somewhere between 2.5B and 5B US$ (MEC) and 32B US$ (edge compute) by 2030.[3] What my previous ranting did do is clear up some of the distinctions between MEC and edge compute. But there are more. And – again – the delineations are not always clear.

Naturally, with those sorts of numbers and growth potential, it is the edge compute market that is of primary interest to us. There are other standards bodies at work in this arena, however – namely the OpenFog Consortium / Industrial Internet Consortium / Industry IoT Consortium, which (while not changing its letterhead) remains focused on the application of distributed sensors and such.[4] That said, it is non-standard implementations of edge cloud that will make up by far the largest market share. While the hyperscalers will likely lead the charge, there are definite brick walls they will (and do) hit.

Their primary advantage is also their Achilles’ heel: the commonality of control, operations, and services between their centralized locations and those remote processing nodes. This creates a lock-in that all but the most devout converts of any given public cloud provider will likely detest. There are other possible issues as well, ranging from cost to concerns around regional regulatory compliance, such as GDPR (data sovereignty). While their edge offerings are somewhat of a purr, however, it would be a mistake to ignore the roar of their centralized supremacy.

It’s in these large data centers that their economies of scale work in a consumer’s favor, and their often-lackluster approach to individual (edge) hardware profiles is mitigated by the sheer amount of combined processing power at their disposal. That’s the opportunity for others in the edge compute arena, which (5G aside) can still be said to be dominated by the need for processor-intensive network workloads and applications – the type that benefit from fewer transmission delays, fewer backhaul bandwidth concerns, fewer packet losses, and even fewer encode/decode hops before packets are steered from ingress to egress.
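For a feel of the transmission-delay piece alone, consider propagation in fiber: light travels at roughly two-thirds of c in glass, or about 200 km per millisecond of one-way delay. The distances below are illustrative, and this ignores queuing and processing entirely:

```python
# Back-of-the-envelope sketch: round-trip propagation delay to a remote
# data center vs. a nearby edge node. Assumes ~200 km of fiber per
# millisecond of one-way delay (roughly 2/3 the speed of light).

FIBER_KM_PER_MS = 200  # approximate speed of light in fiber

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay only (no queuing or processing)."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(f"Regional cloud, 1000 km away: {rtt_ms(1000):.1f} ms RTT")
print(f"Metro edge site,  50 km away: {rtt_ms(50):.2f} ms RTT")
print(f"On-prem edge,      1 km away: {rtt_ms(1):.3f} ms RTT")
```

A regional cloud costs you around 10 ms of round trip in propagation alone; an on-premises edge node makes that contribution effectively vanish, which is the whole pitch for keeping latency-sensitive packet processing local.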

The holy grail of this approach is the ability to apply a management overlay that can support highly customized edge deployments with specialized applications while leveraging a combination of multi-cloud cores that easily extend general compute capabilities. The hyperscalers provide the hooks to develop such interfaces using REST: Azure has ARM (Azure Resource Manager), AWS its service APIs, and Google its Cloud APIs. All are quite comprehensive, typically missing only newer or less commonly used features.

They are also kind enough to abstract the complexities of these APIs through various software development kits (SDKs). The advantage of being able to control deployments simultaneously across all three – along with a highly customized edge offering – from a common interface is invaluable. An enterprise is not beholden to a single public cloud platform, and an operator need only be familiar with one interface. Not that their portals – taken individually – are obscenely unusable, but maintaining a deep familiarity with all three (especially when rarely used) is challenging.
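As a sketch of what that common interface might look like, the snippet below dispatches one deployment request across per-cloud adapters. The class and method names here are invented for illustration – a real overlay would wrap the respective Azure, AWS, and Google SDK calls inside each adapter:

```python
# Hypothetical sketch of a thin multi-cloud management overlay: one
# interface that fans a deployment request out to per-cloud adapters.
# All names and payloads are illustrative, not any vendor's API.

from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    @abstractmethod
    def deploy(self, name: str, image: str) -> str: ...

class AzureAdapter(CloudAdapter):
    def deploy(self, name, image):
        # a real adapter would drive ARM via the Azure SDK here
        return f"azure:{name}:{image}"

class AwsAdapter(CloudAdapter):
    def deploy(self, name, image):
        # a real adapter would call an AWS SDK such as boto3 here
        return f"aws:{name}:{image}"

class GcpAdapter(CloudAdapter):
    def deploy(self, name, image):
        # a real adapter would use the Google Cloud client libraries here
        return f"gcp:{name}:{image}"

class Overlay:
    """Single control point across public clouds plus a customized edge."""
    def __init__(self, adapters: dict[str, CloudAdapter]):
        self.adapters = adapters

    def deploy_everywhere(self, name: str, image: str) -> list[str]:
        # fan the same request out to every registered cloud
        return [a.deploy(name, image) for a in self.adapters.values()]

overlay = Overlay({"azure": AzureAdapter(),
                   "aws": AwsAdapter(),
                   "gcp": GcpAdapter()})
print(overlay.deploy_everywhere("upf", "registry/upf:1.2"))
```

The design point is simply that the operator learns one `deploy` interface while the per-provider quirks stay buried in the adapters – the same shape the hyperscalers’ own SDKs take against their REST APIs.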

Edge compute will continue to evolve over the next few years. Like everything else, the application of generative AI will be a driver. Right-sized models deployed at the edge can balance performance with resource efficiency, targeting specialized services such as real-time intelligent anomaly detection. Indeed, with the increasing complexity of decoupled, decomposed, and distributed network functions, architectures, and applications, integrating AI at the edge in this capacity is likely the only way to ensure the ongoing integrity and security of enterprise and operator infrastructures. Edge AI is also the counterproposition to the ever-increasing compute demands of training and operating ever more complex large language models – demands unattainable for everyone outside the largest companies on the planet. Do most applications need GPT-5? Arguably not, unless I’m searching for the meaning of life.[5]
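For a flavor of what ‘real-time anomaly detection’ can mean at its very simplest, here’s a minimal sketch that keeps a running mean and variance (Welford’s algorithm) and flags any sample more than k standard deviations from the stream so far. The threshold and readings are illustrative; a production edge-AI service would of course use far richer models:

```python
# Minimal sketch of edge-side streaming anomaly detection: Welford's
# online mean/variance plus a k-sigma threshold. Constant memory, one
# pass - the kind of footprint an edge node can afford. Thresholds and
# sample readings below are illustrative only.

import math

class StreamingAnomalyDetector:
    def __init__(self, k: float = 3.0, warmup: int = 10):
        self.k, self.warmup = k, warmup
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> bool:
        """Return True if x looks anomalous, then fold it into the stats."""
        anomalous = False
        if self.n >= self.warmup:  # don't flag until stats stabilize
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) > self.k * std
        # Welford's online update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

det = StreamingAnomalyDetector()
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.1, 55.0]
flags = [det.update(r) for r in readings]
print(flags)  # only the final spike is flagged
```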

The sheer number and range of edge compute implementations is indeed daunting, and I understand this quick post has done little to cover the scope. That concerned me, a little, until I was reminded of my friend Tilly Gilbert over at STL Partners, whose job it is to track this market. Her annual summary of edge computing companies to watch runs to 100 organizations covering every aspect of this area – some of which I’ve touched on here, directly or indirectly; others I certainly have not.[6] All have a role to play, a niche to exploit, or a compelling advantage to leverage.

I’ll be honest: my original goal for this post was a little synopsis of ETSI MEC, ETSI having recently released the third version of the standard. I was quickly reminded that not everything MEC is the standardized version – and that not everything edge compute is MEC. After airing my personal grievances (thank you again for listening) I have emerged reenergized about the prospects for edge computing in general. It’s been a little journey of rediscovery for me, in that regard – removing the blinkers, if you will – so I am grateful for that.

I may well return to that original premise, at some point. Maybe sooner rather than later. But for now, I’m left with a greater appreciation for the prospects of this disparate collection of technologies to create meaningful architectural changes, application innovations, cost savings for those who actively embrace it, and increased revenue opportunities for its adopters.


Footnotes

  1. Not every MEC on the market is of ETSI origin, which annoys me – but hold my beer because I’m about to lose my mind, apparently.
  2. I wrote about this exact topic here back in 2021, painfully trying to espouse the value of private 5G over Wi-Fi. Ultimately, I lost, I guess – hence my new-found reality. 😊
  3. Courtesy Mordor Intelligence and Omdia
  4. A subject I highlighted here in a MEC blog in 2018 – apparently the heyday for weather related edge cloud nomenclature.
  5. It’s 42.
  6. 100 Edge computing companies to watch in 2024 - STL Partners

More articles by Simon Dredge
