Cloud Edge - a question of latency

Cloud Native lets entrepreneurs focus their creativity on their core business goals, their code and a suitable set of operational requirements, all of which can then be seamlessly mapped onto available service blueprints and delivered through the chosen Cloud Service Provider's (CSP's) global, resilient infrastructure.

Tick tock

Years ago, utility companies keen to divest themselves of homegrown SDH infrastructure found sanctuary in the emerging MPLS/IP services of fledgling ISPs. On paper, protection switching in under 50 ms could be achieved and keep the lights on, yet the inability to guarantee packet fill times occasionally pushed response times beyond 50 ms, and the sector had to rethink.
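A back-of-the-envelope sketch shows why packet fill times matter (the figures below are illustrative assumptions, not from any specific utility deployment): serialisation delay grows with packet size and shrinks with link speed, and any frames queued ahead of a protection message add their own fill time on top.

```python
# Illustrative latency-budget sketch (hypothetical figures): serialisation
# ("packet fill") delay per frame, plus queueing behind earlier frames.

def serialisation_ms(packet_bytes: float, link_mbps: float) -> float:
    """Time to clock one packet onto the wire, in milliseconds."""
    return (packet_bytes * 8) / (link_mbps * 1_000_000) * 1000

# A 1500-byte frame on an assumed 2 Mbps access link:
per_packet = serialisation_ms(1500, 2)   # 6 ms per packet

# If a protection message lands behind 8 queued full-size frames,
# it waits for all of them before it even leaves the box:
queued = 8
wait_ms = per_packet * (queued + 1)      # 54 ms -- the 50 ms budget is blown

print(f"per-packet: {per_packet:.1f} ms, worst-case wait: {wait_ms:.1f} ms")
```

With no way to bound the queue, the 50 ms guarantee holds only on paper, which is exactly the problem the sector ran into.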

Latency had won.

Tomorrow I should be sitting in my autonomous car, reading the news as it computes its way around London, sensing every element and ingesting real-time traffic flows. But what if those flows need low-latency (<10 ms) service round-trip times?

I need my computational resource spun up not at the data centre (>10 ms away) but closer to the action, closer to the Edge.
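Physics makes the point concrete. Assuming light in fibre travels at roughly two-thirds of c (about 200 km per millisecond, one way) and that real fibre routes are longer than straight lines, a 10 ms round-trip budget caps how far away the compute can sit; all figures here are assumptions for illustration.

```python
# Rough distance bound implied by a round-trip budget. Assumed figures:
# ~200 km of fibre per millisecond one way, and a route factor inflating
# straight-line distance to a plausible fibre path.

FIBRE_KM_PER_MS = 200.0   # one-way propagation speed in fibre (assumed)

def max_one_way_km(rtt_budget_ms: float, processing_ms: float,
                   route_factor: float = 1.5) -> float:
    """Upper bound on server distance given a round-trip latency budget."""
    propagation_ms = (rtt_budget_ms - processing_ms) / 2  # one-way share
    return propagation_ms * FIBRE_KM_PER_MS / route_factor

# 10 ms RTT budget, with an assumed 4 ms consumed by network stacks
# and compute at both ends combined:
print(f"{max_one_way_km(10, 4):.0f} km")  # 400 km as the crow flies
```

Four hundred kilometres sounds generous, but every extra router hop, queue and processing stage eats into the budget, and it shrinks fast towards a metro or cell-site footprint.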

At a recent seminar an analyst predicted that 20% of all cloud compute would sit at the edge. Indeed, several years ago at Anothertrail we modelled a micro data centre in the context of digital airports, as a way of segmenting critical local application processing from cloud/data-centre-centric processing. We have since seen local x86 platforms offering service-chained VNFs, and pondered the timing implications of aggregated management flows and the frequency of edge-core synchronisation.

But who controls the low latency Edge?

Enter the mobile operators: masters of the Edge, through whom 5G will soon appear to delight the masses with application slicing on a grand scale.

But I want more.

I want an orchestrated, micro-service-based IaaS in which the CSP provisions geo-locations within specific latency constraints. Even if the operator owns the requisite real estate, does it have a powered facility capable of delivering the required compute/storage environment? Who pays, how is it charged, and how is a low-latency SLA delivered to competing CSPs?

Are we looking at a complete rethink of edge technology: solid-state storage, low-power compute, edge-optimised applications, orchestration and billing?

Tick tock

With services like autonomous vehicles and augmented reality, low-latency round-trip service delivery becomes critical. Just like the utility companies of yesteryear, will the lack of orchestrated, geographic cloud-edge compute force a rethink of certain services?

Has latency won again?
