Network Function Virtualization in the DevOps Age

Paul Liesenberg and Tugrul Firatli

In this blog, we look at Network Function Virtualization (NFV) as it adopts the design patterns that are currently revolutionizing application delivery, namely containers and microservices architectures.

Depending on whom you ask, NFV is already a success story, or it is about to slide from the "peak of inflated expectations" into the "trough of disillusionment" (to use the Gartner Hype Cycle terms). What is undoubtedly true (and easy to find with a web search) is that several early deployments report performance and interoperability issues.

Some people use the term NFV loosely to refer to any network service that creates a virtualized functional overlay of which the underlying physical infrastructure is completely unaware. In this blog, however, we mean the ETSI NFV definition: a reference architecture that is at the core of the largest emerging network infrastructures. While the architectural concept is intuitive enough, its implementation is daunting, given the scale, the resiliency requirements, and the cutting-edge nature of the open source projects used in ETSI NFV implementations, which range from OpenStack to OPNFV and Open vSwitch (OVS) and incorporate a plethora of vendor plug-ins. The service offerings themselves are the VNFs (Virtualized Network Functions): pure software versions of network services that, until recently, would have resided on dedicated network appliances. Think of Firewalls (FW), Session Border Controllers (SBC), Carrier-Grade Network Address Translation (NAT), Deep Packet Inspection (DPI), Intrusion Prevention and Detection Systems (IPS/IDS), and so on. Today these network functions are typically implemented as VMs.

The fact that something like a Firewall (or any other VNF) is implemented as a monolithic software application on top of a VM may explain some of the performance issues being experienced in current trials. Ten years ago we could still have waited for hardware to catch up and save the day by roughly doubling in speed every 18 months (Moore's Law). These days, hardware progress is not as predictable as it used to be.

Incidentally, this is why early adopters of virtualization in the application world started to architect applications in a more modular way, in order to optimize performance. Software patterns evolved from monolithic, self-contained applications running on dedicated mainframes or servers, through SOA (Service Oriented Architecture), Web Services (WS), and RESTful APIs, and finally to microservices. What is a microservice? It is the complete deconstruction of an application into a multitude of separate parts, each running in a lightweight container such as Docker, rkt, or LXD. Why containers? Because VMs, with a whole OS to boot, are simply too heavyweight to host microservices (although containers can run on VMs, a common practice for several reasons I will not go into). To sum up: by deconstructing a service into microservices, you can quickly address a performance issue by spinning up multiple instances of whichever microservice is causing the bottleneck.

It stands to reason that network functions will architecturally evolve the same way the rest of the application world did. Why should a firewall be *one* big monolithic service? Architecturally, it consists of a fast path (for connections that are already established), a connection establishment check that can be quite straightforward if the request complies with trusted rules, and an exception path that applies more complex policies when a connection request is not immediately compliant. Why shouldn't these three elements be implemented as separate microservices, each scaled as needed to optimize the overall Firewall network function's performance? They should, and eventually they will. Not just for a virtualized FW, but for other VNFs, too.
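To make the decomposition concrete, here is a minimal sketch of the three firewall stages described above as independently deployable units. This is purely illustrative, not a real VNF: all names (the shared connection table, the trusted rule, the "ALLOW"/"DENY" verdicts) are hypothetical, and in a real deployment each stage would be its own service with the connection state in a distributed store.

```python
# Illustrative sketch: a stateful firewall decomposed into three stages,
# each of which could run as its own microservice and scale independently.

ESTABLISHED = set()  # shared connection table (in practice, a distributed store)

def trusted(pkt):
    # Hypothetical trusted rule: allow outbound HTTPS
    return pkt["dport"] == 443

def fast_path(pkt):
    """Stage 1: forward packets on connections we already know about."""
    return "ALLOW" if pkt["conn"] in ESTABLISHED else None

def connection_check(pkt):
    """Stage 2: admit new connections that match a trusted rule."""
    if trusted(pkt):
        ESTABLISHED.add(pkt["conn"])
        return "ALLOW"
    return None  # undecided: escalate to the exception path

def exception_path(pkt):
    """Stage 3: apply the full, slower policy to everything else."""
    return "DENY"  # placeholder for complex policy evaluation

def firewall(pkt):
    # Each stage is a candidate microservice; the chain is the VNF.
    for stage in (fast_path, connection_check, exception_path):
        verdict = stage(pkt)
        if verdict is not None:
            return verdict
```

Because most traffic belongs to established connections, the fast path carries the bulk of the load, which is exactly why you would want to scale that stage separately from the other two.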

A microservices architecture has several other benefits as well. One of them is the ability to continuously innovate by embracing an agile DevOps model: companies can constantly push out changes without taking down the entire application. Testing a microservice involves only a few test cases; testing an entire monolithic application takes a complete catalog of them.

But when you mix microservices, containers, and a DevOps model to support CI/CD operating models, you need a new set of tools to create, test, distribute, deploy, and monitor the container pods that implement any particular application, including, in the near future, VNFs. This does not replace the ETSI NFV architecture at all; it feeds into it. A particular microservice runs in a container; an orchestrator such as Kubernetes then dynamically instantiates millions of those containers in pods, according to performance, resilience, and security requirements. A dizzying number of available tools control the different stages of the lifecycle of a container- and microservices-based application delivery model.
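As a sketch of what that orchestration looks like in practice, a single stage of a decomposed firewall, say its fast path, could be deployed as an ordinary Kubernetes Deployment and scaled on its own. The service and image names below are hypothetical, chosen only for illustration:

```yaml
# Illustrative only: the fast-path stage of a decomposed firewall VNF,
# scaled independently of the other stages.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fw-fastpath
spec:
  replicas: 6                  # scale this stage alone when it bottlenecks
  selector:
    matchLabels:
      app: fw-fastpath
  template:
    metadata:
      labels:
        app: fw-fastpath
    spec:
      containers:
      - name: fw-fastpath
        image: example.com/fw-fastpath:1.0   # hypothetical image
        resources:
          requests:
            cpu: "1"
```

Raising `replicas` (or attaching a HorizontalPodAutoscaler) adds fast-path capacity without touching the connection-check or exception-path stages, which is the operational payoff of the decomposition.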

The integration of so many toolsets can be daunting for companies embarking on a microservices-based DevOps journey. As the networking world, like the DevOps world, embraces containers and microservices, Agile Stacks simplifies DevOps automation by providing validated stack templates for Docker containers and microservices: automated to be secure, reliable, maintained, and supported. These stack templates allow companies to easily define their own architectures.

To sum it up: the potentially complex integration of DevOps toolsets to govern a microservices-oriented environment should not stand in the way of network companies that want to explore how deconstructing monolithic VNFs into microservices-powered offerings can deliver constant innovation, optimal performance, and improved security.
