A couple of thoughts on VMs

I made a bold statement in a presentation this week about the performance cost of virtualization for big data deployments, and it is correct. That said, it is equally correct to say that, depending on the design and the performance requirements, the cost may not matter. Here is an excerpt from a paper I wrote last year on virtualization.

As we think about virtualization, each virtualized operating system, or guest, is hosted inside a virtual machine, and multiple guests share the same hardware, though none has exclusive access. This magic is managed by a hypervisor, which provides the virtual operating platform for each of the guests.

The hypervisor adds an extra layer between the hardware and the guest operating systems so that it can schedule and route system requests from applications executing simultaneously on the multiple guests. It is a complex task to ensure that everything reaches its destination at the right time without interference. Doing so efficiently is even harder, which is why virtualization has two important side effects:

    The hypervisor adds a measure of overhead to every hardware request. This is typically minimal, but you can only be sure by monitoring the overhead in your own environment.

    Hardware is shared and finite, and no guest is assured instant access to resources. Delayed availability of the CPU causes increased latency, known as steal time, which can seriously degrade performance and scalability in ways that standard utilization metrics do not generally measure. The same holds today for all resources: CPU, I/O, memory, and network.
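On Linux guests, steal time is one of the counters on the `cpu` line of /proc/stat (user, nice, system, idle, iowait, irq, softirq, steal, ...), so a quick check can be scripted. A minimal sketch, assuming that standard field layout; the function name is mine, not from any tool:

```python
def steal_percentage(stat_line):
    """Compute steal time as a percentage of total CPU time.

    `stat_line` is the aggregate "cpu ..." line from /proc/stat, whose
    fields are: user nice system idle iowait irq softirq steal ...
    """
    values = [int(v) for v in stat_line.split()[1:]]
    steal = values[7] if len(values) > 7 else 0  # 8th field is steal
    total = sum(values)
    return 100.0 * steal / total if total else 0.0

# On a live Linux guest you would read the real counters:
# with open("/proc/stat") as f:
#     print(steal_percentage(f.readline()))
```

Note that a single reading is cumulative since boot; for a live view, sample twice and compute the percentage over the deltas, or simply watch the %steal column in mpstat.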

As we assess, design, build, and deploy virtual environments with virtual I/O based solutions, there are performance expectations and requirements that must be met. These are the specifications and configurations customers need in order to meet the requirements of the workloads and SLAs being virtualized. I have seen a variety of methods used to analyze virtualized performance, and they all fall short on scale and balance. That does not mean a design fails to meet expectations; it means it requires continued rebalancing as it scales. The second challenge is multi-socket environments with a NUMA model, which adds additional configuration modeling, loading, and monitoring.
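To illustrate one piece of that NUMA configuration modeling: a common sizing question is whether a VM's vCPU allocation fits within a single NUMA node, since spanning nodes introduces remote-memory access penalties. A minimal sketch, where the function name and node layout are hypothetical, not taken from any vendor tool:

```python
def fits_single_node(vcpus, cpus_per_node):
    """Return True if the VM can be placed entirely on one NUMA node.

    vcpus: number of vCPUs requested for the VM.
    cpus_per_node: list of CPU counts available on each NUMA node.
    """
    return any(vcpus <= available for available in cpus_per_node)

# A 24-vCPU VM on a two-socket host with 16 CPUs per node must span
# nodes, so some of its memory accesses will be remote:
# fits_single_node(24, [16, 16]) -> False
```

Real placement also has to account for memory per node and the other guests already pinned there, which is exactly the ongoing modeling and monitoring burden described above.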

Virtual I/O was purpose-built to be good enough on a shared platform for server consolidation and virtualized workloads. A design principle of virtual I/O is to share a resource over a shared physical HBA/controller in a dynamic environment. This can be modified and enhanced by using direct I/O assigned to physical controllers to reduce multi-queued shared requests. No one should say that VIOS is slow; it has higher latency than bare metal or direct I/O, but the real question is whether it is fast enough for the SLAs.
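"Fast enough for the SLAs" can be made concrete by comparing measured latencies against a target percentile. A minimal sketch, assuming a nearest-rank percentile and an illustrative threshold; none of this comes from a vendor metric:

```python
def within_sla(latencies_ms, sla_ms, percentile=0.99):
    """Check whether the given percentile of observed latencies
    stays at or below the SLA threshold (nearest-rank method)."""
    if not latencies_ms:
        return True  # no observations, so nothing violates the SLA
    ordered = sorted(latencies_ms)
    rank = max(0, int(len(ordered) * percentile) - 1)
    return ordered[rank] <= sla_ms
```

Whether virtual I/O, direct I/O, or bare metal is required then falls out of the measurement against the SLA, not out of a blanket "virtualization is slow" claim.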

 

All that said, weighing the SLAs, workloads, and design together on managed virtual servers is the correct answer for your efforts. It is an "it depends" answer.
