Behind The Cloud
Information is everywhere, but insight is all too rare. When it comes to cloud computing, every IT professional has an opinion, but only a few have really understood it.
If you google “Cloud Computing”, you will find results full of definitions, features, and characteristics of the Cloud, but it is still difficult to understand why we need the Cloud and what problem it solves.
In the Beginning:
Back in the 1990s, the client-server era was booming, and servers and desktops led to distributed computing. Server-based applications had clients, and each such application was designed to require an entire physical server, because back then servers were slower, with less CPU, less memory, and fewer resources than the servers we use today. This created a one-to-one mapping issue: one application to one OS to one server.
Organisations generally hosted one application per server to avoid the risk of a vulnerability in one application affecting the availability of another application on the same server.
Whenever there was a new application to host in the production environment, it required adding one more physical server to the Data Center.
So the number of physical servers in the Data Center kept increasing, which was one of the causes of the problem. On the other hand, companies like Intel were creating faster and faster CPUs with more memory, and over time the industry started buying more powerful servers. The growth in server and desktop deployments brought new IT infrastructure and operational challenges. These challenges included:
• Application upgrades: Upgrading a particular application required rebooting the server, and the reboot should not affect the other applications on it.
• Increased physical infrastructure cost: As more and more applications came up, the requirement for servers also increased, and Data Centers were running out of space, so more Data Centers had to be built. More servers in the Data Center meant more power and cooling, and hence more electricity; all of this increased costs drastically.
• Increased operational cost: With more servers and a more complex environment, more specialised engineers were required to maintain the Data Center.
• Time management: Due to the lack of automation at the time, organisations spent disproportionate time and resources on manual tasks connected with server support, and thus required a larger workforce to complete these tasks. Imagine 500 physical servers: applying patch updates, tracing cables, upgrading firmware.
• Powerful servers with low consumption: Companies like Intel and AMD came up with more powerful CPUs, and eventually applications were consuming relatively fewer resources, resulting in extremely low server utilization. So we got these massive Data Centers running all these servers, many of them sitting idle at 20-30% utilization.
• New business model: The business model was changing; businesses wanted their applications up and running in a day or two, and could not wait 3 weeks to get a physical server.
All these challenges created the need for a change, i.e. a new era of computing which we today call "Cloud Computing".
Innovative Thinking:
Applications were running on an OS, and the OS was a kind of box with its own security, applications, databases, network, and storage associated with it. So the industry started thinking about how to decouple the OS from the physical hardware. To answer this question, the IT industry came up with the concept of the Hypervisor.
This new layer of software makes it possible to install multiple operating systems on a single physical machine.
Find below the diagram showing the evolution from physical machine to virtual machine.
Each OS had its own dedicated CPU, memory, and storage, with separate security parameters; this brought a big change to server computing. If one server was under-utilized at, say, 20%, then with a hypervisor layer multiple OSes could be installed on it, each running different applications, which increased the utilization of the server.
So in a Data Center with 200 physical machines, virtualization makes it possible to host 600+ virtual machines on the same setup.
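The consolidation arithmetic above can be sketched roughly in code. This is a back-of-the-envelope estimate, not a sizing tool: the 20% per-application utilization and the 70% target utilization per host are illustrative assumptions, not figures from a real Data Center.

```python
# Rough consolidation estimate: if each physical server runs one app at
# ~20% utilization, how many VMs can the same fleet host when each
# hypervisor host is packed to a safer ~70% target utilization?

def consolidation_estimate(physical_servers, per_app_utilization, target_utilization):
    # Each VM still needs roughly per_app_utilization of a host's capacity,
    # so one host can pack target_utilization / per_app_utilization VMs.
    vms_per_host = int(target_utilization / per_app_utilization)
    return physical_servers * vms_per_host

# 200 physical machines, apps using ~20% each, hosts packed to ~70%:
print(consolidation_estimate(200, 0.20, 0.70))  # -> 600
```

With these assumed numbers, 200 physical machines yield roughly 600 virtual machines, in line with the "600+" figure above; real consolidation ratios depend on actual workload profiles.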
Each server running on this new layer is called a Virtual Machine. These virtual machines can be hosted on-premises in a private cloud or on a public cloud. In the private case, companies have to buy and install the hypervisor in their local Data Center, whereas companies like Amazon (AWS) provide a mechanism to host these virtual machines in their Data Centers, i.e. on a public cloud, where they are available as a service.
The public cloud is the first choice for small and medium companies.
With virtual servers, deployment time has drastically decreased from 3-4 weeks to 15-20 minutes.
This also brings a new discipline of cloud-aware application development. Now you can use these virtual machines in the cloud to speed up deployment and time to market for your applications. Over the last decade, virtualization has evolved further, from virtual machines to containers.
With exciting times ahead, it is the ideal opportunity for both developers and IT professionals to start taking a look at containers.
Stay tuned for more in-depth information.