There is an old adage that you cannot fit a quart into a pint pot. In Imperial measurement, for those of us without a pre-decimalisation education, one quart is equal to two pints. However, in today's computing parlance, do it in the Cloud and you can.
A week or so ago Richard and I attended Microsoft's partner presentation of their Server 2012 product. The technical stream was delivered by the very enthusiastic and knowledgeable Microsoft Technical Evangelist, Andrew Fryer. The purpose of the roadshow was to showcase the capabilities of Server 2012, and Andrew's presentation gave an overview of the server management features and capabilities on offer.
One major area of investment by Microsoft from Server 2008 R2 to Server 2012 is Hyper-V, Microsoft's own virtualisation hypervisor. This platform has for some time powered Microsoft's own Cloud offering, Azure.
The demonstration of Hyper-V in 2012 included moving a virtual machine from host to host over a network crossover cable. The move was made with the machine still running, and the transition's impact on a simple response test was the loss of just 2 ping packets during the 5 minute or so move. It is fair to say that dynamic guest management isn't new, but Microsoft are certainly giving VMware a run for their money.
So how do we get the quart into a pint pot? Virtualisation technologies benefit on-premises IT shops and service providers alike by apportioning the resources of a host computer using thin provisioning and the sharing of CPU and memory. In real terms this means a computer with, say, 4 CPU cores can spread them across a number of guest virtual servers, each one believing it has up to 4 cores of its own. The CPU computational cycles are then handed out and managed by the hypervisor layer. Disks are allocated as space, which can be set to grow as the guest computers demand more. Andrew demonstrated 4 x 20 Terabyte drives being provisioned and virtualised in a redundant configuration on a laptop.
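The arithmetic behind this kind of over-allocation is simple enough to sketch. The figures below are illustrative only (they are not the numbers from Andrew's demo), but they show how a hypervisor can hand out far more virtual CPUs than the host physically has:

```python
# Hypothetical sketch: vCPU oversubscription on a single host.
# A 4-core host presents 4 vCPUs to each of 5 guests; the hypervisor
# schedules real CPU cycles among them on demand.

physical_cores = 4
guests = 5
vcpus_per_guest = 4  # each guest "thinks" it has 4 CPUs

total_vcpus = guests * vcpus_per_guest
overcommit_ratio = total_vcpus / physical_cores

print(f"vCPUs presented to guests: {total_vcpus}")
print(f"Overcommit ratio: {overcommit_ratio}:1")
```

Run with these numbers, the host is presenting 20 vCPUs on 4 physical cores, a 5:1 overcommit ratio. Thin-provisioned disks work on the same principle: the space promised to guests can exceed what physically exists, as long as they don't all consume it at once.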
Obviously the cost saving in computer tin and power being leveraged here is significant. But effective virtualisation relies on the base assumption that the systems hosted in this way do not all need their allotted CPU, memory or disk at the same time. If they do, the inevitable will happen: the pot will overflow.
We see this in our work in Performance and Non-Functional testing services. In virtual environments, over-provisioning and over-committing of resources can cripple a system hosted on the platform. The risk can be readily managed through good capacity planning, demand management and testing. When a system is being implemented, the design team needs to understand the non-functional requirements and share them with the testing and operations teams, so the platform can be tested and delivered appropriately.
So with virtualisation, one quart can fit into a pint pot. Do it properly and you might even stretch to a full yard (of ale).
To follow Andrew's blog, click here. Thanks to BeerTown, Austin for the image.