Using virtual machines as load generators

I recently attended HP Discover and virtualisation was a hot topic for many of the testers that I met there. Discussions ranged from virtualising test applications and virtualising services during tests to the virtualisation of test systems such as load generators and controllers. I was particularly interested in these discussions because my current client is embarking on a project to virtualise most of their hardware infrastructure (including load generators and test controllers).

I’m personally in favour of virtualisation: it has numerous benefits, including cost reduction, the ability to take snapshot backups of servers and the ability to duplicate or clone machines quickly if you suddenly need “more of the same”. I’ve used virtualised load generators on a number of client engagements without incident.

Despite this, many testing purists have concerns about virtualising load generators: they worry that VM performance is variable, that you’re adding “an unknown” into your tests (which is bad practice), and so on. These points are valid, but it is possible to take steps to mitigate the risks behind these concerns and still reap the benefits of virtualisation.

Here are the three main objections that I hear:

Objection 1 – “You don’t know what you’re getting with VMs”

This is particularly true if you’re using load generators on a public cloud platform like AWS. Performance is variable. The best way around this is to make sure that you “over-provision”. For example, if you normally use two physical load generators for 1000 web vUsers, find a similar spec of virtual machine and use three of them. They’re still cheap and you can quickly “spin up” one, two or ten additional machines (even in different geographies) if you choose to do so. You’re trading performance for flexibility and low cost.
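If you’re scripting this on AWS, the sketch below shows roughly what “spinning up” a batch of generators could look like with Python and boto3. The AMI ID, instance type, key pair and region are placeholders rather than recommendations; you’d substitute your own pre-built load generator image.

    # Sketch only: launch three load generator instances from a pre-built image.
    # All identifiers below (AMI, instance type, key name, region) are placeholders.
    import boto3

    ec2 = boto3.resource("ec2", region_name="eu-west-1")

    generators = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: your load generator AMI
        InstanceType="m5.large",          # placeholder size; over-provision rather than skimp
        MinCount=3,                       # three VMs where two physical boxes would normally do
        MaxCount=3,
        KeyName="loadgen-key",            # placeholder key pair
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "load-generator"}],
        }],
    )

    for instance in generators:
        print(instance.id)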

Objection 2 – “Performance of load generators is too variable”

This is also true. Performance will vary between (or even during) tests. This is only to be expected, because you don’t know whether the neighbouring VMs on your host are busy or quiet at the time of your test. Again, over-provisioning helps to mitigate the risk of “bad neighbours”, but another good piece of advice is to repeat your tests (ideally three times or more). This way you can see whether you have a statistical “outlier” before making a decision about the quality of your test or its results. You should be doing this anyway to look for statistical anomalies and to validate all your performance tests.
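As a rough, tool-agnostic illustration of that check, you could compare a headline figure (say the 90th percentile response time) across repeated runs and flag any run that sits well away from the others. The numbers and the 25% threshold below are invented for the example:

    # Sketch only: flag a repeated test run whose headline figure looks like an outlier.
    # 'runs' maps run name -> 90th percentile response time in seconds (made-up numbers).
    from statistics import median

    runs = {"run1": 2.1, "run2": 2.3, "run3": 3.9}

    med = median(runs.values())
    for name, p90 in runs.items():
        deviation = abs(p90 - med) / med
        # crude rule of thumb: more than 25% away from the median deserves a closer look
        if deviation > 0.25:
            print(f"{name}: p90 = {p90}s is {deviation:.0%} away from the median - investigate")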

Objections 1 and 2 basically boil down to the same thing: VMs introduce too much variability between tests. It’s hard to know how hard the VM host machine is working and the performance stats which come from the guest machine can’t be relied upon.

To avoid problems caused by this variability, as well as over-provisioning and repeating tests, you need to:

  • Monitor your VM host machine and guests; make sure that total CPU utilisation (on both host and guests) stays below 75-80%. Any more than this and you may be experiencing problems without knowing it. (This is true of physical load generators as well: you’re meant to be thrashing the system under test, not your test infrastructure.)
  • The same goes for network usage. Look at Mbit/s for your guests and make sure that none of them are more than about 60% of their theoretical maximum. Do the same for the host system
  • Look for correlations between changes in response times and changes in performance counters on your load generators in the same way that you look for correlations between performance counters on the system under test and response times
  • Measure key performance counters on the VM host as well as the guests and make sure that you aren’t seeing high CPU utilisation, high numbers of context switches, high memory use, excessive memory paging, or disk or processor queuing. In other words, put your test infrastructure under the same scrutiny as your system under test (a post-test sanity-check sketch follows this list)
  • If you’re using LoadRunner or Performance Center, it is worth knowing that SiteScope (which is included in your licence fee) includes agentless monitors for VMware hosts; make use of them
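Pulled together, a post-test sanity check over those counters might look something like the sketch below. The sample data, the 1 Gbit/s NIC assumption and the thresholds simply mirror the rules of thumb above; in practice the figures would come from perfmon, SiteScope, sar or whatever you already collect:

    # Sketch only: sanity-check load generator figures collected during a test.
    # Sample values are invented; in practice they come from your own monitoring.
    from statistics import correlation  # Python 3.10+

    GUEST_NIC_MBIT = 1000  # assumed theoretical NIC maximum for the guests

    # (avg response time s, LG CPU %, LG network Mbit/s) per measurement interval
    samples = [
        (1.2, 55, 310),
        (1.3, 62, 340),
        (2.9, 83, 650),
        (1.4, 60, 335),
    ]

    resp = [s[0] for s in samples]
    cpu = [s[1] for s in samples]
    net = [s[2] for s in samples]

    if max(cpu) > 80:
        print(f"LG CPU peaked at {max(cpu)}% - above the 75-80% comfort zone")
    if max(net) > 0.6 * GUEST_NIC_MBIT:
        print(f"LG network peaked at {max(net)} Mbit/s - above 60% of the NIC maximum")

    # If response times move with load generator CPU, suspect the generator,
    # not the system under test.
    print(f"Correlation between response time and LG CPU: {correlation(resp, cpu):.2f}")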

This should reduce the risks of virtualising load generators, but if you’re still concerned, run a test on VMs and then run the same test on physical load generators.

We just did this at a client site and we saw no measurable difference in our test results.

Objection 3 – Time synchronisation

One further problem in a virtualised performance test can be “clock drift”. When a VM starts up, it typically gets its time from the underlying host. If the clock times drift and the VM guest resynchronises its clock during a test, you may see strange test results. Clocks synchronising forward can increase apparent response times; clocks synchronising backwards can reduce them. Negative response times may be hard to explain in your test results! ;-)
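To see why this matters, imagine a home-grown harness (nothing to do with LoadRunner) that times each request with the wall clock: an NTP step mid-request distorts the measurement, whereas a monotonic clock is immune. A minimal Python illustration, with a placeholder URL:

    # Sketch only: time requests with a monotonic clock so that a wall-clock step
    # (e.g. an NTP resync on a VM guest) cannot inflate timings or make them negative.
    import time
    import urllib.request

    URL = "http://example.com/"  # placeholder endpoint

    start = time.monotonic()      # immune to wall-clock jumps, unlike time.time()
    urllib.request.urlopen(URL).read()
    elapsed = time.monotonic() - start

    print(f"Response time: {elapsed:.3f}s")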

You can avoid this problem by synchronising the time on all machines (physical and virtual) to the same time source.

Alternatively, in some cases (such as HP LoadRunner) you can alter the configuration of the load generator so that it uses a consistent timer and doesn’t “jump”. For example, on a VMware-hosted LoadRunner load generator, you could make the following configuration change:

  • Find the LoadRunner Load Generator “installation” folder.
  • Open the file “configm_agent_attribs.cfg” in a text editor.
  • Find the section "[General]". (If this does not exist, create one.)
  • Append "VmWareTimeSupport=true" to the [General] section.
  • Restart your magent.exe process.
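If the edit has been applied correctly, the relevant part of the file should contain something like this (the exact surrounding contents will vary):

    [General]
    VmWareTimeSupport=true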

After this modification, the load generator will use the host machine time instead of the system time of the virtual machine. (Watch out for GMT/DST discrepancies though!)
(This tip came from Mark Tomlinson via LinkedIn, who in turn credited the original source, Petar Puskarich.)

More information
For more guidance on using virtualised load generators, watch Mark Tomlinson in action in the YouTube video “Tips for running Virtual Load Generators”, or listen to Mark Tomlinson and James Pulley’s PerfBytes podcast, “Performance in the virtualised world”.

I’d really appreciate your feedback on this as well as any observations that you may have about virtualisation in testing.
This really seems to be a hot topic and I’m sure that it’ll be featured in future blog posts!
