As a performance tester I’m often asked to predict the behaviour of a client application based on test results. This is often difficult, and I was reminded of that recently when I saw an xkcd cartoon on extrapolation.
Often I’m asked to predict how many users a website will handle based on test results. For example, I may run a test on a cut-down version of a production system: the test system may have 2 webservers while the larger-scale production system has 5.
If our 2-server system can handle 200 users, why isn’t it safe to assume that the 5-server system will handle 500 users?
Using extrapolation to predict the scalability or performance of a system is rarely possible in performance testing. The diagram below illustrates some of the reasons why.
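One way to see why linear extrapolation fails is to model throughput with the Universal Scalability Law, which adds penalties for contention (serialised work, such as a shared database) and coherency (crosstalk between servers). This is an illustrative sketch, not a model of any real system from the article; the `sigma` and `kappa` coefficients are invented example values.

```python
def usl_throughput(n, lam=100.0, sigma=0.05, kappa=0.02):
    """Throughput for n servers under the Universal Scalability Law.

    lam   - throughput of a single server (e.g. users handled)
    sigma - contention penalty (serialised work, such as a shared DB)
    kappa - coherency penalty (crosstalk between servers)
    """
    return (lam * n) / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# With even modest penalties, 5 servers deliver far less than 5x
# the single-server throughput:
for n in (1, 2, 5):
    print(n, round(usl_throughput(n), 1))
```

With these example coefficients, doubling from 1 to 2 servers gains less than double the capacity, and 5 servers deliver barely 3x the single-server figure; which is precisely why “2 servers handle 200 users” does not imply “5 servers handle 500”.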
The “real” network is invariably more complex than the test environment: the network topology or the application architecture may differ. On top of this, the patterns of user behaviour may not match predictions, meaning that the performance tests were unrealistic.
A few key differences:
- Network connection speeds: some users connect via slower mobile networks, which can hold connections open longer and affect overall system performance. Mobile users may consume a disproportionate number of connections due to higher network latencies, while local users may connect at faster “LAN” speeds. I once worked in an office where a network upgrade brought the Exchange mail servers down: until the upgrade, the mail servers had their traffic “nicely throttled” by the slow network, and once this bottleneck was removed they couldn’t handle the higher throughput.
- Network load balancing: ideally this should be the same in production and test environments. In my experience it rarely is!
- Sources of load: In our example test environment, we may only simulate user load on the webserver and database server. But what about other load on these systems? In a world increasingly built on SOA principles, what else is communicating with your database servers, contributing to network traffic or accessing your shared storage?
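The connection-speed point above can be made concrete with Little’s Law: the average number of connections in flight equals the arrival rate multiplied by how long each connection is held. The request rates and hold times below are invented figures for illustration only.

```python
def concurrent_connections(requests_per_sec, seconds_held):
    """Average connections in flight (Little's Law: L = lambda x W)."""
    return requests_per_sec * seconds_held

# Same request rate, very different connection footprints:
lan = concurrent_connections(50, 0.2)    # fast LAN users, connections close quickly
mobile = concurrent_connections(50, 3.0)  # slow mobile users hold connections open
print(lan, mobile)
```

The same 50 requests per second tie up 10 connections for fast local users but 150 for slow mobile users, so a test driven entirely from a fast LAN can badly understate the connection load the production system will see.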
You may feel that in this article I’m arguing against performance testing. Far from it: performance testing is vital; I’m only highlighting some of the pitfalls. Testing will help you find and fix problems before your “real users” do, which is exactly what you want.
As well as more robust testing, you need to do the following:
- System monitoring: check real performance against predicted performance
- User monitoring: check real behaviour against predicted behaviour
- Repeat tests once you have this new data
- Test early and test often, repeating tests throughout the SDLC
- Test your application from “all angles”, consider the use of stubs, test harnesses or service virtualisation technology to supplement your performance test tool
This article was originally published at bish.co.uk and is reproduced here to reach a wider audience.