In some recent performance tests of a video-streaming service for a UK university, we were reminded just how important the network is to good application performance. In the era of fibre broadband this is easy to forget, but network bandwidth still needs to be part of your capacity-planning calculations.
As performance testers, we're often asked to run tests that help capacity planners understand the theoretical limits of an application. This type of testing is designed to stress the application and database servers, and we often find performance bottlenecks that need to be resolved before an application can be declared fit for purpose. More often than not, poor performance is related to application and database configuration issues. However, as performance testers we need to keep our eyes open for other kinds of bottleneck.
How we tested video streaming
Trust IV was recently asked to run some performance tests to determine the capacity of a video streaming service designed to deliver lecture recordings to university students. The system that we were testing was hosted at an external video streaming provider and we needed to test the performance for students at multiple university locations within the UK.
Despite some initial difficulties, we managed to produce scripts which downloaded the video blocks in the same way that a real client did. We looked at the typical client-server traffic and found that, before the video started to stream, the video server sent an HTTP response containing an index. The index listed all of the video blocks and their lengths, and gave details of the different video resolutions available.
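The exact manifest format is provider-specific, so the JSON layout below (block URLs, sizes and durations) is purely illustrative, not the real index. A minimal Python sketch of the parsing step might look like this:

```python
import json

# Hypothetical index layout -- the real manifest format is provider-specific.
sample_index = json.dumps({
    "resolutions": ["low", "medium", "high"],
    "blocks": [
        {"url": "/video/high/block0001", "bytes": 375000, "duration_s": 2.0},
        {"url": "/video/high/block0002", "bytes": 380000, "duration_s": 2.0},
    ],
})

def parse_index(raw):
    """Return the list of block descriptors to request sequentially."""
    index = json.loads(raw)
    return index["blocks"]

blocks = parse_index(sample_index)
total_bytes = sum(b["bytes"] for b in blocks)
print(len(blocks), total_bytes)  # 2 755000
```

Once the index is parsed, the script simply iterates over the block list and requests each URL in turn, mirroring the browser client.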
Video resolutions could be set to Automatic, High, Medium or Low quality by the client, and the audio stream was also sent as separate blocks. For our tests, we simulated the high-bandwidth video stream.
We wrote a function that parsed the index and then looped through the blocks of video, requesting each one sequentially in the same way as the browser client. By counting the number of blocks and their size, we were able to determine whether each client was receiving an uninterrupted video stream (i.e. downloading blocks at a faster rate than they could be played back) or whether buffering was occurring. We determined that the high-bandwidth video stream needed around 1.5 Mbits/s of bandwidth per user to deliver uninterrupted high-quality video content.
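The buffering check reduces to a simple comparison: a block must download faster than it plays back. A minimal sketch of that test (the 2-second block duration is an assumed figure for illustration; the ~1.5 Mbit/s bitrate is from our measurements):

```python
def buffering(block_bits, block_duration_s, download_rate_bps):
    """True if the client cannot fetch the block before it finishes
    playing back, i.e. the stream stalls and rebuffers."""
    download_time_s = block_bits / download_rate_bps
    return download_time_s > block_duration_s

# A 2-second block of the ~1.5 Mbit/s high-quality stream:
block_bits = 1.5e6 * 2

print(buffering(block_bits, 2.0, 2.0e6))  # False: 2 Mbit/s keeps up
print(buffering(block_bits, 2.0, 1.0e6))  # True: 1 Mbit/s stalls
```

In practice the script applied this comparison cumulatively across the whole block sequence, so a slow block could be absorbed by earlier fast ones, just as a player's buffer would absorb it.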
During script development, we added user-defined data points to our LoadRunner script so that we could determine the download rate in bits/s on a per-user and per-site basis. This allowed us to identify the point at which buffering occurred for each of our customer sites.
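The value recorded by each data point was simply the observed download rate. The calculation is sketched below in Python (in the actual script this value was emitted as a LoadRunner user-defined data point; the block size and timing figures here are illustrative):

```python
def download_rate_bps(bytes_downloaded, elapsed_s):
    """Observed download rate in bits/s -- the per-user metric we
    recorded as a user-defined data point."""
    return bytes_downloaded * 8 / elapsed_s

# e.g. a 375,000-byte block fetched in 1.5 seconds:
print(download_rate_bps(375000, 1.5))  # 2000000.0 bits/s
```

Plotting this metric per user and per site is what made the onset of buffering visible: the rate for affected users falls below the ~1.5 Mbit/s the high-quality stream requires.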
Our test produced some interesting results. We ramped up to a load simulating around 400 users; the output is shown below. The green line shows the number of users increasing in steps, and the thin red line shows the network throughput.
It is apparent from the graph that the network reached its maximum throughput in the first few minutes of the test, with fewer than 100 users streaming video. Despite users being added at 15-minute intervals, overall network throughput did not increase, indicating a network bottleneck.
When we looked at our results, we could see that peak network throughput for the test overall was approximately 20 Mbytes/s. Converting this to Mbits/s (network engineers prefer bits/s to bytes/s) gave a peak network throughput of approximately 160 Mbit/s.
Once we gave the network team this news, the bottleneck was immediately apparent: the university's Internet bandwidth was 200 Mbit/s, and we were approaching the capacity of the link. Despite the application servers performing well, the network was this project's Achilles heel.
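The back-of-the-envelope arithmetic is worth spelling out, since it gives a theoretical ceiling on concurrent viewers (using the 200 Mbit/s link figure and our measured ~1.5 Mbit/s per-user requirement):

```python
# Capacity check using figures from the test and the network team.
link_mbps = 200                 # university Internet link capacity
observed_peak_mbps = 20 * 8     # 20 Mbytes/s measured = 160 Mbit/s
per_user_mbps = 1.5             # high-quality stream requirement per user

# Theoretical ceiling on uninterrupted high-quality streams:
max_streams = int(link_mbps / per_user_mbps)
print(observed_peak_mbps, max_streams)  # 160 133
```

So even in the best case, the link could sustain only around 133 uninterrupted high-quality streams, well short of the 400 users the service was expected to support.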