
A day in the life of a forgetful performance engineer

In early December, I was travelling down to the HPE Discover conference in London. I think I’d allowed myself to be slightly less organised than normal because, rather than some far-flung destination, I was heading to London on the train. This lack of prior planning and preparation caught me out when I realised that I’d managed to leave my laptop power supply at home.

I was travelling on Saturday afternoon and realised that I faced two days of meetings before the conference started without my laptop. I knew that having one posted to me wasn’t going to happen over the weekend, so I set about looking for one that I could buy online for immediate delivery.

My usual source of emergency, next-day deliveries (Amazon Prime) didn’t have what I needed, so I started to look elsewhere. I came across a switchable, universal laptop power supply on the Maplin website. With some apprehension, I looked to see how quickly I could arrange a delivery.

I was amazed to see that the power supply I wanted was in stock at the store nearest to my London hotel, and that they could deliver that day using the Shutl delivery service. My only problem, it seemed, was that the next delivery slot meant the power supply would arrive at my hotel before I did! I chose a later, timed delivery slot and was able to use my laptop with confidence, knowing that by the time the battery ran out, my replacement power supply would have arrived.

As I pondered my own forgetfulness, my thoughts switched (as any good tester’s would) to the infrastructure that had allowed this to happen.

Conventional testing wisdom suggests that the order process is a simple, linear flow from customer to retailer. But then I considered the mobile Wifi network on the train, the postcode lookups, the payment and stock-control systems, and the various third parties whose interconnected applications had supported the order process. Even a cursory evaluation of the systems involved produced a complex infrastructure diagram.

As I ran through these systems in my head I started to think about how I might go about testing the end-to-end order process for a mobile user like me.

The application infrastructure

I had used an iPad connected to the train’s Wifi signal to place my order, and despite experiencing a few signal drop-outs, which meant that I had to refresh or reload a few pages, the session handling worked well and my order was placed successfully.

Beyond the train and its intermittent Wifi signal, the application infrastructure became even more complex. I started to think of the interconnections between each of the different systems which had helped to process my order.

A simple list of the third-parties involved in the order process was starting to develop:

 Supplier: Role

 Virgin Trains Wifi: network connectivity
 EE: network provider for Virgin Trains
 Maplin: eCommerce website
 Maplin store in Docklands: EPOS systems, inventory etc.
 Experian: credit / security checks
 SagePay: payment processing
 MasterCard: payment processing
 M&S Bank: credit card provider
 Shutl: logistics and delivery tracking
 Local taxi company: logistics
 Google Analytics and AdWords: user tracking etc.
 Answers (Royal Mail): postcode database & address lookup

It’s highly likely that this isn’t an exhaustive list. What impressed me is that all of this infrastructure worked when it needed to, and that it was all accessed from a mobile device hurtling towards London over a shared mobile data connection on a fast-moving train.

This begs the question:
Just how do you test this type of application?

End-to-end testing can be impractical (most test environments don’t include train Wifi) and can be costly due to the complex network and application interdependencies. As well as this, it is unlikely that all the application components, including third-party APIs, will be available at all times in your test environment. Some third-party components may charge on a per-use basis and test systems provided by these third-parties may not scale as well as their production-sized equivalents.

Virtualisation
The answer to these testing difficulties lies in virtualisation. Virtualised networks can simulate poor network performance by introducing latency, limited bandwidth, packet loss or jitter (variable delay, which can leave packets arriving out of sequence). Third-party services can also be virtualised, allowing manual and automated tests to be repeated under different conditions to explore the potential functional and performance breaking points of your application.
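As a concrete sketch of what virtualising a third-party service can look like, the snippet below stands up a local stub of a payment API with configurable latency and failure injection. Everything here is illustrative: the `/payments` endpoint, the `AUTHORISED` response and the tuning knobs are invented for the example, not any real provider’s interface.

```python
import json
import random
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Knobs for the virtualised service (hypothetical, for illustration).
LATENCY_SECONDS = 0.2   # injected delay on every request
FAILURE_RATE = 0.0      # fraction of requests that return HTTP 503

class StubPaymentHandler(BaseHTTPRequestHandler):
    """Stands in for a third-party payment API during testing."""

    def do_POST(self):
        # Consume the request body so the connection stays well-behaved.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)

        time.sleep(LATENCY_SECONDS)          # simulate a slow third party
        if random.random() < FAILURE_RATE:   # simulate intermittent outage
            self.send_response(503)
            self.end_headers()
            return

        body = json.dumps({"status": "AUTHORISED"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_stub(port=0):
    """Start the stub on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), StubPaymentHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_stub()
    port = server.server_address[1]
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/payments", data=b"{}", method="POST")
    start = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    elapsed = time.monotonic() - start
    print(payload["status"], elapsed >= LATENCY_SECONDS)
```

Raising `LATENCY_SECONDS` or `FAILURE_RATE` lets the same automated test suite probe timeout handling and retry logic without touching (or paying for) the real third-party system.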

Maplin’s website did a great job of handling my poor connectivity and their systems obviously work well. Could you be sure that your systems work as well as theirs did? The chances are that if you aren’t virtualising in your test environments, you aren’t going to find problems before your users do.
