Why making a bad decision can be a good decision

Yesterday I read a great article on the Techwell website entitled "When Making the Worst Choice Makes Sense". The article describes how easy it is to be drawn into buying the latest gadget or fastest machine when often the second best will do.

This all made sense to me, but it got me thinking about a presentation I gave at the BCS SIGiST. The presentation was about cloud performance testing, and one of my slides described the criteria I use when choosing a performance test tool. All the reasons below help open your eyes to adequate "second best" options, but there are often plenty of reasons to consider paying that bit extra as well.

When the second best makes sense

The article mentions four scenarios where second best may be the preferred option.

  1. Lower up-front cost; it may not make sense to buy a sports car for a commute in rush-hour traffic
     
  2. Integration costs; if you already use one technology, more of the same may be preferable to a complex integration with something else
     
  3. Durability; a shiny glass iPhone is lovely, but don't use one on a building site
     
  4. Serviceability; buying a cheaper product with more expensive running costs or maintenance may cost more in time and money in the long run

In my presentation I talked about a list of features that I think should be considered when choosing a test tool. The most important criterion is "can it test my application?"

Once you've answered this question you can start to consider your first-choice vs. second-best options.

When it's worth paying a premium for "the best"

  1. Lower long-term costs; the best product is unlikely to be the cheapest, but it may save money in the long run. I've worked on many projects where an inferior test tool was chosen to fit within a budget, only to see the budget blown on expensive "bums on seats" trying to get the inferior tool to just "do the job".
     
  2. Integration costs; teaching users a new tool isn't cheap, and similarly, "shoe-horning" a Windows system into a UNIX environment (or vice versa) purely to support testing may be awkward.
     
  3. Durability; you need to be confident that your test tool is reliable and that it's telling you the truth. The tool needs to be stable, scalable and robust. Stray too far from the beaten path and you may find limited support or leave your results open to question.
     
  4. Serviceability; ease of use is essential. The faster scripts can be developed, the more time can be spent testing. Testing is when defects are found, and testers aren't really producing anything of value until those tests are running and analysis is taking place. The more expensive, established test tools often have greater market penetration and a higher number of testers* proficient in their use. This can reduce overall costs.

*Caveat emptor - as well as a higher number of people proficient in the tool, there may also be a greater number of people claiming to be proficient merely because they have heard of it.
