I remember my first day as a performance tester. My mentor gave me many useful tips that day, one of which was "never record in Click-and-Script mode, always record in URL mode".
You see, LoadRunner has two modes it can use to simulate users on a site: a high-level mode, Click-and-Script, that simulates user journeys (click this button, fill in this form, etc.), and a low-level mode, URL mode, that replays individual HTTP requests (send a GET request to /index.html, etc.).
Now, the advice my mentor gave me was definitely good advice at the time, especially for a neophyte performance tester. However, four years later, I'm starting to think that Click-and-Script, done right, can also be a good choice.
Conventional wisdom says that URL based scripts are less likely to break if the application changes. That makes a certain amount of sense. It's well known that Cool URIs Don't Change, since users need to be able to bookmark pages.
Except that not all URIs are cool. Nobody actually bookmarks a page in the middle of a checkout process, for example. In practice, users don't especially care if the underlying technical processes have changed, as long as they can click the big green "buy" button, and buy their product. And indeed, from a user experience point of view, it's generally a bad idea to change the user journey without a good reason - look at the fuss people kick up whenever Facebook introduces a "new facebook" experience.
So if user experience doesn't change, but URLs can, then why do URL based scripts break less often?
I think part of the reason is that a lot of Click-and-Script scripts are badly written. They'll often do things like "click the 11th link on the page", because this is the quickest way to get the script working, but such a script will break as soon as someone adds another link. A real user, by contrast, will 'click the big green "buy" button'. And indeed, my experience is that if you write scripts that act like the user (using well-chosen CSS selectors, for example), they break a lot less often.
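To make that concrete, here's a minimal, framework-free sketch in Python. The page, the CSS class, and both helper functions are invented for illustration; real tools express this with CSS selectors, but the contrast between a positional locator and a role-based one is the same:

```python
from html.parser import HTMLParser

# A toy page: two navigation links, then the "buy" button-link.
HTML = """
<html><body>
  <a href="/home">Home</a>
  <a href="/offers">Offers</a>
  <a href="/checkout/payment" class="btn-buy">Buy</a>
</body></html>
"""

class LinkCollector(HTMLParser):
    """Collects (href, css_class, text) for every <a> tag on the page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            self._current = [attrs.get("href"), attrs.get("class"), ""]

    def handle_data(self, data):
        if self._current is not None:
            self._current[2] += data.strip()

    def handle_endtag(self, tag):
        if tag == "a" and self._current is not None:
            self.links.append(tuple(self._current))
            self._current = None

def link_by_position(html, n):
    """Brittle: 'click the nth link' breaks as soon as a link is
    added or removed anywhere before position n."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links[n][0]

def link_by_class(html, css_class):
    """Robust: find the link the way a user would -- by its role on
    the page -- and fail loudly if it's gone."""
    parser = LinkCollector()
    parser.feed(html)
    for href, cls, _text in parser.links:
        if cls == css_class:
            return href
    raise AssertionError(f"no link with class {css_class!r} -- the page has changed")
```

Both locators find the same URL today, but add one more navigation link and only `link_by_position` silently points at the wrong place.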
Brittleness Can Be Good
Whilst it's sometimes handy that Cool URIs Don't Change, it also undermines URL-based scripting, in a subtle way.
Let's suppose that our developers have just added an extra step to the checkout process, between the "view basket" and "payment" pages, where the user can add gift-wrapping to their order. If we used URL-based scripting, then it's possible that our scripts will still "work". They'll jump straight from the "view basket" page to the "payment" page, because the payment URL hasn't changed (even though there's no longer a link to it from "view basket").
The "best practice" way of dealing with this is to ensure that after any changes are made to a system, the testers re-record their scripts and compare them. This is error-prone, and it makes a mockery of the idea that URL-based scripts are more stable.
By contrast, if your script looks for the big green "Payment" button, then it'll break in this situation. But that's good. It means that you know the process has changed, just by running your script.
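As a sketch of that difference, here's a toy model in Python. The pages, button labels, and structure are all invented; the point is only that the URL-based script keeps "passing" after the gift-wrap step is inserted, while the user-action script fails at the missing button:

```python
# Toy model of a site: each page maps its visible buttons to the
# page they lead to. All page and button names are illustrative.
SITE_BEFORE = {
    "view_basket":  {"Payment": "payment"},
    "payment":      {"Confirm": "confirmation"},
    "confirmation": {},
}

# Developers insert a gift-wrapping step between basket and payment.
SITE_AFTER = {
    "view_basket":  {"Add gift wrap?": "gift_wrap"},
    "gift_wrap":    {"Payment": "payment"},
    "payment":      {"Confirm": "confirmation"},
    "confirmation": {},
}

def url_script(site):
    """URL-based: request each page directly. Still 'passes' on the
    changed site, because the payment URL itself hasn't changed."""
    return all(page in site for page in ("view_basket", "payment", "confirmation"))

def click(site, page, label):
    """User-action-based step: click the named button, or fail loudly."""
    if label not in site[page]:
        raise AssertionError(f"no {label!r} button on {page!r} -- the journey has changed")
    return site[page][label]

def user_script(site):
    """Walk the journey the way a user would."""
    page = "view_basket"
    page = click(site, page, "Payment")
    page = click(site, page, "Confirm")
    return page
```

Running `user_script` against `SITE_AFTER` raises immediately at the basket page, which is exactly the early warning you want.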
In most software development teams, it's seen as good practice to re-use code rather than re-write it every time. (I say most, because some unscrupulous consultancies, paid by the man-hour, will happily re-invent the wheel on every engagement.) Whilst testing is sometimes less mature than software development, testers are starting to play with code re-use in various guises: Behaviour-Driven Development probably has the most buzz in the open-source community, and tool vendors are increasingly marketing their code re-use capabilities under names such as Model-Based Test Automation.
My experience is that user-action-based scripts are more amenable to re-use than URL-based scripts. For example, you sometimes find that there are two routes to the "same" page. From the user's perspective, they're the same page, but there may be hidden elements on the page, that store the story of how the user got here. With URL-based scripting, you need to think about all the possible journeys that could bring you to a particular point, which tends to limit code re-use, but with user-action-based scripting, you only need to consider what the user will do next.
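Here's a tiny Python sketch of the hidden-field point (the page structures and field names are invented, not from any real framework): a single user-action-style "submit" step echoes back whatever hidden fields the current page carries, so it works unchanged on every route, whereas a URL-based script would need one hard-coded request per journey:

```python
# Two routes converge on the "same" payment form; the only difference
# is a hidden field recording how the user got there. All names here
# are illustrative.
ROUTE_FROM_BASKET = {
    "action": "/pay",
    "hidden": {"journey": "basket", "token": "abc123"},
}
ROUTE_FROM_WISHLIST = {
    "action": "/pay",
    "hidden": {"journey": "wishlist", "token": "xyz789"},
}

def submit_form(page, **user_fields):
    """User-action style: post the user's input plus whatever hidden
    fields the page happens to carry. One re-usable step, every route."""
    params = dict(page["hidden"])
    params.update(user_fields)
    return (page["action"], params)

# URL-based style would instead need a hard-coded request per journey,
# each duplicating the hidden-field values for that route:
#   post("/pay", journey="basket",   token="abc123", card="4111...")
#   post("/pay", journey="wishlist", token="xyz789", card="4111...")
```

The `submit_form` step never needs to know which journey brought the user here, which is what makes it re-usable.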
I think this may explain why code re-use methodologies like BDD have had better take-up amongst functional testers than performance testers. Most functional test automation frameworks are user-action-based, which makes their code much easier to re-use.
I had an interesting experience with user-action-based scripting at a client. We were called in to take over maintenance of a test pack. The client wasn't using LoadRunner - they'd settled on an open-source framework, Gatling. Now, Gatling doesn't support user-action-based scripting out of the box, but their site had a lot of converging paths and made heavy use of hidden fields, and their existing staff were avid code re-users, so Click-and-Script seemed like a good fit.
In practice, it was actually quicker for us to implement a plugin for Gatling that enabled user-action-based scripting, than to try to bring the scripts up-to-scratch as URL-based scripts. We open-sourced the Gatling plugin.
Click-and-Script has a bad reputation, but a large part of that is down to lazy testers. A good tester should understand the capabilities of their tools: how to play to a tool's strengths, how to work around its shortcomings, and how to avoid its pitfalls. And Click-and-Script is a useful tool to have in your toolbox.