It crossed my mind that you should regularly check that your customer feels they are getting value for money. For testing, however, how do they know what value for money is? On the one hand they can compare service, people and costs with the market and form a view. But thinking more fundamentally, how much should you be spending on testing?
For functional testing there are some good metrics available: for example, the number of defects found in each project phase (unit test, system test, live, etc.), the relationships between those counts, and the 'size' of the application.
Non-functional testing should be about risk mitigation. Specifically, it mitigates risks that are typically low probability but high impact. For example: 'There is a risk that our website will not be able to cope with peak loads and will leave potential customers unable to do business with us'. Non-functional 'defects' are therefore thankfully rare.
This is reassuring, but it means the metrics become statistically meaningless. You find yourself doing a lot of testing and finding very few problems, but the ones you do find are potentially catastrophic.
I do not have a succinct answer, but I feel it lies in quantifying the risk: give the probability and the impact realistic numeric values, multiply them together, and balance the product against the preparedness (or otherwise) of the business to take risk.
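The calculation above can be sketched as follows. All figures here are hypothetical illustrations chosen to make the arithmetic clear, not real data, and the risk-appetite threshold is an assumption about how a business might frame its tolerance:

```python
# A minimal sketch of the probability-times-impact calculation.
# All numeric values are hypothetical, for illustration only.

def expected_loss(probability: float, impact: float) -> float:
    """Expected loss: probability of the event times its cost."""
    return probability * impact

# Risk: peak load makes the website unavailable to customers.
p_outage = 0.02           # assumed 2% chance per year
cost_of_outage = 500_000  # assumed lost business per incident

loss = expected_loss(p_outage, cost_of_outage)  # 10,000

# Hypothetical amount of expected loss the business will tolerate.
risk_appetite = 5_000

# If the expected loss exceeds the appetite, spending on
# non-functional testing up to roughly the difference could
# be justified as mitigation.
justified_spend = max(0.0, loss - risk_appetite)
print(justified_spend)  # 5000.0
```

In practice the hard part is not the multiplication but agreeing on realistic values for the probability and the impact in the first place.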