Tooling: APM and Performance Testing
Tooling for application performance testing and application performance monitoring (APM) is frequently mixed up. To position the two correctly it is important to distinguish the stages of an application's lifecycle. One may of course slice it differently, but I like to split it first into pre-deployment and post-deployment, with pre-deployment divided into 1. Requirements (specification), 2. Architecture / (high-level) Design, 3. Construction and 4. Implementation. In Agile application development these stages are passed through repeatedly in short cycles.
In the pre-deployment stages we try to foresee (or predict) the performance of an application. At that point, intelligence about how the end users consume the application and about its performance is not yet available. Predicting application performance is therefore done on two tracks: the user behaviour is predicted and cast into a load model, whilst the application performance is assessed via LST (Load and Stress Testing) or its model-driven counterpart.
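To make the two tracks concrete, here is a minimal sketch of a load model driving a load test. The transaction names, weights, user count and the stubbed transaction body are all hypothetical illustrations; a real test would replace `execute_transaction` with calls to the system under test.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical load model: predicted end-user behaviour cast into a
# weighted transaction mix (weights are relative frequencies).
LOAD_MODEL = {"login": 10, "search": 60, "checkout": 30}

def execute_transaction(name):
    """Stand-in for a real request; replace with a call to the system under test."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated service time

def virtual_user(iterations, results):
    """One closed-loop virtual user drawing transactions from the load model."""
    names = list(LOAD_MODEL)
    weights = list(LOAD_MODEL.values())
    for _ in range(iterations):
        name = random.choices(names, weights=weights)[0]
        start = time.perf_counter()
        execute_transaction(name)
        results.setdefault(name, []).append(time.perf_counter() - start)

results = {}
with ThreadPoolExecutor(max_workers=5) as pool:  # 5 concurrent virtual users
    for _ in range(5):
        pool.submit(virtual_user, 20, results)

for name, times in sorted(results.items()):
    print(f"{name}: n={len(times)} median={statistics.median(times) * 1000:.1f} ms")
```

The point of the sketch is the separation of concerns: the load model encodes predicted user behaviour, while the measured response times per transaction type are the raw material for the performance assessment.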
Post-deployment is the domain of monitoring. Both the way the application is used by the end users and the application's performance can be observed with application performance monitors. A few decent application performance monitors saw the light of day in the last decade. They not only produce meaningful statistics, but also offer the capability to quickly find the root cause of a performance problem. Many others did (or still do) produce a lot of figure porridge and meaningless statistics, but hardly any insight.
Since load & stress testing tooling produces hardly any application intelligence, APM is frequently applied in combination with it to fill that gap, at least by businesses that can afford it. As a performance tester you run your tests, pretend that the test environment is production, use your application performance monitor, and voilà.
Performance testing is traditionally done in a waterfall fashion, in the implementation stage of the application. Though this is not a very favourable method, it long seemed the only way. Its main disadvantage is a feedback cycle to the developers of several months. I do not know whether I am exemplary for others, but after two weeks I start to forget how I constructed a program; when I have to change it later, it takes a while before I understand my own code again and can apply the change. Moreover, the changes a performance test recommends may require a complete redesign. That will not happen very often, but when it happens on a mission-critical application, time to value matters and the delay may cause a lot of financial damage to a business. In light of the increasing use of short-cycled software development methods and continuous delivery, performance testing faces the challenge of moving backwards into the Construction stage. Finding the right method and tooling for that is not so easy.
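One shape such early, short-cycled performance feedback can take is a performance budget checked inside the build itself, so a developer hears about a regression within minutes rather than months. The budget value, function name and workload below are hypothetical illustrations, not a prescribed method.

```python
import time

BUDGET_SECONDS = 0.05  # assumed budget for this operation, agreed with the team

def operation_under_test():
    """Stand-in for the code path whose performance we want to guard."""
    total = 0
    for i in range(100_000):
        total += i
    return total

def check_performance_budget(func, budget, repetitions=5):
    """Time func several times and compare the best run against the budget.

    Best-of-N is used to reduce noise on shared build agents; it guards
    against genuine slowdowns, not against occasional scheduling jitter.
    """
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        func()
        timings.append(time.perf_counter() - start)
    best = min(timings)
    return best, best <= budget

elapsed, within_budget = check_performance_budget(operation_under_test, BUDGET_SECONDS)
print(f"best run {elapsed * 1000:.2f} ms, within budget: {within_budget}")
```

A check like this does not replace a full load test; it merely shortens the feedback loop for the performance aspects that can be measured at construction time.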
Performance testing is transaction-aware by nature: the performance of each individual transaction type of the application is assessed under varying conditions. This is not equally self-evident for APM, since our IT industry largely ignores the concept of a transaction.
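What "transaction-aware" means in practice can be sketched as grouping raw measurements by transaction type before computing statistics, so each transaction type gets its own assessment instead of one meaningless overall average. The sample data below is invented for illustration.

```python
import statistics

# Hypothetical raw measurements: (transaction_type, response_time_seconds),
# as a load-test tool or transaction-aware monitor might record them.
samples = [
    ("login", 0.120), ("search", 0.340), ("login", 0.150),
    ("search", 0.310), ("checkout", 0.900), ("search", 0.330),
    ("checkout", 1.100), ("login", 0.135),
]

def per_transaction_stats(samples):
    """Group response times by transaction type and report per-type statistics."""
    grouped = {}
    for name, rt in samples:
        grouped.setdefault(name, []).append(rt)
    report = {}
    for name, times in grouped.items():
        times.sort()
        report[name] = {
            "count": len(times),
            "median": statistics.median(times),
            # crude 90th percentile: the value at index floor(0.9 * n)
            "p90": times[min(len(times) - 1, int(0.9 * len(times)))],
        }
    return report

report = per_transaction_stats(samples)
for name in sorted(report):
    s = report[name]
    print(f"{name}: n={s['count']} median={s['median']:.3f}s p90={s['p90']:.3f}s")
```

With per-type figures, a slow `checkout` is visible even while fast `search` transactions dominate the overall numbers; that separation is exactly what transaction-unaware monitoring loses.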
When choosing tooling for performance engineering, it is important to keep the pre- and post-deployment stages of an application apart. To be in control of application performance we apply performance testing in pre-deployment and APM in post-deployment.