There are many criteria for differentiating load testing tools, and the right choice really depends on your needs.
This is one of the features that differentiate high-end load testing solutions. Measuring only the Web application's response time gives valuable information but does not relate it to the performance of the underlying hardware and software infrastructure. Without that information, it is virtually impossible to understand where performance bottlenecks originate or to evaluate the impact of higher load levels.
It is essential to allow meaningful cross analysis of the data gathered from all of the components of the application. The ability to relate an HTTP request's response time to a particular configuration parameter in the application server, an overloaded cache, or a poorly performing database query is an absolute necessity for efficiently “debugging” the performance of a complex Web application. All enterprise-class load testing tools support add-on monitoring modules to collect end-to-end application performance data. However, only the tools with the highest level of integration of these modules provide the best capability to cross-analyze and correlate the data generated by the monitors with the statistics returned by the load injectors.
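The simplest form of this cross analysis is checking whether client-side slowdowns coincide with a server-side metric. A minimal sketch, with purely illustrative sample data (the metric names and values are assumptions, not output from any particular tool):

```python
# Correlate per-interval HTTP response times with a server-side metric
# (here, hypothetical CPU utilization samples) to see whether client-side
# slowdowns track resource saturation on the server.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative data: average response time (ms) and CPU (%) per test interval.
response_ms = [120, 135, 150, 300, 480, 510, 495]
cpu_percent = [35, 40, 45, 80, 95, 97, 96]

r = pearson(response_ms, cpu_percent)
print(f"correlation = {r:.2f}")  # a value near 1.0 -> slowdowns track CPU load
```

A real tool does this across many monitored counters at once, but even this single correlation tells you whether to look at the server's CPU or elsewhere (cache, database, network) first.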
There is much discussion about the different deployment models: testing in your own lab versus testing via the cloud. Testing your web application internally first lets you find performance issues earlier in the application's lifecycle, and it is easier to confirm the application's performance internally before introducing additional variables by testing over the internet. On the other hand, cloud load testing lets you simulate large loads coming from different geographical locations.
A load testing tool that supports both local and cloud deployment is a great advantage, as performance engineers can reuse the same scripts across both types of tests.
It enables monitoring of the entire Web application environment while under load and visualizing performance data as the test runs. This is especially important for QA personnel and developers, who get immediate feedback. Even more important, the ability to use the same tool to monitor the Web application in real time while in production (with no virtual users, or only a few, and with minimal overhead) turns your load testing solution into a sophisticated performance debugging tool. The same scripts used in load testing can be reused as synthetic transactions for response time monitoring in production, and all the cross analysis features remain available to identify the sources of performance issues (and to compare the data directly with that observed in load test runs).
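The idea of reusing a load test script as a synthetic transaction can be sketched in a few lines: a single "virtual user" runs the scripted transaction periodically and records its response time. This is a minimal illustration only; the transaction callable here is a stand-in for a real scripted HTTP request, and the threshold is an assumed value, not taken from any specific tool:

```python
# Run one scripted transaction repeatedly, as a synthetic monitor would,
# timing each run and flagging runs that exceed a response-time threshold.
import time

def run_synthetic(transaction, runs=3, slow_threshold_s=2.0):
    """Execute a transaction several times; return all timings and slow runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()                       # e.g. fetch and check a login page
        timings.append(time.perf_counter() - start)
    slow = [t for t in timings if t > slow_threshold_s]
    return timings, slow

# Example with a dummy transaction that simulates a 10 ms page load.
timings, slow = run_synthetic(lambda: time.sleep(0.01))
print(f"avg = {sum(timings) / len(timings) * 1000:.1f} ms, slow runs = {len(slow)}")
```

In a production monitoring setup the same loop would run on a schedule with the real load test script as the transaction, so the numbers are directly comparable to those from load test runs.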
It helps detect trends or exception conditions that can be the cause of performance issues. This is a specialized, configurable feature that automatically combs through the massive amount of test data, analyzes it, and highlights areas that may be the origin of a loss of performance. These exception conditions should include thresholds on the resources used (for example, disk space, network bandwidth, or cache usage) as well as trending information useful for detecting problems such as memory leaks. More advanced tools will label anomalies by severity level and suggest a course of action to remedy the problem.
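The two kinds of checks described above can be sketched simply: a static threshold test for resource limits, and a linear trend test that flags steadily growing memory as a possible leak. The limits and sample data below are illustrative assumptions:

```python
# Two basic anomaly checks over monitored samples: a hard threshold breach
# (e.g. disk usage) and a linear trend that suggests a memory leak.
def exceeds_threshold(samples, limit):
    """Return the indices of samples that break a hard resource limit."""
    return [i for i, v in enumerate(samples) if v > limit]

def trend_slope(samples):
    """Least-squares slope of samples over their index (growth per interval)."""
    n = len(samples)
    mean_i = (n - 1) / 2
    mean_v = sum(samples) / n
    num = sum((i - mean_i) * (v - mean_v) for i, v in enumerate(samples))
    den = sum((i - mean_i) ** 2 for i in range(n))
    return num / den

disk_pct = [62, 64, 90, 67, 65]       # one spike over an assumed 85% limit
heap_mb = [200, 240, 285, 330, 370]   # steady growth: memory-leak suspect

print("threshold breaches:", exceeds_threshold(disk_pct, 85))
leak_suspect = trend_slope(heap_mb) > 10  # assumed limit: 10 MB per interval
print("possible memory leak:", leak_suspect)
```

A commercial tool would add severity labels and remediation hints on top of checks like these, but the underlying logic is the same: fixed limits for spikes, trend fitting for slow drifts.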
Beyond an immediate increase in productivity, automated reporting combined with customized anomaly detection provides the ability to create customizable, reusable views. It also returns a quick, repeatable reading of any test run at the push of a button, tailored to a particular user. For example, a database administrator will want to quickly assess whether a poorly performing query or a badly configured parameter is causing a slow page load. A powerful reporting tool with drag-and-drop functionality will allow you to create and define report templates that can subsequently be used on any test result (Figure 2). This means that you can drop your template on a test result folder and the appropriate data will be instantly extracted and will automatically populate the template. The data is then saved to a report, ready for distribution. Repeat the same step with results from a different test to get the same type of report on that data.
There is a well-documented cost to poor Web application testing, but the selection of an enterprise-class testing solution also has cost and ROI considerations of its own. Licensing and software maintenance costs can vary substantially from one vendor to another, as does the scope of the solution; the value here lies in licensing only what you really need and will really use. The return, however, is more difficult to assess, as it involves considerations such as the proper feature mix and ease of use and installation, along with many other benefits that reduce the total cost of ownership but are difficult to measure. If your test solution requires little or no training of personnel to install the software and develop test scripts and scenarios, and no outside assistance is necessary, then the solution you have chosen will in fact have a lower cost of ownership.
Therefore, the real question is not only what each solution can do, but also what you would actually do with each product. This is not only a function of the solutions offered by vendors; it is more about the match between a particular solution, your applications' technical characteristics, the personnel using it, and the expectations of various people in your organization. One thing is sure, though: the more difficult it is to extract value from each test run, the higher your cost of ownership will be.