Saturday, February 20, 2016

How can we define a successful performance testing project?

Overview

One of the most important things in any project is to define its success/failure criteria; otherwise, how can we determine whether the process delivered the expected results? How can we learn from a failure? Was the process efficient enough?

So what are the main criteria that we can examine to evaluate a performance testing process? There is no single answer, as every project has its own targets and expectations, but based on my experience, there are a few mandatory criteria that we always need to consider when evaluating the performance testing effort.


You succeeded in improving the current performance

So you conducted your tests and spent thousands of dollars, but the application behaves exactly as it did before you started. If that's the result, then we totally failed; but if we succeeded in improving the application's performance, then we achieved one of the main goals for which we started the tests in the first place (think about it: would you want to start a costly process without improving the actual product?).


You succeeded in finding application bugs early

Well, we all know that bugs are cheaper to fix when found in the early stages of the testing process, and this fact is as relevant to performance tests as it is to any other testing process. Based on this, we can say that finding bugs early is a major consideration when determining the quality of our testing efforts.


You can deliver documentation that reflects the process

Like in any other testing project, we always need to ask ourselves a simple question: "Did we spend the time we had in an efficient way?"

Well, to answer this question we need to examine how the time was actually spent during the project, and the best way to accomplish this is to examine the documentation that we created and used during the process.

Examples:
  • The test strategy that we used.
  • STP (Software Test Plan).
  • STD (Software Test Description).
  • STR (Software Test Report).

You know the numbers instead of relying on assumptions

One of the biggest problems we have on the "non-functional" side of testing (and in the performance world in particular) is that we need to make a lot of assumptions about how the system will react in certain situations. There are many reasons for this, but the truth is that sometimes you simply can't get the well-defined expected outputs that you have on the "functional" side of testing.

Therefore, based on my experience, I can say that a true success must include numbers: every assumption that you have prior to the tests should be translated into numbers that you can analyze and examine in the different phases of the project.
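To make this concrete, here is a minimal sketch (in Python, using made-up sample values and a target I chose purely for illustration) of how a vague assumption such as "login should stay fast under load" can be translated into a number, a 95th-percentile response-time target, that you can check on every run:

  import math

  def percentile(samples, pct):
      # Nearest-rank percentile: the smallest value such that at least
      # pct percent of the samples are less than or equal to it.
      ordered = sorted(samples)
      rank = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
      return ordered[rank]

  # Hypothetical response times (in seconds) from one load test run; in a
  # real project these would come from your load testing tool's raw results.
  login_response_times = [0.8, 1.1, 0.9, 1.4, 2.2, 1.0, 1.3, 0.7, 1.8, 1.2]

  # The assumption, translated into a number: 95th percentile under 2 seconds.
  P95_TARGET = 2.0

  p95 = percentile(login_response_times, 95)
  print(f"login p95 = {p95:.2f}s (target {P95_TARGET:.1f}s)")
  print("PASS" if p95 <= P95_TARGET else "FAIL")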

You create a baseline for future projects

As I already told you, performance tests consume a huge amount of resources, time, and money. A great success factor is achieved if you can reuse the current testing assets in future projects; it's like "recycling". Think about it for a second: each performance test leads to further costs in hardware, software, and testing tools. If you succeed in maintaining these assets, you can use them in future projects and reduce the costs.

Furthermore, another major asset is the baseline that you achieved in the current testing cycle. This baseline will be a great starting point for any future performance project, because you can compare new execution results against it, understand the differences between versions, and save a huge amount of time when you need to estimate the duration of each test.
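As an illustration, here is a minimal sketch (again in Python; the file name, metric names, and tolerance are assumptions for the example, not part of any specific tool) of how a baseline saved in a previous cycle could be compared against the current run:

  import json

  # Hypothetical baseline file saved at the end of the previous cycle,
  # e.g. baseline.json containing: {"login_p95": 1.9, "search_p95": 1.2}
  def compare_to_baseline(current, baseline_path="baseline.json", tolerance=0.10):
      # Flag every transaction whose current result is more than `tolerance`
      # (10% by default) slower than the stored baseline value.
      with open(baseline_path) as f:
          baseline = json.load(f)
      for name, value in current.items():
          old = baseline.get(name)
          if old is None:
              print(f"{name}: no baseline yet ({value:.2f}s)")
          elif value > old * (1 + tolerance):
              print(f"{name}: REGRESSION {old:.2f}s -> {value:.2f}s")
          else:
              print(f"{name}: OK {old:.2f}s -> {value:.2f}s")

  # Current cycle's results (made-up numbers for the sake of the example).
  compare_to_baseline({"login_p95": 2.4, "search_p95": 1.1})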

