Saturday, February 20, 2016

How can we define a successful performance testing project?


One of the most important things in any project is to define the success/failure criteria; otherwise, how can we determine whether the process delivered the expected results? How can we learn from a failure? Was the process efficient enough?

So what are the main criteria we can examine to evaluate a performance testing process? There is no single answer; every project has its own targets and expectations. But based on my experience, there are a few mandatory criteria that we always need to consider when evaluating the performance testing effort.

You succeeded in improving the current performance

So you conducted your tests and spent thousands of dollars, but the application behaves the same way it did before the tests started. If that's the result, then we totally failed; but if we succeeded in improving the application's performance, then we achieved one of the main goals that made us start the tests in the first place (think about it: would you want to start a costly process without improving the actual product?).

You succeeded in finding the application bugs early

Well, we all know that bugs are cheaper to fix when they are found in the early stages of the testing process; this fact is as relevant to performance tests as it is to any other testing process. Based on this, we can say that finding bugs early is a huge consideration when determining the quality of our testing efforts.

You can deliver documentation that reflects the process

Like any other testing project, we always need to ask ourselves a simple question: "Did we spend the time we had efficiently?"

Well, to answer this question we need to examine how the time was actually spent during the project, and the best way to accomplish this is to examine the documentation that we created and used during the process:

  • The test strategy that we used.
  • STP (Software Test Plan).
  • STD (Software Test Description).
  • STR (Software Test Report).

You know the numbers instead of assumptions

One of the biggest problems on the "non-functional" side of testing (and in the performance world in particular) is that we need to make a lot of assumptions about how the system will react in certain situations. There are many reasons for this, but the truth is that sometimes you just can't get the expected outputs the way you can on the "functional" side of testing.

Therefore, I can say based on my experience that a true success must include "numbers": every assumption that you had prior to the tests should be translated into numbers that you can analyze and examine in different phases of the project.
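As a minimal sketch of what turning assumptions into numbers can look like, here is a short Python example that reduces a set of measured response times to a few comparable figures. The latency values are hypothetical sample data, not taken from any real test.

```python
# Minimal sketch: turning a vague assumption ("the system should be fast")
# into concrete numbers that can be compared between test cycles.
# The latency values below are hypothetical sample data.

def percentile(samples, pct):
    """Return the pct-th percentile of a list of numbers (nearest-rank)."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]

response_times_ms = [120, 95, 110, 340, 150, 105, 98, 400, 130, 115]

summary = {
    "avg_ms": sum(response_times_ms) / len(response_times_ms),
    "p90_ms": percentile(response_times_ms, 90),
    "max_ms": max(response_times_ms),
}
print(summary)
```

Once an assumption like "login is fast enough" becomes "login p90 must stay under 200 ms", it can be verified in every phase of the project instead of being argued about.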

You create a baseline for future projects

As I already told you, performance tests consume a huge amount of resources, time, and money. A great success factor is achieved if you can reuse the current testing assets on future projects; it's like "recycling". Think about it for a second: each performance test leads to further costs in hardware, software, and testing tools. If you succeed in maintaining these assets, you can use them on future projects and reduce the costs.

Furthermore, another major asset is the baseline that you achieved in the current testing cycle. This baseline will be a great start for any other performance project, because you can compare execution results against it, understand the differences between versions, and save a huge amount of time when you need to estimate the time for each test.
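To illustrate how a stored baseline can be used in the next cycle, here is a minimal Python sketch that flags metrics that regressed beyond a tolerance. The metric names, numbers, and 10% threshold are all hypothetical; a real project would load them from earlier test executions.

```python
# Minimal sketch: comparing a new test cycle against a stored baseline.
# Metric names and numbers are hypothetical placeholders.

baseline = {"login_ms": 180, "search_ms": 250, "checkout_ms": 900}
current = {"login_ms": 175, "search_ms": 310, "checkout_ms": 905}

def compare_to_baseline(baseline, current, tolerance=0.10):
    """Flag metrics that regressed by more than `tolerance` (10% default)."""
    regressions = {}
    for metric, base_value in baseline.items():
        delta = (current[metric] - base_value) / base_value
        if delta > tolerance:
            regressions[metric] = round(delta * 100, 1)  # percent slower
    return regressions

print(compare_to_baseline(baseline, current))
# search_ms is 24% slower than the baseline, so it is flagged
```

The point is not the specific threshold but that a baseline turns "the new version feels slower" into a precise, per-metric comparison between versions.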

Saturday, February 13, 2016

What is a “Test case” in software testing?

A test case is a set of conditions executed by the tester to determine whether the system under test operates according to the software requirements and specifications.

A good test case design should help the testing team find and remove many logical errors in the design; in addition, a good and effective test case should be based on a relevant "use case".

A few comments that simplify things:
  • Test cases should give confidence that a specific functionality works as designed.
  • Test cases are helpful when you need to analyze and determine the project risks.
  • The same test case should be re-executed with different inputs (positive/negative).
  • Test cases are the way to measure the implementation and testing coverage.
  • Test cases are always helpful when you need to provide time estimations.
  • Test cases should always contain a specific input and the expected result.
  • A test case should be based on the software requirements and specifications.
  • Test cases are how testers perform the "validation" process.
General categories of a test case structure


  • Test status (Pass, Fail, In Progress, Blocks other tests, Blocked by another test/bug).
  • Test summary (a short description of the test and its objective).
  • Prerequisites that should be fulfilled before the test execution.
  • Test category (performance, usability, GUI).
  • Test owner - the tester who is going to run the test.
  • Test identification number - the test case ID.
  • Dates (created, modified, and closed).
  • Expected results AND actual results.
  • Based on requirement ID?
  • Any automation coverage?
  • Based on use case ID?
  • Execution estimation.
  • Test environment.
  • Detailed steps.
  • Test priority.
  • Test inputs.
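To make the structure above concrete, here is a minimal Python sketch of these categories as a data class. The field names are an illustrative mapping of the list above, not a standard test-management schema.

```python
# Minimal sketch of a test case record covering the categories above.
# Field names and values are illustrative, not a standard schema.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str                  # test identification number
    summary: str                  # short description and objective
    category: str                 # performance, usability, GUI, ...
    priority: int                 # test priority
    owner: str                    # the tester who runs the test
    requirement_id: str           # the requirement the test is based on
    prerequisites: list = field(default_factory=list)
    steps: list = field(default_factory=list)       # detailed steps
    inputs: dict = field(default_factory=dict)      # test inputs
    expected_result: str = ""
    actual_result: str = ""
    status: str = "In Progress"   # Pass, Fail, Blocked, ...

tc = TestCase(
    test_id="TC-042",
    summary="Verify login with a valid user",
    category="GUI",
    priority=1,
    owner="Dana",
    requirement_id="REQ-7",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    inputs={"username": "demo", "password": "secret"},
    expected_result="User lands on the home page",
)
print(tc.status)
```

Whatever tool you use to manage test cases, keeping these fields filled in is what makes the STD useful later for coverage and risk analysis.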

How to design a great test case


  • Test cases should be readable by other people (project owner, management, testers).
  • Test cases should be written with the goal of finding software errors.
  • In my opinion, a good test case is one that can be automated.
  • Test cases should be written to provide testing coverage.
  • Every step should have a corresponding result.
  • The test case should be logical and easy to execute.
  • Designed based on a use case / the requirements.
  • The test case should be highly detailed.
  • Each test case should test a specific functionality; multiple test cases can always be combined to create a test procedure.
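As a small sketch of several of these guidelines at once (one specific functionality, positive and negative inputs, each input paired with its expected result), here is a Python example; `is_valid_username` is a hypothetical function under test, invented for illustration.

```python
# Minimal sketch: one test case ("validate a username") exercised with
# positive and negative inputs, each paired with its expected result.

def is_valid_username(name):
    """Hypothetical system under test: 3-12 alphanumeric characters."""
    return name.isalnum() and 3 <= len(name) <= 12

# (input, expected result) pairs - positive and negative cases together
cases = [
    ("dana", True),        # positive: normal valid name
    ("ab", False),         # negative: too short
    ("a" * 13, False),     # negative: too long
    ("bad name!", False),  # negative: illegal characters
]

for given_input, expected in cases:
    actual = is_valid_username(given_input)
    assert actual == expected, f"{given_input!r}: expected {expected}, got {actual}"
print("all cases passed")
```

Note that each row tests the same single functionality; chaining several such cases together (create user, log in, change password) is what turns test cases into a test procedure.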

What information can you get from a detailed test case document (STD)?

  • The deviation/correctness against the original requirements and specifications.
  • The quality of the application, based on the current testing coverage/results.
  • All the relevant data needed for the "risk management" process.
  • How much testing is still needed to end the current testing cycle?
  • The number of errors raised based on those test cases.
  • Is the software ready for Alpha/Beta/Acceptance testing?
  • The stability of the system across different architectures.
  • The current coverage against the expected timelines.
  • All the relevant data needed for "risk analysis".
  • How many use cases are tested and covered?
  • How many bugs were found in specific areas?
  • Is the current testing coverage enough?
  • Is the software ready for automation?
  • The quality of the test design.
  • The quality of the code.
