Tuesday, August 29, 2017

Best practices for Test automation | David Tzemach


There is no doubt about the importance of automated testing in a testing process. In this article, I will review the most important guidelines for getting the best results from your automation process.

Write clear and efficient code

Just like any other development project, you must write clear and efficient code that runs in a reasonable time. Think about the number of tests that run in even the smallest automation project: there are hundreds and sometimes thousands of tests that you need to run over and over again. If the code is not efficient, it will affect the timelines and reduce the number of executions you can afford.

Uncertainty is automation's worst enemy

One of the most important criteria for determining the quality of an automation project is its ability to provide consistent results every time we run it.

If you build a test that passes on the first run and fails on the second, without any modification to the application being tested, you cannot be certain whether the failure is due to the application (a real product bug) or to the automation framework (a logical/code defect).

So how can we remove this uncertainty?

  1. Remove unstable tests.
  2. Do not automate unstable functionality.
  3. Make sure that each test is verified manually.
  4. Make sure that your code is up to date.
  5. Make sure that you change the test data when requirements change.
  6. Make sure that you run on stable environments.
  7. Make sure that you are using reliable builds of the application.
  8. Make sure that each test case and code implementation is reviewed.
  9. Make sure that new code is approved before releasing it to the main branch.
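Point 1 above can be approached mechanically: an unstable (flaky) test is one whose outcome changes across repeated runs with no change to the application. A minimal Python sketch of such a check (the helper name and run count are my own illustrative assumptions, not from any specific framework):

```python
def is_stable(test_fn, runs=5):
    """Run a test several times; a test whose outcome differs between
    runs is unstable and should be fixed or removed from the suite."""
    outcomes = {bool(test_fn()) for _ in range(runs)}
    return len(outcomes) == 1
```

A deterministic test produces a single outcome across all runs; a flaky one does not, and would be flagged for removal from the stable suite.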

Determining the Failure Gates

No matter how good the automation code is, it was designed to find defects in the tested application. As a result, we must make sure that in case of a failure, it is easy to understand the root cause of the failed test without spending a huge amount of time exploring it.

Therefore, when writing automation, we must think about ways to reduce the debugging time, such as automated defect reporting, logs, and any other information that will improve the debugging process.
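One simple way to cut debugging time is to capture the failure details at the moment the test fails, rather than reporting a bare pass/fail flag. A hedged Python sketch (the result format is illustrative, not a standard):

```python
import traceback

def run_with_diagnostics(test_fn, name):
    """Execute one test; on failure, attach the traceback so the root
    cause can be read directly from the report instead of re-debugging."""
    try:
        test_fn()
        return {"test": name, "status": "pass", "details": None}
    except Exception:
        return {"test": name, "status": "fail",
                "details": traceback.format_exc()}
```

In a real framework, the same hook would also collect logs, screenshots, or environment details alongside the traceback.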

Know when to automate

There are many factors that determine the success of an automation project, but knowing when to automate is probably the most critical one, since test automation is a delicate process that can be frustrating if used at the wrong time. Two simple examples:
  • Automation created before understanding the requirements and specifications.
  • Automation run on unstable product builds.


After the first implementation of the automation code, you should validate that the code is always updated to support features that were added, modified or removed from the application over time.

It is crucial that the code is easy to understand and written in a way that allows others to see why it was written that way; otherwise, you will waste a huge amount of time investigating the reasoning and logic used by the original coder.

Automation reports

After every test execution, you will need to generate a report that explains the test results. To make it efficient, follow these guidelines:

  • In case of failure, a link to the debugging information collected during the execution.
  • The report is sent automatically to the relevant people.
  • A link to the labs used to run the tests.
  • The report should be clear to the reader.
  • Details about the tests that failed/passed.
  • The name of the automation owner.
  • Execution start and end time.
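The fields above can be collected into a single report structure. A minimal sketch (the field names and result format are my own illustrative choices):

```python
from datetime import datetime

def build_report(results, owner, start, end):
    """Summarize raw per-test results into the report fields listed
    above: owner, start/end time, totals, and failure details with a
    link to the debugging information collected during the run."""
    failed = [r for r in results if r["status"] == "fail"]
    return {
        "owner": owner,
        "start": start.isoformat(),
        "end": end.isoformat(),
        "total": len(results),
        "passed": len(results) - len(failed),
        "failed": [{"test": r["test"], "log": r.get("log")} for r in failed],
    }
```

A structure like this is easy to render as text or HTML and to mail automatically to the relevant people.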

Know which test cases to automate

There is an endless number of test cases that we could run during a testing cycle, but in reality, we simply cannot afford the time to write and execute all of them. So, just like in manual testing, we need to determine which test cases should be automated, based on a few simple factors:
  • Automate the test cases that will increase the automation ROI.
  • Automate any test case that requires intensive precision.
  • Automate any test that is part of the regression cycle.
  • Automate test cases that you cannot perform manually.
  • Automate any test case that is executed repeatedly.
  • Automate any test case that is time-consuming.
  • Automate any test case that runs per build.
  • Automate high-risk scenarios.
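The factors above can be combined into a rough priority score for deciding what to automate first. A toy sketch; the weights and threshold are illustrative assumptions, not a standard:

```python
def automation_score(case):
    """Weight the selection factors listed above; a higher score means
    a stronger automation candidate."""
    weights = {
        "in_regression": 3,        # part of the regression cycle
        "runs_per_build": 3,       # executed on every build
        "high_risk": 2,
        "time_consuming": 2,
        "executed_repeatedly": 2,
        "needs_precision": 1,
    }
    return sum(w for key, w in weights.items() if case.get(key))

def pick_candidates(cases, threshold=4):
    """Keep only cases whose score clears the threshold, best first."""
    keep = [c for c in cases if automation_score(c) >= threshold]
    return sorted(keep, key=automation_score, reverse=True)
```

The point is not the exact numbers but making the selection explicit, so the team can argue about weights instead of individual test cases.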

And what about the test cases that you should not automate?

  • Do not automate test cases for which the requirements are changing multiple times (The maintainability of such cases will reduce the automation ROI).
  • Do not automate test cases that fall under “Usability” testing.
  • Do not automate the application GUI if the objects are frequently modified.
  • Do not automate test cases that are not verified manually.
  • Do not automate Ad-Hoc random tests. 

Know the answer to “Why automate?”

Without knowing the answer to this question, you should not start any automation project. It is a major key to understanding the drive behind the decision to automate, and it will also help reduce the expectation gap between the test/automation team and management.

Automation will not fix all problems

We should always remember that the primary reason to implement automation is to free up the manual testers' time, so they can design more complex and high-risk test cases, increase their knowledge of the application being tested, and perform exploratory testing, which helps to increase confidence in the product.

Automation will not fix all problems; do not expect automation to reveal all bugs. The number of bugs found by automation will always be lower than the number of defects found by exploratory and manual testing.

If you are still not convinced, know this:

  • Automation will not help you if you have chaos in the current testing effort.
  • Automation testing will consume a large amount of time and resources.
  • Automation will not help you if you are already behind schedule.
  • Automation testing will not replace manual testing.
  • Automated testing will not eliminate the basic testing process of test design, planning and writing.

Automate only what is necessary

When designing an automation project, you should always consider the time and investment factors; adding an unnecessary test case will hurt the ROI of the entire project. Your automation should cover only the necessary test cases, since there is no benefit in writing tests that will not contribute to the testing coverage.

In addition, before determining the test cases to automate, just ask yourself a few basic questions:
  • What maintenance will be needed to adjust this test for future/other versions?
  • What preparation is involved in creating the test?
  • Can the test case be used in future versions?
  • Are the test's expected results defined?
  • Do you have the test data for this test?
  • What is the objective of the test case?
  • Do you have the pass/fail criteria?

If you are still in doubt, think about the tasks and consequences involved in creating a new test case:

  • Design the test case.
  • Write it down.
  • Write the code.
  • Maintain it.
  • Review the execution results.

  • The Automation execution time is increased.
  • You will lose time you could have spent writing test cases that really contribute to the overall testing effort.

Independent tests

In case of a test failure, you will want to debug the root cause of the problem as fast as you can; the last thing we want to do is waste our time investigating the numerous automation steps that could affect the failed test.

Independent tests allow you to understand the cause of a failure faster and remove the dependency on other automation steps. Another huge advantage of independent tests is the ability to run each test separately, as a single unit, without any relation to the other tests.
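A common way to keep tests independent is to have each test build its own fresh fixture instead of relying on state left behind by a previous test. A minimal sketch (the shopping-cart fixture is a made-up example):

```python
def fresh_cart():
    """Every test starts from its own clean state, so tests can run
    alone, in any order, and a failure points at exactly one test."""
    return {"items": []}

def test_add_item():
    cart = fresh_cart()
    cart["items"].append("book")
    assert cart["items"] == ["book"]

def test_cart_starts_empty():
    cart = fresh_cart()   # unaffected by test_add_item
    assert cart["items"] == []
```

Because neither test reads the other's state, each can be executed or debugged on its own.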

You must be familiar with the application being tested

Automated testing should be treated the same way as manual testing: the resources responsible for building and designing the test scenarios must be familiar with the application being tested, which includes:

  • Know the technologies used to develop the software.
  • Know the main flows of the end-to-end scenarios.
  • Know the architecture of the application.
  • Know which platforms are supported.

Remember, there is no substitute for knowing your product; this is the only way to achieve the best and most efficient testing process.

Planning prior to implementation

Just like any other project, you should not write a single line of code before addressing the different aspects that come prior to implementation, for example:
  • Determine the testing tools and programming language that you will use.
  • Understand the deliverables of the automation project.
  • Select the appropriate automation framework.
  • Understand the set of expectations.
  • Prepare the automation design.
  • Define the expected test coverage.

Build the right automation team

Selecting the right automation tool is very important, but it is even more important to hire the person/team that will use it during the development phase. My recommendations:
  • Hire an automation architect to design the automation architecture.
  • Manual testers should focus on manual testing; if you want them to automate, make sure they are dedicated only to the automated testing work.
  • The team must include at least one authority who can lead the technical aspects and guide the other automation engineers.
  • Manual testers are not programmers; do not ask them to program without the appropriate training.
  • Make sure that your team contains enough resources to meet the project requirements.

Manual verification before automating

Before automating your tests, you must verify that each test case has been tested and verified manually. If you skip this phase, you will have trouble understanding the source of failures during the automated process (is the failure due to a coding problem, or due to a real product bug?).

Another consequence of skipping manual verification is that it consumes more time: the coder has to stop working and start investigating the failure. If you do the manual verification, you reduce the investigation time and allow the coder to keep their focus.

Choose an automation tool that is known to your team members

You can spend thousands of dollars on the best automation tool, but if your team is not familiar with it, they will fail to get the most out of it and will probably waste more time learning it than doing the actual development.

Therefore, when selecting the automation tool, we should follow a few basic guidelines:

  • If your resources have prior experience with an automation tool that meets the project requirements, select it; their experience will help you meet the targets quicker.
  • When selecting the tool, ask yourself if you can use it in future projects.
  • Select the tool that is most suitable to meet the project requirements.
  • Select the automation tool that allows your team to develop in a programming language they are familiar with (C#, Java, etc.).

Tuesday, August 22, 2017

The Critical factor of team morale in scrum teams | Supreme Agile

One of the main factors of success for any team is maintaining high team morale; morale has a direct effect on team performance and productivity. In a Scrum team, this factor becomes even more important due to the nature of the "Self-Organized" team, which is based on five main factors:
  1. Respect among the team members.
  2. Commitment to shared goals.
  3. A collaboration that increases synchronization.
  4. Increased communication.
  5. Trust.
The team's level of morale has a direct impact on all the major factors used to determine the team's success:
  • Improved productivity.
  • The confidence of the team regarding the process.
  • The efficiency of the work based on team capacity.
  • Attention to details and increased quality of work.
  • Communication, collaboration, and team commitment.
Team morale is crucial for both experienced and novice teams. In most cases, and based on my experience, the main cause of 90% of team failures is related to the morale factor; a team with low morale has a higher chance of failing (especially new teams that have just started to use this methodology) than a team with higher morale that is dedicated and committed to success.

How can Scrum as a framework contribute to team morale?

Based on the Agile Manifesto, we can say that one of the main concepts of Agile is to provide value to the client, but it also embodies the three C's (Communication, Collaboration, and Commitment) that make Agile such a great process for teams. These three C's can have a major effect on team morale:


  • In Scrum, the team is "Self-Organized"! As a result, team morale should be higher, because every team member matters and has an equal chance to affect the team's progress.
  • Due to the nature of Scrum, the team members see quick results every two weeks, so good collaboration helps them gain more confidence in the quality of their work.
  • Collaboration with the customer helps the team gain quick feedback, which gives the team a deeper understanding of what is expected of them.
  • All team resources work together toward the same commitments; there is no difference between testers and developers, who can now collaborate in an efficient way that allows them to improve the quality factors.
  • There are a few Scrum maturity levels that we can use to assess a team; the higher the maturity, the higher the collaboration, which reduces the need for long and unproductive meetings (planning, retrospectives, etc.).


  • Increased communication with the customer helps the team deliver value without failing on small communication issues that can affect team morale.
  • Increased communication among the team members through specific meetings (planning, dailies, refinement, etc.) helps the team members increase their information exchange and narrow the chances of failures that arise from synchronization issues.
  • Increased collaboration with the product owner increases team productivity.


In Agile, commitment comes from the entire team, which works together to decide the amount of work it can deliver at the end of each iteration. This type of commitment, based on the opinion of all team members, helps increase the morale of each team member.

How can the product owner and the scrum master contribute to team morale?

  • Let the team members share their thoughts and provide the space for them to express their opinions.
  • Show your trust, confidence, and faith in the team's work.
  • Celebrate success and show appreciation for their work.
  • Be patient and understanding about their failures.
  • Provide visibility for the team members about changes that may affect their work.
  • Let the team participate in major project decisions.
  • Support the team when they need it.
  • Make sure you provide a solid ground for the team members' self-improvement.
  • Make sure that the team members have chances to learn new things.
  • Provide the team with the things they need in order to succeed.
  • Provide challenges and reduce the routine work.

Saturday, August 19, 2017

Risk Based Testing (RBT) | David Tzemach

To make it simple, we can define RBT as an approach where the testing in a project is done based on risk calculation. RBT prioritizes the tests to be done based on the risk of failure, the importance of the area/function, and the likelihood or impact of a failure on the system.

Using Risk-Based Testing, we design the project's tests after a deep analysis of the risks related to the application under test. This approach helps us prioritize our tests and focus the testing effort on the most important areas of the application, which allows us to reduce the major risks first.

When to implement Risk-Based Testing

Risk-Based Testing is a great testing methodology, but when should organizations use it?
  1. Agile projects that use incremental releases and short iterations.
  2. Projects with high risk factors (low domain knowledge, lack of experience, etc.).
  3. Any project with a short testing timeframe and limited resources.

Types of software risks

  • Project risk – Market, legal, resources, timelines, project scope, costs, etc.
  • Process risk (engineering: the planning and development process) – Planning, strategy, time estimations, quality and development strategies, etc.
  • Product risk (engineering from a quality perspective) – Low domain knowledge, low-quality requirements, complexity, coding quality, etc.

Risk management as the baseline for Risk-Based Testing

Risk-Based Testing is based on a list of risks, but what process do software teams use to identify them? To answer that question, we follow the three high-level phases performed during Risk-Based Testing:

Phase 1 - Risk identification

This is the first and probably most crucial phase of the entire process. During risk identification, we define a list of risks that may occur if a specific component/function is not tested as it should be during the testing process.

To uncover the major risks, the tester must know all the requirements of the project, the process, and the software under test. In addition, risk identification is more effective when the tester has knowledge and expertise in the technology and the environmental elements that may affect the software once it is deployed.

Phase 2 - Risk Analysis

In this stage we determine the level of risk for each item in the list we prepared in stage one. The level of risk is determined by the likelihood of the risk occurring and by the impact it has on the project.

RISK = Damage * Probability  

Damage – the amount of damage this risk can do to the system. For example:

  • The risk will lead to a business objective that cannot be accomplished.
  • The risk will affect a business objective.
  • The risk will have a minor effect on a business objective.

Probability – the probability that this risk will actually occur once the system is used by a customer. For example:

  • It is almost certain that the error will occur.
  • There is a 50/50 chance that the risk will occur.
  • There is a low chance that the risk will occur.
  • There is a slight/minor chance that the risk will occur.
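The formula and the two scales above can be turned into a simple prioritization step. A sketch; the 1-4 numeric scale mapping the qualitative levels to numbers is my own illustrative assumption:

```python
def risk_score(damage, probability):
    """RISK = Damage * Probability, with both factors rated on a
    1 (minor/slight) to 4 (critical/almost certain) scale."""
    return damage * probability

def prioritize(risks):
    """Order the identified risks so the highest scores are tested
    (and mitigated) first."""
    return sorted(risks,
                  key=lambda r: risk_score(r["damage"], r["probability"]),
                  reverse=True)
```

The resulting ordering is exactly what feeds the test design in the next phase.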

Phase 3 - Mitigation plan

Based on the analyzed information and the numeric values used in the formula, a test design can now be made to address the identified risks according to the risk factor determined in the analysis phase.

When do we start Risk-Based Testing?

Similar to any testing technique, Risk-Based Testing must start early in the Software Development Life Cycle (SDLC). The key to success in RBT is to identify the risks as early as possible and prepare an appropriate mitigation plan.

The Goals of Risk-Based Testing

The main and most important goal of Risk-Based Testing is to design and execute the testing effort with the same guidelines and best practices used during the risk management process. If we succeed in forecasting the main risks of the project, we increase the chances of delivering a good-quality product.

The Advantages and Benefits of Risk-Based Testing
Once an organization decides to use Risk-Based Testing as the preferred testing technique for a project, it can enjoy some major benefits:

  1. Risk-Based Testing provides a proven technique for an efficient testing process, because it allows the test team to prioritize the testing effort against deadlines.
  2. Risk-Based Testing allows the test teams to understand when they can stop testing (tests are stopped once all risks are removed, based on the preliminary analysis).
  3. Using Risk-Based Testing, the organization can make better use of the resources invested in the testing process, which leads to shorter release cycles, fewer testing resources, and a major reduction in long regression cycles.
  4. When an organization uses agile methodologies that shorten release cycles (two to four weeks), the long traditional regression tests become irrelevant; to handle this gap, we use Risk-Based Testing as the new testing approach.
  5. Risk-Based Testing allows the organization to achieve better quality, because testing is done based on risks and not per specific functionality.
  6. Risk-Based Testing reduces the need to run the endless test cases that testing teams use during regression cycles, and focuses on the things that really matter.
  7. Risk-Based Testing allows the testing team to achieve better and smarter test coverage, determined by a real analysis of the risks.
  8. Risk-Based Testing allows the test teams to monitor and understand the status of each identified risk.

Disadvantages of Risk-Based Testing
Although Risk-Based Testing is a great method for project testing, it still has some disadvantages that we must recognize once we decide to use it:
  1. Risk-Based Testing should be performed by experienced testers who understand the application and the environmental variables that can affect the software.
  2. The starting point of Risk-Based Testing is to understand and uncover the risks, which is always difficult at the beginning of the project.

Sunday, August 6, 2017

Complete Web Application Usability Testing Checklist | David Tzemach


The main target of usability testing is to validate that the customer will have the best user experience while working with the application. During the usability testing process, we design and execute the tests from the user's point of view, based on a few main factors that can affect the user experience.

There are three major factors that we need to validate during the “Usability” testing process:

Site efficiency – Remember that users want to accomplish their goals without executing numerous complex steps; the application should allow users to perform any complex task in a few basic, simplified steps.

Site effectiveness – To determine the effectiveness of the site, we just need to ask the simple but crucial question: "Does the site meet the user's expectations?"

User experience – What will the user's experience be when using the site? Give them a good experience and they will return to the site; otherwise, they will most likely refuse to use it again.


Site Design

  • Validate that when the user closes a child window, he returns to the parent window.
  • Validate that the site content does not contain grammatical or spelling errors.
  • Make sure that the site "Homepage" will create a positive First impression.
  • Validate that the site contains the company logo and contact information.
  • All buttons should be at the same standard (Size, shape, format, etc.).
  • Validate that there is a predefined selection of a radio button object.
  • Do the web pages on the site have the correct “Look and Feel”?
  • Is all field syntax (headers, Information, etc.) spelled correctly?
  • Validate that there is enough space among the site objects.
  • Validate that each web page on the site has a valid title.
  • Validate that the site text/fields are properly aligned.
  • When clicking a “Text” field, the mouse arrow should be changed to a cursor that appears in the text field.
  • Are all the site objects (Buttons, text boxes, command buttons) grouped together in a clear and logical way?
  • Validate that the user cannot edit the parent window if there is a child window opened.
  • Validate that “Disabled” fields are grayed out when needed, so the user cannot use them or set focus on them.
  • Are all fonts in the correct size (not too small/large) as described in the requirements docs?

SMTP Tests

  • Validate that the application supports the main E-mail clients (Gmail, Outlook, etc.).
  • Validate that your mail template is corresponding with the basic CSS standards.
  • Validate that you have a specific template for each mail that is sent.
  • Validate that the mail is sent from the correct SMTP server.
  • Validate that you can see the user's default signature.
  • Validate that E-mail contains the company Privacy policy.
  • Validate that E-mail contains the company Logo.
  • Validate that you support “Plain text” e-mails.
  • Validate the support in E-mail attachments.
  • Validate that you support “HTML” e-mails.
  • Try to send the E-mail to multiple users.
  • Validate that you record the emails that are sent from the application (Security and confidentiality reasons).
  • Validate that the user cannot send the email before validating the recipient E-mail address.
  • Validate that the Email “subject” field contains the relevant syntax (cannot remain empty!).
  • Validate that the Email “Sender” field contains the relevant syntax (cannot remain empty!).


Localization

  • Validate that amount values are displayed with the relevant currency symbols.
  • Numeric values should be aligned (usually to the right) and page text to the left (depending on the localization environment).
  • Validate that the user has the option to change the site language to support his localization attributes.

File mechanism (Import/Export)

Import Files

  • Validate the upload of large files (time to upload, how the file is transferred to the server, etc.).
  • Validate that the user specifies a name before starting the uploading process.
  • Validate that the file size is displayed with the original file dimensions.
  • Validate that you cannot import files without extensions.
  • Validate that the user can “Cancel” the upload process.
  • Try to use file names that contain special characters.
  • Validate that the user cannot upload files whose extension was changed manually (example: a text file renamed to “JPEG”).
  • Try to upload multiple files.
  • In any case of error, validate that the user received a notification that explains the cause of the failure.
  • Validate that the user can upload only the file extensions that are supported by the server.
  • Validate that the image quality displayed after the upload matches the original file.

Export Files

  • Validate that the exported data is the same as defined in the application.
  • The file should be exported with the relevant extension.
  • The file should be exported with the relevant name.
  • If the file is exported to an Excel file, validate the appropriate values (column names, timestamp, currency values, etc.).
  • Export files with massive content (MAX file size).
  • If the file name already exists in the system, the user should get a notification that confirms the export process.
  • If supported, validate that the user can export the file to multiple formats (PDF, CSV, Excel, etc.).

Site Multimedia/Graphics

  • Do not add any unnecessary multimedia to the site that will affect the user experience.
  • Validate that the site multimedia will not affect the page loading time.
  • Validate that the site multimedia will not reduce the download time.
  • Validate that each graphical object is meaningful to the user.
  • Validate that you use only the relevant media objects.
  • Introduce the site animation to the user.


Site Search

  • Display a search-results notification in any case where the search query returns ‘0’ hits.
  • Validate that the user receives the most relevant results at the top of the search results.
  • Provide search advice in any case where the search query returns ‘0’ hits.
  • Provide the ability to search a single page or the entire site.
  • Allow the user to search based on “Case Sensitive” terms.
  • The results count should be available in the results grid.
  • Validate that the site contains a search option per page.
  • Validate the navigation between the results pages.
  • Provide the option to sort the search results.
  • Provide predefined search queries.
  • Allow search filters.

User Experience

  • Do not ask the user to perform a complex operation to use a simple functionality.
  • Can a simple user use the system without prior experience?
  • Allow users to use profiles that will help them keep their work.
  • Validate that keyboard shortcuts work on the site.
  • Validate that the site content is organized clearly.
  • Validate that the site content is up to date.
  • Avoid opening any unnecessary windows.
  • The user should be able to identify the site's mandatory fields; make sure they are marked with an asterisk symbol (*).

Site Performance (UE aspects)

  • Identify the site configuration that provides the best performance.
  • Identify the site recovery mechanism in any case of system failure.
  • Inform users when the requested operations will take a long time.
  • The site loading time should be reasonable for the user's requests.
  • Identify the application behavior after a long period of time.
  • Warn the user when the site has any “Timeouts” states.
  • Test the site functionality with high load.
  • When the application enters processing/busy mode, validate that there is a corresponding progress bar/hourglass.
  • Reduce the download times.
  • Identify the site bottlenecks.

Site Functionality

  • The user should have the option to select only a single value in a “Drop-Down” object.
  • Validate that the user has the option to ‘Cancel’ an operation prior to the update process.
  • Validate that the user receives a ‘confirmation’ notification after each functional operation.
  • Make sure that the user will have the option to ‘Reset’ the changes he made.
  • Validate that each delete operation will raise a confirmation notification.
  • Validate that the site “Minimize” and “Maximize” time has no delay.
  • Validate that “drop down” values are defined in a valid sort order.
  • Validate that all input fields are tested with the boundary values.
  • Validate that you have a “Tooltip” for every field that needs it.
  • Validate numeric input fields with negative inputs.
  • Validate input fields with special characters.
  • Validate input fields with spaces.
  • Allow the user to Select/Deselect in the case of multiple selectable object values.
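For the boundary-value item above, the classic input set for a numeric field can be generated mechanically. A small sketch:

```python
def boundary_inputs(min_value, max_value):
    """Classic boundary-value analysis for a numeric field: each limit,
    its nearest valid neighbour, and one invalid value just outside
    the allowed range."""
    return [min_value - 1, min_value, min_value + 1,
            max_value - 1, max_value, max_value + 1]
```

Feeding every input field this set covers the boundaries where off-by-one validation bugs usually hide.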


Help

  • Does the system provide clear and informative “Help” menus?
  • Validate that the “Help” menu opens when the user presses ‘F1’.


Printing

  • Validate that the site text/fields are properly aligned so they are printed properly.
  • Provide the option to print in different formats.
  • Are pages printed without cutting the text?
  • Provide a printing option.


Site Navigation

  • The user should have the option to return to the “Home” page from every page.
  • The user should have the option to navigate between the site levels.
  • Validate that the “Tab” / “Shift + Tab” sequence works correctly.
  • Validate that the user receives a “Scrollbar” when the text does not fit into the text field, or when there are too many options in a “Drop-Down” object.
  • The user should get an indicator of his current location.
  • Validate that the user can navigate the site with the keyboard.
  • Is the site terminology understandable for the site users?
  • Major functionalities should be available on the homepage.
  • Make sure that you supply a navigation option per page.
  • Validate that you use the relevant menu types.

Compatibility Test Scenarios

Compatibility testing is used to validate how well your site works with the other systems (hardware/software) with which it must operate. In a web testing process, we need to validate that the site's functions, display, and behavior are maintained regardless of the environment that hosts it.
  • Validate the site with different network environments and architectures.
  • Validate that your site can work with different browser security profiles.
  • Validate the site against different versions of the same browser.
  • Validate the site behavior with different screen resolutions.
  • Design your site to be compatible with common browsers (Internet Explorer, Edge, Google Chrome, Safari, Firefox).
  • Validate that all site images, JavaScript code, fonts, and strings are displayed correctly in different environments.
  • Validate the application compatibility with different hardware devices (Macs, phones, tablets, etc.).
  • Design your site to be compatible with common operating systems (Windows, Android, UNIX etc.).
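The environment combinations above can be enumerated as a simple cross product, with each tuple becoming one configuration of the compatibility run. A sketch:

```python
from itertools import product

def compatibility_matrix(browsers, operating_systems, resolutions):
    """Enumerate every browser/OS/resolution combination that the
    checklist above asks to cover."""
    return list(product(browsers, operating_systems, resolutions))
```

In practice the full product is often pruned (e.g. pairwise coverage) when the matrix grows too large to run per build.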

Links/URL test scenarios

Email links – Links that open the default email application (client side) with the relevant sending address (To: the site address).

Internal links – Links that point to internal site pages/forms (Help, navigation links, home page, legend, About, Contact, etc.).

Broken links – Links that are not linked to any page (internal/external), usually caused by a spelling issue or a referenced location that is no longer available.

External links – Links that point to external websites not related to the site under test.
  • Validate that the baseline site is available after the user uses a hyperlink.
  • Make sure that you validate both the “Internal” and “External” links.
  • Validate that every hyperlink is marked so the user can find it (links should be differentiated from regular text on the same form).
  • Validate that you can open the link in a new tab/new window.
  • Validate that the site is configured with appropriate links.
  • Validate that a hyperlink to an “E-mail” address opens a corresponding e-mail application (when supported).
  • Check whether you have broken links.
  • Validate that when a user uses a hyperlink, the referenced location opens within a reasonable timeframe.
  • Validate that every part of the site that is “referenced” has a corresponding hyperlink.
  • Validate that the site links are highlighted when the user places the mouse pointer on them.
  • Validate that your tests cover the “navigation” links between the website's different pages.
  • Validate that a site link takes you to the location described in the link name.
  • Validate that all the site's download links point the user to the correct host location.
  • Validate that the user receives a valid notification when the link is broken.
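The link categories above can be separated mechanically before any checking starts; detecting broken links still requires an actual HTTP request, which is outside the scope of this sketch:

```python
from urllib.parse import urlparse

def classify_link(href, site_host):
    """Bucket a link as 'email', 'internal', or 'external' based on
    its scheme and host, matching the categories described above."""
    if href.startswith("mailto:"):
        return "email"
    host = urlparse(href).netloc
    if host in ("", site_host):
        return "internal"
    return "external"
```

A crawler would feed every discovered `href` through this classifier and then run the appropriate checks per bucket.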

Error handling 

  • Validate that the error notification is displayed at the correct position.
  • In a case of an error, validate that the relevant fields are highlighted.
  • Validate that all error messages are correct and informative.
