Monday, August 27, 2018

What do testers in agile teams do when there is nothing to test? | SupremeAgile


In the last eight or so years I have been involved in more than 50 agile transitions as an implementer, trainer or consultant. Each transition encounters its own challenges and complexities, which of course has led to many questions from the business and agile teams.

One question that keeps coming up in almost all agile projects is: "What should testers do during the sprint when there is nothing to test?" Think about the beginning of an iteration: developers start coding, but there is no build they can deliver to test during the first few days (or more) of the iteration.

Well, I think that in the current software industry, testers should know how to code, both to add more value to their teams and to stay relevant. (For more information, please read my previous article here.) Unfortunately, in many cases, testers lack the knowledge, the skills, or the willingness to make this transition.

In that case, the tester should prepare the tests at the user-story level, including the test strategy and the test methods to be used. Additionally, they should prepare the test environments, so that when there is code available to test, the test process can start immediately.

During the planning meeting at the start of each iteration, the team commits to the stories they intend to deliver at the end of the iteration. As part of this commitment, the team also estimates the work and starts breaking down stories into smaller tasks that they will use as part of their day-to-day activities.

Let’s focus on the task-breakdown process. During this process, the team usually tends to concentrate on programming tasks. However, there are many other non-programming tasks that the team needs to accomplish during the iteration. If the team spends time identifying and estimating these non-programming tasks during the sprint planning meeting, the tester can pick up many of them and contribute quite a lot, even if he does not have the necessary coding skills and there is no testing to do right now.

Let's see some examples of the non-programming tasks that often need to be done during the iteration:
  1. Break down user stories into tasks.
  2. Talk to external resources who can help the team (such as designers and tech experts).
  3. Help the team handle impediments.
  4. Write test scenarios.
  5. Investigate new automation tools.
  6. Set up test and development environments.
  7. Improve the development and testing processes.
  8. Improve the build-creation process.
  9. Document checklists, health checks, release notes, etc.

The tester as bottleneck

So far, we have reviewed the common situation in which the tester has nothing to test during part of the sprint. The opposite can happen too: the tester becomes the team's bottleneck. Think about the following scenario: a few days before the end of the sprint, the tester receives the stories to test, without a real chance to test everything. What do we do now? Well, we can remove this bottleneck by involving all team members in the testing process. The tester decides which tests to run himself and which tests to delegate to the rest of the team. That's what cross-functional teams are all about!

Test Driven Development (TDD)

If the team is using TDD as their preferred software development process, team members spend more time writing test code from day one. In this case, what value does the tester bring? Well, the tester should pair-program with the developers who are writing the test framework for their code.

If the tester has the coding skills he can do much more in TDD by helping the team write the tests, but if that's not the case, he should still pair-program with developers to suggest tests for better test coverage.
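To make the TDD cycle concrete, here is a minimal sketch of the "test first" idea. The `apply_discount` function and its rules are invented for illustration; in real TDD the test class below is written first, fails, and then the function is implemented (and refactored) until the suite passes.

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # In TDD these tests exist before apply_discount() does.
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run the suite explicitly so the example is self-contained.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
)
```

Even a tester without deep coding skills can pair on a suite like this, suggesting the edge cases (zero discount, invalid percent) that drive the next test.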

If the team is not using TDD, or if they've already written all the test cases that will be implemented by developers using automated tools, the tester should add value by simply doing whatever he can to help the team achieve the iteration goal, just as we’d expect to see from any other team member. 

Thanks to Tally Helfgott for proofreading :) 



Linkedin Profile

Tuesday, August 21, 2018

What can we automate in an agile environment? | Supreme Agile


Most types of testing benefit from automation, from basic unit tests all the way up to system tests. As you know, unit tests alone don’t supply enough coverage to catch system regression failures. But running a set of manual tests before every check-in, which could happen dozens of times a day, just isn't practical.

And this is a problem—a big one. Because when developers can't run tests by simply pushing a button, I can assure you that they will most likely not be motivated to run tests at all.

Let's talk about the kinds of tests that are most suitable for automation. Automation starts with an automated framework that allows developers to check-in their code often and receive quick feedback about its impact. So let’s start with this first. 

Continuous Integration (CI) systems

This is where we set the most important, albeit simple and logical, ground rule for automation: because agile software development moves faster than any other approach, agile teams should focus on automating any repetitive or tedious work involved in developing software.

And no candidate is better for this than automating build creation as part of an agile development process. Due to the fast nature of agile development, the team should be able to create numerous builds per day, especially to test newly added code.

CI systems are crucial in an agile environment. Continuous integration and build processes are the two systems that deliver the greatest ROI of any automation effort:  
  • CI allows for immediate feedback at the unit-test level (provided you have the relevant unit tests to support it).
  • It reduces many of the risks involved in adding new code without testing it.
  • It allows the team to create and deploy numerous (stable) builds and allows for multiple check-ins per day.
  • It improves communication because team members can receive a notification once the build is ready without needing to check the status.
  • CI systems speed up testing time by reducing the number of errors at the unit level before these errors become apparent in advanced phases of the testing process.

Based on the above, agile teams must implement continuous integration and build the framework as soon as they can. Although it requires continual maintenance, it’s the only option for agile teams to succeed and reduce technical debt in large complex projects.


Development and Test Environments

Agile teams need to test and develop in a fast-changing environment; as a result, there is little time for manually creating and maintaining work environments. Instead, agile teams can deploy their environments automatically, avoiding hours of manual work.

In addition, the team can use automation to handle many other areas related to their work environment:  
  • Creation and cleaning of the testing data and configuration.
  • Setup of specific topologies and architectures.
  • Simulating a specific scenario for reproducing a bug. 
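The first item in that list, creation and cleaning of testing data and configuration, can be sketched as a disposable environment fixture. The config keys and seed data below are illustrative, not from any real product:

```python
import json
import shutil
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def test_environment(config=None):
    """Create a disposable test workspace with seeded data, and always
    clean it up afterwards -- no manual environment preparation needed."""
    workspace = Path(tempfile.mkdtemp(prefix="test-env-"))
    try:
        # Seed configuration and test data identically on every run.
        (workspace / "config.json").write_text(
            json.dumps(config or {"db_url": "sqlite:///:memory:", "debug": True})
        )
        (workspace / "data").mkdir()
        (workspace / "data" / "users.json").write_text(
            json.dumps([{"id": 1, "name": "test-user"}])
        )
        yield workspace
    finally:
        # Cleanup is automatic, even when a test fails mid-run.
        shutil.rmtree(workspace)

# Usage: each test gets a fresh, identical environment.
with test_environment() as env:
    users = json.loads((env / "data" / "users.json").read_text())
```

Because setup and teardown live in one place, every test run starts from a known state, which is exactly what manual environment preparation struggles to guarantee.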

Testing of the User Interface (UI)

The agile development process embraces the idea that the team must deliver working, incremental functionality at the end of each iteration; as a result, the team will usually execute basic automated regression tests at the GUI level.

As I mentioned earlier, I'm a great believer in automated testing, but in some cases, we really need to think about whether we want to use it, especially when we want to test the user interface of an application whose GUI changes frequently.

To overcome the challenges of GUI testing, there is great importance in selecting the most suitable tool for the job, one that’s easy to maintain and flexible enough to absorb changes. This is probably the most important key to successful GUI automation.

Testing all layers of the application

I'm a great believer in automated solutions that can reduce manual testing efforts to the bare minimum necessary. It starts at the first layer of the application with unit tests, which we all agree are crucial: problems found at this layer never get the chance to become bigger problems later.

Next, we have the second layer of component tests. Programmers test a component as a whole module by using a set of inputs and checking the outputs the module produces. The third and, for me, the most crucial part of the testing strategy is integration tests, where modules are tested together as one suite. And if that is not enough, why not test the whole system by running the fourth layer of system tests, which exercise the entire application as a whole?
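The layered strategy above can be shown in miniature. The functions and module names here are invented for illustration: the same tiny piece of logic is first tested in isolation (unit level) and then together with a collaborating "storage" module (component/integration level).

```python
def normalize_email(raw):
    """Unit under test: the lowest layer, tested in isolation."""
    return raw.strip().lower()

def register_user(raw_email, store):
    """Component that integrates normalize_email with a storage module."""
    email = normalize_email(raw_email)
    if email in store:
        raise ValueError("duplicate user")
    store.add(email)
    return email

# Layer 1 -- unit test: one function, no collaborators.
assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

# Layers 2-3 -- component/integration test: modules working together.
store = set()
assert register_user(" Bob@example.com", store) == "bob@example.com"
assert "bob@example.com" in store
```

System tests would then drive the whole application the same way, but through its external interface rather than through individual functions.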


Performance, load, and stress tests

If you’ve ever been involved in a testing process that included one of these test types, you probably know that it’s almost impossible, and certainly ineffective, to run them using manual testing methods. Furthermore, there is a wide range of tests that you simply cannot run without automation tools.

In addition, using manual tests will not provide the accurate test results we can achieve by using dedicated automation tools that can simulate the exact scenario without any human interference that may affect the testing process and therefore the results.
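A minimal sketch shows why this kind of testing demands automation: firing many concurrent requests and aggregating their response times is exactly the work no human can do by hand. The service call here is simulated with a short sleep; in a real load test it would be a request to the system under test.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    """Stand-in for a real request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service processing time
    return time.perf_counter() - start

def run_load_test(concurrent_users=20, requests_per_user=5):
    """Fire many concurrent 'requests' and aggregate response times."""
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(lambda _: call_service(), range(total)))
    return {
        "requests": total,
        "avg_sec": statistics.mean(timings),
        "p95_sec": sorted(timings)[int(total * 0.95) - 1],
    }

if __name__ == "__main__":
    print(run_load_test())
```

Dedicated load tools do the same thing at far greater scale, with precise, repeatable results that manual execution could never provide.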

Thanks to Tally Helfgott for proofreading :) 



Saturday, August 11, 2018

What shouldn't we automate in an agile environment? | Supreme Agile

If you're familiar with my blog you already know that I'm a great believer in automation frameworks that allow the team to create a more productive, efficient and reliable process of coding and testing. However, some aspects of the testing process still need human eyes, common sense, and intelligence.


Usability Testing

Usability testing is very different from other test types that determine the quality of the software. As such, it cannot be automated, because it requires someone to actually work with the software to determine how it is experienced and where the gaps are in the user experience.


GUI Testing (Is it worth the ROI?)

GUI testing is one of the most difficult areas to automate. I’ve seen too many organizations that invested thousands of human hours automating the GUI of their products, only to find in the end that it was a waste of time that didn't deliver the expected ROI.

Some GUI tests can be used just to make sure there are no unexpected changes in the GUI, but you should ask yourself whether it’s worth the costs and investment that can go instead to improving other quality issues that will provide a better ROI and reduce the risks in that area.

Tests that are not worth the investment

I was once involved in an automation project where the test team automated a thousand tests that provided (at least on paper) great test coverage. So what was wrong? They automated almost every available test scenario without really thinking about the ROI.

The team invested weeks on automating tests that were marked as low risk, tests that would never fail, and tests whose failure had a very low chance of impacting the software. The entire automation process was based on the spirit of "let's automate everything" instead of asking the simple question of what is the ROI that this automation project provides.

In some cases, there are tests that are written without real thought as to whether they are important or not, and once the automation project starts, the team will automate these tests just because they never want to run them again (because they know that there is 0% chance that it will actually make a difference). 


Tests that need to be executed only once

The main goal of automated testing is to allow the team to focus on the things that are really important in the software development lifecycle. Automating test scenarios that will run only once is not worth the time that the team needs to invest in the design, creation, and execution of these tests.

Exploratory Testing

In my opinion, exploratory testing is the best and most efficient method that agile teams use in any testing process. Exploratory testing can be used for learning purposes (you learn more about the software when actually testing it) or to provide a fast way to evaluate the overall quality of the software. However, when it comes to real testing effort, a skilled tester is required to design and execute tests.

Exploratory testing should be done by humans and not by automated scripts, because automated scripts cannot take the new information a tester generates during an exploratory session and use it to improve future testing and development processes.

In addition, although exploratory testing is a great approach, there is still a real need for automated tests, which free the team to focus on their exploratory sessions without worrying about regressions.

Thanks to Tally Helfgott for proofreading :) 



Thursday, August 9, 2018

Cloud testing: the right way to do it | Supreme Agile

In this article, I will review some important insights on cloud computing. To get the most from it, we first need to understand the concept of cloud computing and how the cloud differs from any other infrastructure.

The basics of cloud computing

Cloud computing is the result of one of the biggest and most important revolutions in the software industry: virtualization, a technology that has changed how organizations around the world manage their computing resources.

This technology creates a completely new methodology for sharing computer resources across multiple systems in order to reduce costs and deployment time, increase scalability, and make it easier for IT departments to manage their infrastructure.

Virtualization becomes even more important in its evolved form: cloud computing. Cloud computing is an internet-based platform that uses virtualization to deliver computing services (hardware, software, and any other computer-related services) as a complete, on-demand solution over the internet.

To summarize, these are the main points to keep in mind about what cloud computing is:
  • Cloud computing is a general term for the delivery of software and services over the internet.
  • It enables companies and individuals to consume on-demand resources, such as virtual machines, storage, and applications.
  • It allows access to services without the user needing technical knowledge of, or control over, the supporting infrastructure.
The cloud structure is based on three types of delivery models (aka components) that provide the “as a service” solution: 


Infrastructure as a service (IaaS)


This is the fundamental layer of the cloud solution. It focuses on physical resources such as computing services, networking, and data storage space. IaaS resources are usually billed on-demand based on customer usage.

Examples:
  • Microsoft Azure
  • Google Compute Engine (GCE)
  • Amazon Web Services (AWS)

Platform as a service (PaaS)

This is the second layer of the cloud solution, which provides organizations with a platform with the main advantage of removing their need to manage the underlying infrastructure. An organization that uses this layer will not need to worry about resource procurement, capacity planning (you can simply set it to grow dynamically as long as you have the budget) and the maintenance of both hardware and software.

Examples:
  • Google App Engine (GAE)
  • Apprenda
  • AWS Elastic Beanstalk

Software as a service (SaaS)

The top and most cost-effective layer of the cloud platform provides a complete product, often referred to as an end-user application, run and managed by the service provider. In this layer, applications are available to end users on demand via the internet. Using SaaS, customers can access their applications without installing the software on a personal device (workstation/server). The entire processing effort is conducted in the vendor’s datacenter.

Examples:
  • Salesforce
  • Google Apps
  • Gmail

Types of cloud

There are three types of available cloud formations: public, private and hybrid.

| Characteristic | Public | Private | Hybrid |
|---|---|---|---|
| Description | Services are available to everyone; resources are allocated and consumed dynamically per tenant request | Managed under the security restrictions of a particular organization and available to that customer only | A mixture of public and private clouds; the mix depends on which services the organization exposes to everyone and which to specific users |
| Ownership | Owned and operated by a service provider | Owned and operated by the organization's IT | Allows IT organizations to become brokers of services |
| Scalability | Very high | Limited | Very high |
| Security | Depends on the security measures of the service provider | Most secure; all storage is on-premises | Very secure; the integration option adds an additional layer of security |
| Connectivity | Over the internet | Over the internet, fiber, and private networks | A combination of both |
| Technical difficulties | Technical knowledge is required | You get the basic setup, but knowledge of the subject is still required | Customers do not need to worry about technicalities; the provider handles everything |

Cloud concerns

Cloud is a great technology that has already started to change the industry and the way companies manage their data. However, the world is not perfect and there are still some concerns that we should take into consideration:

Interoperability – A universal set of standards and/or interfaces have not yet been defined, resulting in a significant risk of vendor lock-in.

Latency – All access to the cloud is done via the internet, introducing latency into every communication between the user and the provider.

Regulations – There are concerns in the cloud computing community over jurisdiction, data protection, fair information practices, and international data transfer—mainly for organizations that manage sensitive data.    
    
Reliability – Many existing cloud infrastructures leverage commodity hardware that is known to fail unexpectedly.

Resource control – The amount of control that the user has over the cloud provider and its resources varies greatly between providers.

Security – The main concern is data privacy: users do not have control or knowledge of where their data is being stored.

Cloud testing 

Cloud testing refers to testing of cloud resources (both hardware and software) that are available on demand. Cloud testing must be conducted to ensure that the product under test meets both its functional and non-functional requirements.


SaaS Software Development Lifecycle (SSDLC)

Requirements - Gathering and prioritizing business needs/stories by the customer/PO for the product, as well as capturing them in a central location.

Design - Building a technical blueprint of how the proposed system/feature/model will work. It includes elements such as system features, models, technical architecture, integration points, interfaces, UX, etc.

Development - The physical building and coding of the product’s features/model, including the database, based on the design and requirements.

Testing - Verifying the feature/component of a product works as expected and meets all of the business requirements. It also includes writing test conditions and executing test scenarios.

Go-live & maintenance - Implementing the feature/component of the product in the production environment as well as the day to day maintenance of the application (including updates).

Types of cloud testing 

There are four different types of cloud-based testing. Each type has its own objectives.

Testing SaaS in a cloud (testing an application) - This type of testing is used to validate the quality of the application in the cloud. Functional and non-functional requirements of the particular application are verified.

Testing of a cloud - The cloud is tested as a whole entity and based on its functionality. This type of testing is used to validate the quality of the cloud from an external (end users) point of view (its capabilities and service features).

Testing inside a cloud (infrastructure testing) - This type of testing is carried out by the cloud vendor and checks the quality of a cloud from an internal view or feature, based on the internal infrastructure and capabilities of the cloud (e.g. automatic capabilities, security, management, monitoring).

Testing across clouds (services testing) - Testing an application is done over various clouds (private, public, and hybrid). It is based on application service requirements.

Cloud testing environments

There are two types of cloud testing environments that development teams can use for testing activities:
  1. A test lab that simulates a cloud-based environment, where the application is deployed and tested.
  2. A hybrid, public or private environment, where the application is deployed and tested as it will be available for the customers.  

Challenges of cloud testing

Quality control - How do we maintain quality products in an area that demands fast, high turnover of deliverables with no bugs? This is the world of cloud, which can be very complex for those who do not invest the time to learn it.

Data security and privacy - One of the biggest advantages of cloud infrastructure is multi-tenancy support. Although multi-tenancy support is great, there is still a major challenge to ensure that the customer’s data is not compromised, security standards are applied and the privacy-related regulations are enforced. 

Upgrades with a short notice period - Cloud providers give existing customers a very short notice prior to upgrades. This is a big problem when manually validating changes to your SaaS application and is another major consideration when thinking about conducting manual testing in cloud projects.

Data migration - The process of moving customer data from one cloud provider to another. During this process, the risk increases dramatically, because both providers involved must ensure that the data is migrated without losing anything critical.

Upgrade testing - Cloud testing’s biggest challenge is to ensure that live upgrades do not affect the existing connected cloud users. Think about a multi-tenant environment in which many customers share the same cloud, and the application is upgraded for one specific customer. Sounds simple? Unfortunately, in some cases, the upgrade process may affect the user experience of the other tenants due to latency, networking issues, and their shared resources.

Bugs - Bugs are no longer isolated; once seen they can be seen by all and exploited.

Frequent releases - Frequent releases provide less time to run tests, less time for regressions and as a result more unexpected defects and higher risks.

Cloud testing vs. conventional testing


| Test parameter | Conventional testing | Cloud testing |
|---|---|---|
| Cost | High costs due to major investment in hardware and software | Lower costs; payment per use of cloud services |
| Test environments | Test labs (pre-fixed and configured test environments) | An open public test environment with adjustable resources |
| Impact of bugs | Bugs are isolated, with low visibility (per customer) | Each bug is a bug for everyone; fixing a problem for one customer fixes it for all |
| Security tests | Done based on server type and the organization's policy | Done in the vendor’s cloud-based configuration |
| Performance, load, and scalability testing | Performed on a fixed, isolated test environment | Performed on both real-time and virtual online test data |
| Time to delivery | Internal software releases once every 1-12 months | Internal software releases multiple times a week (sometimes even more) |
| Monitoring and support | Reactive software monitoring (downtime reported to customers in hours or days) | Proactive software health monitoring (downtime reported to customers in seconds; preventive actions taken per defined procedures) |

The main types of testing performed in a cloud environment

During cloud testing, teams must validate that their tests cover both aspects of functional and non-functional testing. Let us review some of the more common test types that are part of a cloud testing project:

Disaster recovery (DR) testing – The cloud as a service must be available to customers at all times; therefore, it’s important that a replicated site is available in case of a critical failure. While executing DR tests, the team must ensure that the app can recover from a massive failure (restore to the last available point, no loss of data, minimum downtime, etc.).

Availability testing – This type of test is usually owned by the cloud vendor, who ensures that the cloud is available to customers at all times, without downtime.

Capacity testing – This verifies that current and future hardware supports expected usage as determined by the specification of the product (such as adding or removing resources to or from a customer).

Multi-tenancy testing – This type of testing is very important in any cloud-testing strategy. During these tests, the cloud services are tested by multiple users from different tenants (each service can serve multiple customers). Testing must be performed to guarantee there are no security incidents, such as unauthorized access or data leaks, and no degradation in performance once multiple customers access the same service.

Functional testing - This tests the app delivers the required functionality.

Reliability testing – To ensure that the app is capable of performing failure-free for a specific period of time in a specific environment.  

Security testing – As discussed earlier, the cloud environment provides access to multiple customers who can use the same services. As a result, we must ensure that there is no unauthorized access to the data within the SaaS application, no privacy leaks, and that customer data integrity is kept under strict security gates.

Common test guidelines:
  • Validate that data integrity is not compromised by unauthorized access
  • Validate that only the authorized customer can access the data
  • Validate that data migration is made through secured (encrypted) channels
  • Validate that all user data is removed in case of dropping the service
  • Validate that only the relevant ports are opened
  • Validate that there is a clear separation between tenants
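The guidelines above can be turned into automated checks. Here is a toy sketch against an invented in-memory multi-tenant store (not a real cloud API), exercising tenant separation and data removal on service drop:

```python
class TenantStore:
    """Toy multi-tenant data store -- invented for illustration only."""
    def __init__(self):
        self._data = {}

    def put(self, tenant, key, value):
        self._data.setdefault(tenant, {})[key] = value

    def get(self, tenant, key):
        # Tenant separation: a tenant can only ever reach its own bucket.
        return self._data.get(tenant, {}).get(key)

    def drop_tenant(self, tenant):
        # "All user data is removed in case of dropping the service."
        self._data.pop(tenant, None)

# Automated isolation checks mirroring the guidelines above:
store = TenantStore()
store.put("tenant-a", "secret", "a-data")
store.put("tenant-b", "secret", "b-data")

# Only the owning tenant sees its data.
assert store.get("tenant-a", "secret") == "a-data"
assert store.get("tenant-b", "secret") == "b-data"

# Dropping a tenant removes its data but leaves other tenants intact.
store.drop_tenant("tenant-a")
assert store.get("tenant-a", "secret") is None
assert store.get("tenant-b", "secret") == "b-data"
```

Against a real cloud service, the same assertions would run through the service's API with credentials from two different tenants.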

Scalability testing – Cloud services are relevant to both small and large organizations; as a result, there must be tests to ensure that the business can scale up or down its resources based on the customer’s need. 

Load and stress testing – To identify the stability of the system beyond its operational capacity to see how it reacts to different loads. 

Live upgrade testing – To ensure that we can deploy new versions on the cloud without affecting customers’ user experience.

Performance testing – To ensure that the SaaS application can manage different traffic loads that depend on the number of customer requests. The main factors that we want to validate in this type of tests are network latency, the response time of the application and the workload balancing (NLB) in case of massive use. 

Common test guidelines:
  • Response times should not be affected due to the actions of other tenants
  • Failures in one tenant should not affect other tenant performance  
  • Scaling process should not cause any degradation in performance factors


Thanks to Tally Helfgott for proofreading :) 



Wednesday, August 1, 2018

Barriers to incorporating automation in agile teams | Supreme Agile

As I’ve written numerous times, agile teams must automate many aspects of their work, such as testing and build creation, as well as any other process whose automation would save time for the team.

However, despite it being clear to me that it’s very logical for agile teams to use automation for any product backlog item (PBI), I’ve come to the conclusion that it's simply not as obvious as I thought. As a result, I decided to write this article about the main reasons agile teams fail when trying to implement an automated solution. 



Developers’ attitudes towards automation

To understand developers’ attitudes toward automation solutions, we need to look at how they see automation in traditional environments, where separate QA teams do all the testing work without involving developers in the testing process. The direct result is that developers become less involved in testing (why bother testing if there is a dedicated QA team to do the work for us?).

In addition, the waterfall development process has separate phases for development and testing, which makes testing even more remote for developers, who do not need to do much after their phase is done and have already moved on to the next project.

In an agile environment, there is no separate QA department to provide a safety net for developers. The agile development team must ensure the quality of the product, and developers must become involved in all aspects of the testing process, from the unit-test level through system testing.


Developers must change their attitude, mindset, and culture to allow the team to succeed in an agile working environment. Developers who fail to do this will impede the team in being able to deliver on their commitments.

Unrealistic maintenance costs

The main goal of automated frameworks is to boost the team’s ROI while increasing the efficiency of the test execution, build creation and overall process. So if we think about it, one of the main goals of using automated frameworks is to free engineers from performing manual work so they can focus on other aspects of the project.

But what happens when the team selects the wrong automated framework and uses poor test design that consumes most of their time in maintenance and stabilization of the written tests? This leads to an unrealistic situation where the team spends hours and days of manual work on frameworks that were supposed to free up their time.

A classic example of this scenario that I often see is agile teams that do not want to spend time on user-interface (UI) testing. The first thing they do is to develop or purchase a third-party vendor capture tool to record their tests, expecting it to solve all their automation problems. Well, this will not work. The creation of thousands of lines of UI test scripts that (usually) don’t follow code practices will create a situation where, over time, no one knows what they are supposed to do or why they were written at all. This leads to unrealistic maintenance time on tests that are maybe no longer relevant.

To overcome this barrier, the team must choose the right framework for the automation they want to achieve. Someone who is capable of seeing the big picture must invest time in great test design and the future roadmap, and the team must use the relevant code practices that will make the code readable and easy to maintain over time.
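One concrete example of "readable and easy to maintain" UI test code is the Page Object pattern: all locators for a screen live in one class, so a GUI change means editing one place instead of thousands of recorded script lines. The `FakeDriver` below is a stand-in for a real browser driver (such as Selenium), so the sketch stays self-contained; the locators and page are invented:

```python
class FakeDriver:
    """Minimal stand-in for a browser driver, for illustration only."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def type_into(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        self.clicked.append(locator)

class LoginPage:
    # All locators for this page in one place -- the single point of change
    # when the GUI is redesigned.
    USERNAME = "input#username"
    PASSWORD = "input#password"
    SUBMIT = "button#login"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

# A test now reads as intent, not as a brittle sequence of raw locators.
driver = FakeDriver()
LoginPage(driver).login("tester", "s3cret")
assert driver.clicked == ["button#login"]
```

When the login form's markup changes, only the three locator constants need updating; every test that uses `LoginPage` keeps working unchanged.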


Previous bad experiences

One very common barrier among engineers is actually quite basic: a previous bad experience with automation projects that didn't pay off.

There are many challenges in automation projects, and therefore many opportunities for teams to fail: poor design, unstable automated frameworks, and many others. In this case, the organization must analyze the reasons for the failures and ensure they will not recur in future projects.

Legacy code

This is a simple fact: most engineers prefer to write their own code rather than maintain legacy code written by other programmers. With automated tests, we add another layer on top of that code, which makes it even harder for a developer to succeed with the project.

The first barrier is the code itself. If a developer has to work with code that he did not write or help design, it may be hard for him to understand the code and to know which tests should be created to provide good coverage.

The second barrier is legacy code that was not designed for testability, which makes it almost impossible for an engineer to create automated test scripts without first refactoring the legacy code.
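What "refactoring for testability" typically means in practice is introducing a seam: a point where a test can substitute a controlled value for a hard-wired dependency. Here is a minimal sketch in Python (the function names and the clock dependency are invented for illustration, not taken from the original):

```python
# Sketch of a common testability refactoring: legacy code that reaches
# into a hard-wired dependency (here, the system clock) is refactored
# to accept that dependency as a parameter, creating a "seam" where a
# test can inject a known value.

import datetime

def is_weekend_legacy():
    # Untestable deterministically: the result depends on when you run it.
    return datetime.date.today().weekday() >= 5

def is_weekend(today=None):
    # Refactored: the date is injected, defaulting to the real clock
    # in production while letting tests pin it to a fixed value.
    today = today or datetime.date.today()
    return today.weekday() >= 5

# A test can now assert against known dates.
assert is_weekend(datetime.date(2018, 8, 25)) is True   # a Saturday
assert is_weekend(datetime.date(2018, 8, 27)) is False  # a Monday
```

The same move applies to databases, file systems, and network calls: pass the collaborator in rather than constructing it inside the function, and the legacy code becomes testable without changing its behavior.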

Letting testers do the work

If I had to select the one reason that leads to failure again and again, it would probably be letting testers write the automated tests when they do not have the necessary coding knowledge and experience.

This is a classic example of why all members of an agile team should work as a single unit, without separating programmers from testers. Otherwise, the entire project can fail, and the testers are left with the message that if they cannot handle basic test scripts, they have nothing to offer in the agile world.

In the opposite direction, a strong agile team will not let testers do this job without support from the rest of the team. The team must understand that any automation project is the team's responsibility, not the testers' alone just because they owned quality in the old traditional environment.

Job security

Agile teams often contain testers who were added to the team as a legacy of the previous environment. These testers frequently lack coding experience and therefore focus on manual testing, which is less suitable in an agile environment. They may reject the idea of automating the testing process for fear that it will make them less relevant in the future.


Fear of failure

Due to the challenges inherent in automation projects, from setting the goals to the implementation itself, such projects can be scary to engineers. As I have learned over the years, programmers may have the knowledge to write great production code, but once they focus their efforts on writing automated tests, they face many logical and technical issues that do not arise in their day-to-day work.


Lack of support and knowledge 

I think that every engineer who understands automation's benefits will want to use it to simplify his work. But what happens when that engineer has neither the knowledge nor the time to invest in creating such a framework? How can he free up time, in an already stressed environment, to learn new tools such as automated frameworks and design practices such as test-driven development (TDD) and refactoring?

To allow the team to gain the knowledge it needs to master this area, the organization must step in and provide the necessary coaching. An external expert is a great option for helping the team get started and can save a great deal of time. Coaching is needed in both the theoretical and the technical aspects of automation practices. Most important of all is to free the team's time so they can truly learn and adopt this new approach in their day-to-day activities.
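For readers new to TDD, mentioned above as one of the practices worth coaching: the cycle is to write a failing test first, then just enough code to make it pass, then refactor. A minimal illustration in Python follows; the FizzBuzz-style requirement is invented purely for this sketch and is not from the original article.

```python
# Minimal illustration of the TDD cycle: the test below was (conceptually)
# written first and failed ("red"); the function is the simplest
# implementation that makes it pass ("green"). Refactoring would follow.

import unittest

def fizzbuzz(n):
    # Just enough logic to satisfy the test cases, nothing more.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    # In TDD, this class exists before fizzbuzz() does.
    def test_word_rules(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

# Run the suite without exiting the interpreter.
unittest.main(argv=["tdd-sketch"], exit=False)
```

The point for a coached team is the habit, not the example: each new behavior starts life as a small failing test, which is exactly the discipline an external expert can demonstrate on the team's own code.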

An investment that will not pay off right away

The team will need to invest time to plan, design, and create the automated solutions that will reduce manual work in the long term. Although the benefits are clear, we need to remember that even with the entire agile team working on the automated solution, it still requires a big up-front investment that reduces the team's ability to deliver functional PBIs in the first few sprints. This can be a big problem in an agile environment.

There is a huge psychological barrier that agile teams and the organization run into when they understand the investment they need to make at the beginning of an automation project, an investment that will not pay off right away. Both the team and the organization must know that it takes time to decide which processes and tests should be (and can be) automated, as well as which frameworks to use.

As I have seen in almost every automation project, the team must show senior management how the automated solution will increase the organization's ROI. The increase will not appear in the first few iterations, and without understanding the benefits, there is no chance the organization will allow the team to invest the time it needs to succeed with its automation challenges.
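One concrete way to make that case to management is a simple break-even estimate: compare the up-front investment against the manual effort the automation replaces each sprint. The sketch below uses entirely invented numbers; the point is the shape of the argument, not the figures.

```python
# Hypothetical break-even sketch (all numbers are invented for
# illustration): estimate after how many sprints an automation
# investment starts paying for itself.

build_cost_hours = 300        # up-front framework + initial test creation
maintenance_per_sprint = 10   # ongoing upkeep of the automated suite
manual_saved_per_sprint = 60  # manual regression hours avoided per sprint

net_saving_per_sprint = manual_saved_per_sprint - maintenance_per_sprint

# Ceiling division: the first whole sprint in which savings cover the cost.
breakeven_sprints = -(-build_cost_hours // net_saving_per_sprint)

print(breakeven_sprints)  # → 6
```

A one-line calculation like this, with the team's real numbers plugged in, is often enough to show management why the first few iterations will deliver fewer functional PBIs and when that trade-off turns positive.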

Thanks to Tally Helfgott for proofreading :) 

