Sunday, July 16, 2017

Why automated end-to-end tests may not solve your quality issues

Our customer asked us to help them with an approach for end-to-end testing. The problems the teams were facing mainly came from errors detected in the integration phase/environment where the end-to-end tests were executed. During the discussions with the teams we noticed that the teams were organized around loosely coupled software components. Each team worked on individual components that were versioned, tested and deployed independently. They had development or test environments where system tests were executed as part of the build or release process. These system tests ran against stubs of the dependent components and passed.

The teams received requirements per component that did not specify the entire use case; most of them were of the form “if a message in format XYZ is given to the component, then do something and send the output to another component in the system in a different format.” Each team developed these user stories and delivered them with tests to ensure that the component worked correctly, given data in the expected format.
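
To make that concrete, below is a minimal sketch of such a component-level test, run against a stub of the downstream component. All names (OrderTransformer, StubPublisher) and the message format are hypothetical and only illustrate the pattern, not the customer's actual components.

class StubPublisher:
    """Stands in for the real downstream component during the build."""
    def __init__(self):
        self.published = []

    def publish(self, message):
        self.published.append(message)

class OrderTransformer:
    """The component under test: accepts format XYZ, emits a different format."""
    def __init__(self, publisher):
        self.publisher = publisher

    def handle(self, xyz_message):
        # "do something" with the incoming message and forward the result
        transformed = {
            "orderId": xyz_message["id"],
            "total": xyz_message["amount"] * xyz_message["quantity"],
        }
        self.publisher.publish(transformed)

def test_transforms_xyz_message_and_forwards_it():
    stub = StubPublisher()
    component = OrderTransformer(stub)

    component.handle({"id": "42", "amount": 10.0, "quantity": 3})

    # Proves the component behaves correctly for well-formed input, but says
    # nothing about what the real downstream component actually expects.
    assert stub.published == [{"orderId": "42", "total": 30.0}]

Such a test passes even when the real downstream component has changed its schema, which is exactly the gap described next.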

But there was no guarantee that the whole system was working as expected. Most of the time there were integration issues, caused by wrong versions of components in the integration environment, schema mismatches, and so on. The component teams worked in isolation and failed to tell the other teams about their recent releases. This created the need for a separate team to test the system as a whole. The responsibility of this team was to create end-to-end tests to check whether the whole system worked properly with all components deployed.
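
An end-to-end test written by such a team might look roughly like the sketch below: it drives the fully deployed system through its public entry point and asserts on the final observable result. The URL and payload are hypothetical and only illustrate the idea.

import requests

BASE_URL = "https://integration.example.com"  # hypothetical integration environment

def test_order_flows_through_all_deployed_components():
    # Submit an order at the boundary of the system...
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"id": "42", "amount": 10.0, "quantity": 3},
        timeout=10,
    )
    assert response.status_code == 201

    # ...and verify the output produced by the last component in the chain.
    invoice = requests.get(f"{BASE_URL}/invoices/42", timeout=10).json()
    assert invoice["total"] == 30.0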


The end-to-end tests now help identify integration issues before deploying to a higher environment. But the problem was not completely solved!

The issues detected by the integration test team were reported back to the component teams, where they were normally not picked up in the current sprint but put on the backlog for later iterations. This introduced waiting periods and tension between the siloed teams. Developers were working on parts of the complete application without being aware of how the complete system functioned, and they were not able to test the system as a whole.

End-to-end testing was not a solution to this situation, but just a workaround that kept the problem from reaching higher environments. A better solution would be to organize the teams into feature teams, where each team can shift its focus to value delivery and work on business features rather than technical components. The teams then have acceptance criteria that map to a customer-centric use case instead of focusing on a part of it. Compared to component teams, feature teams can make better design decisions, and thus write better tests, because of their knowledge of the whole application, technology stack and use cases. The automated tests delivered as part of the user stories now cover the whole feature and not only part of the application.
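
As a sketch of the difference, a feature team's acceptance test can exercise the whole use case through the team's own code, faking only truly external systems. The names below (CheckoutFeature, FakePaymentGateway) are hypothetical.

class FakePaymentGateway:
    """Stands in for an external payment provider."""
    def charge(self, order_id, amount):
        return {"order_id": order_id, "amount": amount, "status": "CHARGED"}

class CheckoutFeature:
    """The feature under test: pricing, payment and invoicing as one use case."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, order_id, amount, quantity):
        total = amount * quantity
        payment = self.payment_gateway.charge(order_id, total)
        status = "PAID" if payment["status"] == "CHARGED" else "OPEN"
        return {"order_id": order_id, "total": total, "status": status}

def test_customer_can_place_an_order_and_receive_a_paid_invoice():
    checkout = CheckoutFeature(payment_gateway=FakePaymentGateway())

    invoice = checkout.place_order(order_id="42", amount=10.0, quantity=3)

    # The acceptance criterion covers the customer-centric use case,
    # not a single component's message format.
    assert invoice == {"order_id": "42", "total": 30.0, "status": "PAID"}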


The feature team as a whole now has the skills required to implement the entire customer-centric feature. The integration test team members are also part of the feature teams and help ensure that the right functional tests are captured as part of the development process.
