How our approach to testing automation drove efficiencies for a big four client
Discover how our testing automation solution helped our big four client save time and resources while improving their overall testing process.
In this article, Gowri Thota, an experienced QA tester at Headforwards, shares his thoughts on successful testing regimes.
The most important factors in successful testing are automation and close coordination between the developers and the testing team.
In one project I worked on, the team hadn’t previously automated any of their tests. Building automated testing into the development process, using Microsoft’s EasyRepro for Dynamics 365, helped to uncover bugs before they reached UAT (user acceptance testing).
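EasyRepro itself is a .NET library built on top of Selenium, so the sketch below is only an illustrative analogue in Python: a minimal automated UI check of the kind that can run against each build before UAT. The URL, element locators and expected values are all hypothetical placeholders, not details from the client project.

```python
# Illustrative analogue only: EasyRepro is a .NET library, so this sketch uses
# plain Selenium in Python to show the same idea of automating a UI check.
# The URL, element IDs and expected text are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_contact_form_saves_record():
    driver = webdriver.Chrome()
    try:
        # Open the (hypothetical) app under test
        driver.get("https://example.crm.test/contacts/new")

        # Fill in the mandatory fields and save
        driver.find_element(By.ID, "firstname").send_keys("Test")
        driver.find_element(By.ID, "lastname").send_keys("Contact")
        driver.find_element(By.ID, "save-button").click()

        # The record page should confirm the save
        confirmation = driver.find_element(By.ID, "status-message").text
        assert "saved" in confirmation.lower()
    finally:
        driver.quit()
```

Checks like this, run on every build, are what catch regressions before the code ever reaches UAT.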
There is a huge range of automation tools available, often applied via browser extensions, including tools for testing accessibility for users with visual impairments, for example. The right tools vary depending on the project and its development environment.
Whatever the tools, automation will only find issues that were in scope when the test cases were written. Some functionality is also not suited to automation, for example where a lot of human intervention is needed.
So, in another project, it was the coordination and interaction between the testers and developers that made all the difference. Here we were able to enforce strict locking of the build at certain stages, preventing other developers from making changes, other than bug fixes, to the build being tested. Again, this was effective in reducing the bugs encountered in UAT.
In the latter project we followed the TDD (test-driven development) method, writing automated tests at the start of the sprint, based on the required functionality.
This saves you time as a tester because you write the test cases while the developers are coding rather than waiting for the code to be implemented. The automated tests are initially run against the unchanged code and are expected to fail. This means you get to understand the failure response as well as the behaviour expected once the code has been updated.
Then when the coding is done, and the build is locked, you have the automated tests ready to be run, and this time they should pass.
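Here is a minimal sketch of that red-then-green cycle, in pytest style. The discount rule, function name and figures are hypothetical stand-ins for "new functionality required this sprint", not taken from the project.

```python
# Minimal TDD sketch (pytest style). The discount rule and function name are
# hypothetical examples of new functionality planned for the sprint.

# Test written at the start of the sprint, before the feature exists.
def test_bulk_orders_get_ten_percent_discount():
    # Run against the unchanged code this fails (red), which shows us the
    # failure response; once the feature is implemented it should pass (green).
    assert calculate_order_total(unit_price=10.0, quantity=20) == 180.0

# Implementation added later in the sprint to make the test pass.
def calculate_order_total(unit_price: float, quantity: int) -> float:
    total = unit_price * quantity
    if quantity >= 10:          # bulk orders get a 10% discount
        total *= 0.9
    return total
```

The same test runs unchanged at both ends of the sprint; only its result should change, from failing against the locked, unmodified build to passing against the completed one.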
This technique enabled us to build a good set of unit tests and comprehensive test scenario coverage.
With the automated tests already written, you have time to do some exploratory testing of the updated code.
This is testing beyond the acceptance criteria, effectively trying to break the system. It’s a manual test process, based on your knowledge and experience rather than test cases or test scripts. As such, it is especially fruitful for experienced testers who have worked on the project for some time and are familiar with the application’s process flows.
As part of exploratory testing, you can also retest any known bugs to ensure they don’t sneak into UAT. Never assume they are fixed, as they may have been affected by other changes: always test!
Any new bugs found during exploratory testing, provided they can be tested via automation, should have their test cases added to the scripts so subsequent automated test runs will cover them.
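One way to capture such a bug in the automated suite is a small regression test named after the defect, so every subsequent run re-checks it. The defect ID, marker and scenario below are hypothetical, purely to show the shape of such a test.

```python
# Hypothetical regression test capturing a bug found during exploratory testing,
# tagged with a made-up defect ID so it is re-checked on every automated run.
import pytest

def normalise_postcode(postcode: str) -> str:
    # Fix for the hypothetical bug: trailing whitespace used to break matching.
    return postcode.strip().upper().replace(" ", "")

# "regression" is a custom marker that would be registered in pytest.ini,
# letting the regression subset be selected with `pytest -m regression`.
@pytest.mark.regression
def test_bug_1234_postcode_with_trailing_space_is_accepted():
    assert normalise_postcode("tr15 3gf ") == "TR153GF"
```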
After following the techniques above, the only bugs found in UAT should be related to the UAT environment rather than the new functionality.
In any case you need the mindset that finding a bug in UAT is good, because it means it can be corrected before the code goes to production!