Testing Automation and Tools

Having worked on the development side of projects for so long, donning a tester's hat for some projects at my current workplace was a new experience, and a useful one. It was almost like starting from scratch. I had always assumed that testing was largely a manual process. Yes, I knew automation testing existed and I was aware of tools such as HPQC, QTP and so on. But, having never used them, I was not sure how they were leveraged to test the multitude of requirements. So, here are a few of my observations from performing a tester's role.

The first input for anyone involved at the ground level, be it a tester or a developer, is the functional specification. A developer goes through the functional specification, usually with a few rounds of brainstorming or discussion sessions to understand the requirements, and comes up with a technical design. A tester goes through the same functional specification and comes up with a test plan document detailing the level of coverage, when it will be tested, in-scope and out-of-scope items, any assumptions or limitations, acceptance criteria, and so on. Just as the technical design is the input to the actual development, the test plan is the input to the actual test scripting that happens later on. So, at this level, it's pretty much a similar path for both a developer and a tester.

A developer builds the solution based on the design document. The tester, however, needs an additional input before they can develop the test scripts: that same design document. It helps the tester understand the solution's intricacies and reaffirm or revisit the touch points between the testing tool and the solution. It also helps in scripting new scenarios, thus improving test coverage and solution reliability.

From a developer's point of view, additional checks may not feel very important. When a developer unit tests a solution, to be honest, they go in with a few assumptions about the code they developed: a mandatory field will always have a value; database connections will always be available. A tester, on the other hand, has to consider the solution from a what-if perspective. What if the field does not come at all? What if the database connections are not available? These questions, when posed to a developer, help to build a robust solution that handles all possible scenarios gracefully.

This what-if approach is one important skill I learnt when working as a tester. When I moved back into development, it helped me to build solutions that handled scenarios gracefully. Even for a single field addition, I now ask the questions: are there any scenarios where this field may not be part of the input at all? How is that expected to be handled? Is this scenario handled in the source system or in the target system? As a result, there is far less scope for a missed valid or invalid scenario.
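The what-if questions above can be sketched in code. This is a minimal, hypothetical illustration (the `order_id` field and message shape are my own invention, not from any specific tool): instead of assuming a mandatory field is present, the handler validates it and fails with a clear message rather than crashing.

```python
def process_order(message: dict) -> str:
    """Handle an incoming message without assuming mandatory fields exist."""
    # What if the mandatory field does not come at all?
    order_id = message.get("order_id")
    if order_id is None:
        return "REJECTED: missing mandatory field 'order_id'"
    # What if the field is present but empty?
    if not str(order_id).strip():
        return "REJECTED: empty 'order_id'"
    return f"ACCEPTED: order {order_id}"

print(process_order({"order_id": "A123"}))  # ACCEPTED: order A123
print(process_order({}))                    # REJECTED: missing mandatory field 'order_id'
```

Each rejection path here corresponds to a what-if question a tester would ask, and each is a scenario worth a test case of its own.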

The other area of improvement I noticed after my stint as a tester was the way I looked at my unit testing scenarios. When unit testing, we tend to either create one test case and modify the inputs to cover the scenarios, or create multiple unit test cases. With the single-test-case approach, I would not have a record of the input data for all the conditions I had covered. With multiple unit test cases, in a few cases it leads to redundancy. In the automation testing tool that I used, we could data-drive the tests, so there was always a record of the data used in testing. I have taken this point into my unit testing and now try to data-drive my unit tests wherever relevant, so that I have a record of the data I used. Before I start my unit test cases, I pause to think: is there a possibility of data-driving my unit tests, or would individual cases be the better approach?
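As a rough sketch of what data-driving a unit test looks like (the function and test data here are hypothetical examples of mine, not from the tool mentioned above): the test inputs and expected outputs live in one table, so there is always a record of exactly what was tested, and adding a scenario is just adding a row.

```python
def classify_amount(amount: float) -> str:
    """Toy function under test: classify a payment amount."""
    if amount < 0:
        return "invalid"
    if amount == 0:
        return "zero"
    return "positive"

# The data table: each row is (input, expected).
# This table *is* the record of the data used in testing.
TEST_CASES = [
    (-5.0, "invalid"),
    (0.0, "zero"),
    (12.5, "positive"),
]

def run_data_driven_tests() -> list:
    """Run every row of the table against the function under test."""
    return [classify_amount(amount) == expected
            for amount, expected in TEST_CASES]

print(run_data_driven_tests())  # [True, True, True]
```

In a real Python project the idiomatic equivalent would be `pytest`'s `@pytest.mark.parametrize`, which runs one test function per data row and reports each row separately.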

Another area that I felt was largely ignored, but extremely useful, was discussion between developer and tester. Often it is only when the tester raises a defect that the two actually interact to resolve issues. Some of the scenarios scripted by the tester could be invalid, and some of the defects raised may be incorrect, so there can be a lot of friction between a developer and a tester when testing commences. But both roles are important and both work towards a common goal. I have personally seen a lot of benefit both from being involved in discussions with the developer as a tester, and from involving testers in discussions as a developer.

My perception of testing has changed a lot after this experience. It helped me to understand how a functionally competent tester can improve the overall product quality. This blog has been written from a middleware interface automation testing perspective, so some of the documents or terminology used may differ from those of general application testing and development.

Want to learn more? Download our technical paper Test Tools and SAP.


Sailaja Jonnavithula

Sailaja works as an Integration Consultant at Sandhata. With close to 10 years of experience using integration tools such as TIBCO BW, Vitria BW and SAP PI, and extensive work on RIT-SAP integration, she is a key contributor to Sandhata's CoE for SAP Testing. She helps our customers bring their integration solutions to life with her extensive knowledge and experience in integration. Most recently she has started working on IIB (IBM Integration Bus).
