With increasing budget pressure and the expectation to deliver more in the same engineering time, today’s managers have to make the right decisions quickly. More often than not, however, they cannot tell whether the decisions they have taken are correct, and will produce the desired results, until they are halfway through the project. Shrinking release cycles only compound the problem.
Several factors influence the outcome of your deliverables. In software test automation, these include whether to focus on automating regression test cases that haven’t changed for some time, on the features built to attract new customers, or on the business-critical aspects of the application. What are your business objectives? Who is your target audience? How soon do you want to release the product to market? Which areas of the application matter most to your customer? How many features should you cover, and to what depth? All these questions need answers before delving into automation.
Building up costs
Software test automation has the potential to decrease the overall cost of testing and improve software quality. Yet it raises people’s hopes and often frustrates and disappoints them. Many groups that implement test automation programs run into a number of common obstacles. These problems can lead to automation plans being scrapped entirely, with the tools purchased becoming expensive “shelf-ware”. Other teams press on with their automation effort, building up huge costs maintaining large suites of automated test scripts that are of questionable value.
Many teams acquire a test automation tool and begin automating right away, with little consideration of how to structure the automation so that it is scalable and maintainable. Little thought goes into managing the test scripts and test results, creating reusable functions, separating data from tests, and other key issues that allow a test automation effort to succeed. After some time, the team realises that it has hundreds or thousands of test scripts and thousands of separate test result files. The combined work of maintaining the existing scripts while continuing to automate new ones requires a larger and larger team, with higher costs and no additional benefit. As teams drive towards their goal of automating as many existing test cases as possible, they often don’t consider what will happen to the automated tests when the application under test (AUT) undergoes a significant change.
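One of those key issues, separating test data from test logic, can be sketched very simply. The login check, the data rows, and the function names below are purely illustrative, not taken from any particular tool:

```python
# Hypothetical example: test data lives apart from test logic, so adding a
# new case means adding a data row, not writing a new script.
LOGIN_CASES = [
    ("alice", "secret", True),   # valid credentials
    ("alice", "wrong",  False),  # bad password
    ("",      "secret", False),  # missing user name
]

def attempt_login(user, password):
    """Stand-in for the application under test (AUT)."""
    return user == "alice" and password == "secret"

def run_login_cases(cases):
    """Reusable driver: returns a list of (case, passed) results."""
    return [((user, pwd), attempt_login(user, pwd) == expected)
            for user, pwd, expected in cases]

results = run_login_cases(LOGIN_CASES)
print(all(passed for _, passed in results))  # → True
```

In practice the data rows would live in an external file (CSV, JSON, a spreadsheet) so that testers can extend coverage without touching the driver code at all.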
The discussion above shows why these situations call for an automation strategy. Automation is wasted effort if the tests will never be repeated, and it requires a proper methodology in its architecture and implementation: writing automation for a user interface that is not yet stable, for example, is clearly throwaway work. People sometimes treat automation as just another programming project, which is another recipe for disaster; automation demands a different and more rigorous treatment than other programming projects.
Will it be different if I have an automation strategy?
Let’s take a situation where a team starts automating as soon as test cases are in place. They identify a popular tool or, worse, just pick up a licence that has been lying around, hire a group of programmers, train them on the tool if necessary, and hand them the test cases to be automated. The overall work is estimated using a simple formula: the number of test cases multiplied by the time taken by a few samples. The programmers usually have no idea about the business goals of the product, or even the business logic of the application. They start coding the test cases one by one. No attention is given to automation architecture and design; very little of the software development process is followed.
Looking at the above example, a good project manager can guess that the estimate will be wildly off track. Lack of a proper design causes problems with work distribution. It results in code that is not maintainable, with duplicated functionality and modules that don’t interface well. Delivery deadlines are missed, and the cost of the project can go through the roof. Delayed availability of automation means manual test passes continue, increasing the cost of testing further. Inadequacies in the tool are discovered, creating the need for custom modules, or even replacement of the tool itself midway through the project.
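The naive sizing used in the scenario above can be written down explicitly; the figures below are illustrative only:

```python
def naive_estimate(num_cases, sample_hours_per_case):
    """Simplistic sizing: case count multiplied by the average time
    observed on a few sample cases -- nothing else is accounted for."""
    return num_cases * sample_hours_per_case

# Illustrative numbers: 500 test cases, samples suggest 2 hours each.
print(naive_estimate(500, 2))  # → 1000
```

The formula allows nothing for framework design, code review, or script maintenance, which is one reason such estimates run so wildly off track.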
The need for an automation strategy is quite clear in this situation. The strategy phase helps you understand the test organisation, existing test practices, testing problems, and past successes. Once these facts are gathered you can define test automation goals for critical areas (i.e. where automation is the only solution) and obtain cost and time benefits wherever possible. You can then analyse the functional test coverage and propose, where applicable, additional automated test cases to improve it. The strategy should also include recommendations on test automation tools and harnesses to assist the implementation.
Does this result in a better ROI?
To ensure that you get the right value for your money, you need a plan for implementing automation that delivers the greatest ROI. The plan should include a well-defined test automation vision, answer the what, when and how of automation, and propose a prioritised test automation plan.
Your automation vision should include:
• What and how much to automate
• When to automate and how
• Prioritised test automation plan
Some of the factors to consider for a better ROI:
• Areas to automate – UI / API layers, integration points, load, data migration, test environment setup
• Factors – complexity, repetitiveness, maintenance, manual vs. automation
• Incremental approach to maximising automation
If the above attributes are considered, the project’s success rate is higher, and the team, not the customer, will be the first to notice any failure to meet the requirements, so issues can be addressed proactively.
Your automation strategy will be stronger if you:
– Select the tool best suited in terms of cost, technology, and other considerations
– Identify the customisation required if the selected tool isn’t sufficient, or whether you need to build a completely new tool from scratch
– Define an ROI that is practical
– Define dependencies, such as the impact of changes in the development plan, features, etc.
– Make recommendations on the time-line for automation
– Select appropriate features and test cases for automation
– Give a realistic budget for automation
– Provide a high level design that is scalable and allows future addition of test cases
– Provide a plan to manage and maintain the automation (post-delivery)
From the preceding discussion, one can say that if you have a well-thought-out test strategy, and your automation plans are in line with it, then you are aware up front of the risks, dependencies and trade-offs affecting your deliverables; you are certain that the test approach you have chosen is the best for your business; you have proper metrics to track your progress; and you have a mechanism to optimise both the infrastructure and the solution as your automation effort progresses. The following diagram illustrates this discussion in a single view.
Figure 1: Test Strategy
It should be clear from the preceding discussion that the success of automation projects can be greatly improved by spending some time up front devising a proper automation strategy. One challenge that still needs to be resolved is how to reconcile the need for an automation strategy with rapid development environments such as Agile, in which some of the questions asked to build a strategy are best answered further downstream in the product life cycle. With the entire software industry evolving so rapidly, this need will have to be addressed, and addressed quickly.
Ramanath Shanbhag is a Test Director at MindTree Ltd. He manages innovation (including R&D), process standardisation, delivery excellence and delivery assurance activities of the practice. He has played key roles in business needs analysis, organisational assessments, assisting in building Test Competency Centers and testing, training, and consulting.