How much automation coverage should we have? Why automate something when we are not going to modify it in the next two iterations? Why not automate only regression scenarios? My automated tests do not catch bugs! This manual test is very complex and takes a considerable amount of time to execute, so I want it automated!
Well, these are the typical questions I hear in any meeting that talks about "Automation Coverage". Why such a big fuss around it? I have worked on several projects, ranging from pure manual testing to pure automation testing. So before starting to create automated tests, we need to answer a few questions. Why are we automating?
- Is it a deliverable to client?
- Is it to create a “Safety net”?
- Is it to ensure sanity of the application before the build formally comes to QA?
- Is it to reduce manual testing effort?
- Is it to catch bugs?
- All of the above?
Once we answer these questions before rolling out the automation plan, we need not inscribe the answers in stone. We should revisit them from time to time across iterations, or from one phase of the project to another. Nothing is absolute and perfect; I believe there is always scope for improvement and optimization. The purpose of automation may change over the course of a project's life cycle, but eventually it has to address all of the questions asked above. There is no point in creating automated suites if they cannot ensure that the functionality implemented yesterday is still working today, if they cannot reduce day-to-day testing effort, and, last but not least, if they cannot give testers the faith that "All Izz Well" before a build is taken up for full-fledged QA testing.
So how do we address the basic question of automation: "Which test cases should we automate?" Generally, a project has thousands of test cases spread across various stories, modules and integration points. Which ones do we pick: all Priority 1 test cases, all happy-path test cases, some integration test cases? There is always a lot of confusion here. Ask any tester who has written the test cases for a module; to him or her all of them are important (otherwise he or she wouldn't have written them :P), so they will always find it hard to say which ones "not to automate". But automation should not be driven by what "not to automate". I always follow a 70-70-70 rule: automate the 70% of test cases that cover 70% of the functionality and are executed 70% of the time. This creates a sufficiently large umbrella to keep you from getting drenched, ensuring that your application will work tomorrow morning if all the automated tests passed in the nightly scheduled run.
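As a rough illustration, the 70-70-70 rule can be turned into a simple selection heuristic. The sketch below is my own illustrative take, not a prescribed algorithm: the `TestCase` fields (a functionality-coverage weight and an execution frequency) are hypothetical, and in a real project those numbers would come from your test management or reporting tool.

```python
# A minimal, hypothetical sketch of the 70-70-70 selection heuristic:
# pick at most 70% of the cases, preferring the most frequently executed
# ones, until 70% of the total functionality weight is covered.
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    functionality_weight: float  # share of functionality this case covers
    runs_per_iteration: int      # how often the case is actually executed


def pick_for_automation(cases, case_ratio=0.70, coverage_target=0.70):
    """Greedily select automation candidates per the 70-70-70 rule."""
    total_weight = sum(c.functionality_weight for c in cases)
    budget = int(len(cases) * case_ratio)  # "70% of the test cases"
    # Most frequently executed first -- "executed 70% of the time".
    ranked = sorted(cases, key=lambda c: c.runs_per_iteration, reverse=True)
    selected, covered = [], 0.0
    for case in ranked:
        if len(selected) >= budget:
            break
        selected.append(case)
        covered += case.functionality_weight
        if covered >= coverage_target * total_weight:
            break  # "covers 70% of the functionality"
    return selected


candidates = [
    TestCase("login", 0.3, 50),
    TestCase("checkout", 0.3, 40),
    TestCase("monthly_report", 0.2, 5),
    TestCase("admin_panel", 0.2, 2),
]
automate = pick_for_automation(candidates)
```

With the sample data above, the high-frequency, high-coverage cases (`login`, `checkout`) are picked first, while rarely run cases stay manual. The exact thresholds are just the defaults from the rule and should be tuned per project.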
Why 70 and not 80? That will be the topic of some other blog…