In this post I'm focusing on how I estimated the testing effort for a tender, for which I had access to an immature Functional Performance Specification (FPS). The FPS documented each of the system requirements, and also had an annex that detailed a series of fleshed-out business scenarios (use cases). Each use case outlined, via a process diagram, the main steps in the 'happy path' and also defined the most likely (but not every) alternate path. This information, even though immature and incomplete, proved invaluable in constructing my testing estimate model.
I approached the estimation using the assumption (a well educated and researched guess) that each requirement would need at least one test case created in order to verify its compliance. I also allowed for an additional test case per alternate path identified in the use cases. This brought the estimated number of test cases out at 160. History tells me that (if we win) once we start test analysis and design there will be instances where several requirements are covered by a single test, and other requirements that will demand several tests.
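To make the counting rule concrete, here is a minimal sketch of the model described above: one test case per requirement, plus one per documented alternate path. The actual requirement and alternate-path counts from the FPS aren't in this post, so the figures below are hypothetical placeholders chosen only to illustrate the arithmetic.

```python
# Sketch of the test-case counting rule (illustrative numbers only).
requirements = 120       # hypothetical count of requirements in the FPS
alternate_paths = 40     # hypothetical count of alternate paths from the use cases

# One test case per requirement, plus one per alternate path.
estimated_test_cases = requirements + alternate_paths
print(f"Estimated test cases: {estimated_test_cases}")  # my estimate came out at 160
```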
Of course, in the situation where you are estimating for a known application, there are heaps of metrics surrounding test cases and requirements that you should be able to draw on to assist with your estimation.
Next, I calculated the amount of time it would take to analyse, design, document and verify each test case (on average). Based on a verbal description of the system we'd be testing, and using all the information I could find, I determined that 2 hours per test case should be enough. And this is where the hokey pokey started!
Some of the estimation team disagreed with my estimate, stating that it's not possible to analyse, design, document and verify a test in 2 hours. I agreed to disagree; there are a number of reasons why you might not be able to, but equally as many reasons why you certainly could. It all depends on the complexity of the requirement being tested, the system implementation, and so on. Using the principle of 'you can never have enough time to test', the team then agreed that 4 hours, or half a day, would be a more palatable estimate, and so I increased the estimate...
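For what it's worth, here is the effect of that doubling on the overall figure, using the 160 test cases from the model above. The hours-per-day figure is just an assumption for converting hours into effort days.

```python
# How the per-test-case rate drives the total design effort.
test_cases = 160
hours_per_day = 8  # assumed working day for the conversion to days

for hours_per_case in (2, 4):
    total_hours = test_cases * hours_per_case
    total_days = total_hours / hours_per_day
    print(f"{hours_per_case} h/case -> {total_hours} hours (~{total_days:.0f} days)")

# 2 h/case -> 320 hours (~40 days)
# 4 h/case -> 640 hours (~80 days)
```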
Testing is like a balloon being filled with air: the more air you put in, the bigger the balloon. Similarly, if you let some of the air escape, the balloon decreases in size and takes up less space. Determining the 'optimal size' of the balloon all depends on where it's to be used and its purpose. The same can be said for testing; it all depends on the type of testing and what risks you are trying to mitigate. In the event that the balloon bursts, I believe it's more a case of poor management of the testing process. By trying to squeeze too much testing into a confined time box, BANG, the result can be a catastrophe. It's often not the testing that suffers, it's the quality of the application released.

One of the biggest lessons I learnt when first estimating testing was that the testing schedule had to include time for defect remediation. I found it necessary to include because all too often the dev team's schedule finishes once the application is delivered to testing!
Back to the hokey pokey: 'my estimate's in', and then, after I had submitted the initial adjusted estimates, the review team came back to say the cost of testing exceeded the cost of development, and it's a COTS product! They asked if there were any activities we could cut back. LOL, I said, well, we could go back to the original estimates I gave? Of course this slashed the estimate by 50% (due to the original doubling!). The estimate was resubmitted.
Just like the hokey pokey, my estimate went in, it came out, it went back in again, and then I had to shake it all about! Now we have to win the rest of the work to see who was closest at pinning the tail on the estimation donkey. Stay tuned.