I've often been quoted as saying "I just break it" around my workplace, and to some extent there is some truth to the catchphrase.
On my current project, as on most projects, as the test manager I'm not in control of any 'development/technical resources', so the most visible aspect of what my team does comes from the incidents logged when something is considered 'broken'. Of course there is far more to testing software than purely seeking to break the system; it is equally important to prove it works! More often than not, though (I've only worked at one site where the number of completed tests mattered more than the residual defects), the decision makers focus their attention on what's not working rather than on what's working.

For example, I recently prepared a presentation on the progress of testing: one slide for test procedure results, four slides for outstanding defects. When the presentation was given, far more time was spent discussing the outstanding defects than the procedures. Some (including me) could argue that the discussion of the outstanding defects, given their link to failed or incomplete test procedures, should be attributed to the procedures slide rather than to the defects. At the end of the day, though, we tend to focus on the broken bits more than on what's working, because a broken bit implies it requires fixing, and that has a visible, measurable resource cost associated with it.
So you may ask why I tout the "I just break it" ideology when there is sooooo much more to testing than just breaking software. Well, I see the role of testing as giving an assessment of a software solution (component, system, etc.) at a point in time, against a set of criteria. The criteria could be a requirements document, a capability brief or a verbal direction. The end state of this process is that the testing outcomes (test case results and incidents) are delivered to the requester for consideration. The requester will (should/might) take the outcomes into account when making their decisions. I use the "just break it" analogy to draw a conceptual boundary around the outcomes I (and my team) will provide, so there is no misconception that at the end of testing the software will be (close to) bug free - instead, collectively we should be better informed about the software's operations and limitations. We should also understand the effort required to 'close the gap' between working and not working, and where that effort needs to come from.
And while we're talking about catchphrases - you may or may not have heard of Catch Phrase, the TV game show in Australia (Channel 9, I think?). I recall the host saying to the contestants "See it and say it", and that's now what I say to my team in regards to system abnormalities they observe during testing. It's been an essential part of my testing philosophy for many years now that just because an incident is recorded doesn't mean it's going to be fixed. And to those who scream about the statistical implications I say: there are three types of lies; lies, damned lies and statistics!