Tuesday, February 23, 2010

Defects are requirements?

It was a throw-away line over a beer: "Defects should be written as requirements..." The context of the conversation was delivering a testing training course, and the topic was defects.

At the time I had a "Ding" moment where I thought: it's such a simple concept. All testers understand that defects are important, but as possibly the only tangible outcome of testing, we should give them more focus.

When I think of the conversations about testing I've had of late at go/no-go meetings, almost all have glossed over the test results and focused on the outstanding defects. If I calculated the amount of time I've spent translating and interpreting what the functional impacts were, the associated effects on users and the risk to data, well, it would add up to days. I think that if I approached the documentation of defects with the discipline I'd expect to find in the documentation of requirements, then I'd be that much better off.

Speaking of requirements, there is a lesson we (as testers) should learn, and it's one we try to teach on a daily basis: the quality of requirements has a major impact on the quality of the system built and the testing conducted. Our defects are mini specifications of how a particular system function should work; if their content is of poor quality, then the probability that the implemented fix will be sub-optimal increases.

Lastly, a reason for making the effort to document defects thoroughly is that one day down the track you might have to retest or reproduce the defect. If you've entered only scant details in your haste to raise it, that makes your job that much harder! I know there have been many instances where I've read a defect report I'd raised and had to scratch my head to fill in the gaps I left.
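To make that discipline concrete, here is a minimal sketch of a defect report structured like a mini requirement. The field names (functional impact, user impact, data risk) are my own assumptions rather than any tracker's schema; the point is simply that the detail you would demand of a requirement also belongs in the defect.

```python
from dataclasses import dataclass, field
from typing import List

# A hypothetical defect report structured like a mini requirement.
# Field names are illustrative assumptions, not any tracker's schema.
@dataclass
class DefectReport:
    summary: str                 # one-line statement of the failing behaviour
    expected_behaviour: str      # what *should* happen, as a requirement would state it
    actual_behaviour: str        # what the system actually does
    steps_to_reproduce: List[str] = field(default_factory=list)
    functional_impact: str = ""  # which functions are affected, and how
    user_impact: str = ""        # the effect on users, for go/no-go discussions
    data_risk: str = ""          # any risk to data integrity

# Example report, with enough detail that future-you can retest it
# without scratching your head. All values are made up.
defect = DefectReport(
    summary="Invoice total ignores line-item discounts",
    expected_behaviour="Total equals the sum of line items less any discounts",
    actual_behaviour="Total is the sum of the undiscounted line items",
    steps_to_reproduce=[
        "Create an invoice with one line item of $100",
        "Apply a 10% line-item discount",
        "Observe the invoice total reads $100, not $90",
    ],
    functional_impact="Every invoice that carries a line-item discount",
    user_impact="Customers are overcharged; billing queries will increase",
    data_risk="Stored invoice totals are wrong until reprocessed",
)
print(defect.summary)
```

Filled in like this, a report answers the go/no-go questions up front, rather than leaving someone to translate and interpret them later.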

Friday, February 19, 2010

It's the People; It's Always the People

An email announcing the conference program arrived in my inbox, and this particular line caught my eye: "It's the People; It's Always the People", which turns out to be the title of a keynote presentation by Johanna Rothman at the Better Software Conference 2010.

Without even reading the abstract of the presentation, I was in violent agreement. So far through my career I've found the development of relationships, with those with whom I need to work closely and those whose services I consume, invaluable. In many instances it's been the relationships with people that have enabled me to achieve my goals, and the lack of relationships that has hampered my progress.

All relationships have a foundation built on respect, trust and symbiotic needs; all relationships require maintenance as they change over time. The ability to form and maintain good working relationships is an essential skill for any testing professional, and a key factor in success...

Wednesday, February 17, 2010

A dartboard and levels of testing?

"We need to create a regression test matrix" is a statement I have heard several times. In conversations prior to today's I've nodded wisely and agreed, but not acted. Today, though, I nodded and smiled; there may even have been a little bouncing!

Why, I hear you ask? Well, today a potential solution appeared before my eyes whilst I was trying to solve another problem.

Taking a step back: today I was tinkering with various diagrams to display the different levels of our manual, functional testing, to be included in the release test plan. I created three models which displayed Acceptance (aka ‘Full Set’), Regression and Sanity testing covering different proportions of the system...

[Diagram placeholder: the three models, an Excel pyramid graph, nested boxes and concentric circles, each showing Acceptance (‘Full Set’), Regression and Sanity coverage]

The models are simple in construction, and their purpose is to highlight the differences in coverage, and that no level of testing covers ‘all’ of the system functionality.

The relative size of the objects in the diagram, though not explicit, gives an indication of the effort required to complete that level of testing. Relatively speaking, the full set of tests is somewhere around 50-100% larger than the regression suite, which in turn is 50-100% larger than the sanity test suite. (For example, if the sanity suite held 100 tests, the regression suite would hold 150-200 and the full set 225-400.)

At this point I discounted the Excel pyramid graph and started to focus on the boxes and circles. I feel that both pictures provide a reasonable representation, but there was something about the circles that I kept coming back to.

Fast forward an hour and I'm in the weekly status meeting, and due to our recent production release and impending production patch cycle, the "we need a regression test matrix" comment was made. We discussed that the matrix should take the form of an Excel spreadsheet, and that we needed to conduct tests across the system as well as focus on the areas of change. My eyes began to widen! Circles were the choice, and I thought the easiest way to explain where we'd go into more detailed testing was to draw smaller circles around functionality, and then, bang, I hit the bullseye, so to speak. If you picture a dartboard, it's made up of 20 wedges, each with two large sections and two narrow bands.

So to my original circle diagram I added several wedges, as I already had the rings (test levels) present. The result is shown below. Each of the wedges represents a slice of system functionality, or a grouping of functions.

[Diagram placeholder: the circle diagram divided into dartboard-style wedges]

Now, each time it comes to patch testing, I can plot which areas of the system are going to be tested more thoroughly: with a set of darts, a steady hand and intense concentration.
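For those without a steady hand, here is a minimal sketch of the same dartboard idea in code. The functional areas, level names and selection rules are made-up examples, not our actual system: each wedge is a functional area, each ring a level of testing, and a patch cycle simply assigns a ring to each wedge.

```python
# A hypothetical regression test matrix built on the dartboard model.
# Wedges = functional areas; rings = levels of testing.
# The area names, levels and selection rules are illustrative only.

RINGS = ["sanity", "regression", "full"]  # shallow to deep

WEDGES = [
    "login", "search", "orders", "invoicing",
    "reporting", "user-admin", "notifications", "exports",
]

def plan_patch_cycle(changed_areas, at_risk_areas=frozenset()):
    """Assign a ring (test level) to every wedge (functional area).

    Changed areas get the full set, areas judged at risk get the
    regression suite, and everything else gets at least a sanity
    pass, so the whole board is always covered.
    """
    matrix = {}
    for wedge in WEDGES:
        if wedge in changed_areas:
            matrix[wedge] = "full"
        elif wedge in at_risk_areas:
            matrix[wedge] = "regression"
        else:
            matrix[wedge] = "sanity"
    return matrix

# Example patch cycle: the patch touched invoicing and exports,
# and reporting consumes invoicing data, so it is deemed at risk.
matrix = plan_patch_cycle({"invoicing", "exports"}, at_risk_areas={"reporting"})
for wedge, ring in sorted(matrix.items()):
    print(f"{wedge:<15} -> {ring}")
```

The useful property is the one the diagram shows: every wedge always gets some ring, so no area of the system is silently left untested.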

Thursday, February 4, 2010

One from the archives

Have you ever been asked ‘why do you write test cases?’ What's your answer? There are several that spring to mind: because that's the way we test software, because the contract said we had to, because the customer demands it, for auditing, so they can be given to the testers to execute… My belief is that regardless of the reason you think you write test cases, the fundamental reason behind test case documentation is to build and capture knowledge about how a system (or function) should work.

A colleague and I were once debating the merits of getting test cases peer reviewed. His position was that peer reviews provided demonstrable value to the QA process, and mine was that peer reviews provided questionable return on investment! After what seemed like a rather long time, the point came where my colleague said ‘well, why do you think like that?’ The answer I gave was this: ‘well, I believe that the process of writing test cases is how the tester becomes one with the system, learning about what it's meant to do, how it should do it and so on...’ And therefore the reason I question the value of a peer review is that more often than not, the reviewer doesn't have enough detailed knowledge of the system or functionality to critique the test cases.

To which my colleague replied ‘that's a bit Zen, Drew!’

From that moment on I've seen the process of developing tests as a learning exercise! Needless to say, many years down the track I have used this epiphany as the foundation of my approach to software testing. I still believe the fundamental reason for documenting test cases is to acquire knowledge about the system, so that when it comes to execution the tester is able to evaluate the output and determine whether it's valid.

I've used this fundamental belief to underpin the methods and processes I've implemented at various sites, and never more so than when I started a new job at an organization that uses ‘Agile Principles’. My introduction to Agile was rather confronting at the time, coming from a predominantly ‘waterfall’ background.

Being somewhat new to ‘Agile-iterative’ development, I searched the internet and textbooks, reading about Agile and the principles of developing software using these methods. I learnt very quickly that no two implementations of Agile are the same; Agile is a philosophy which embraces certain values. So when I talk about our Agile, it's going to be different to your Agile, and different again to their Agile, but look beyond the detail to the bigger picture: the process of learning.

The most important lesson I learnt rather early on in this new job was whom to turn to if something didn't make sense; sometimes it was the development team, and other times it was the business customer. This isn't unusual – on every project I've worked on there have been one or several key people who were the fountain of knowledge – the difference here was that there was no specification document to scribble on! It's not that documentation doesn't exist, rather that it doesn't exist in the format most people are used to. In many ‘Agile shops’ the specifications are living, contained in stories that grow as the system grows, but that's a story for another time.

As the months passed and I came to terms with our organisation's development model, I found myself questioning and comparing, trying to align the processes we were using with the ones I'd used in previous testing engagements. As well as conducting the testing of our applications, I was also attempting to teach a new tester the fundamentals of testing, trying to apply the teachings of my classically trained testing methods in a world where the fundamentals are rather different. It was through this teaching that I came to the following conclusion: the differences in our development model, compared with the previous testing models, were that 1) the time to learn the system (i.e. develop test cases) was far reduced, and 2) the primary source of information for that learning was the masters within the project.

And so was born the analogy which I use regularly. With regard to learning the system, I like to describe test case development in an Agile-type environment as knowledge acquired through learning by doing; it is hands-on, akin to an apprentice learning through an apprenticeship. The similarities are many: an apprentice begins their learning with an idea about the job they are going to be doing, and is then set small tasks by the master. As the apprentice learns and grows as a tradesperson, the master increases the complexity of the tasks, and the apprentice becomes more capable of completing them. The hands-on learning is supplemented by small (in comparison) structured teachings, which are then reinforced on the job.

Testing within our development process involves a very similar structure. At the commencement of the iteration, the testers gain an understanding of the features that are to be implemented. Then, as the iteration progresses and the stories begin to be delivered, the tester tests and learns, then tests some more. The job the tester does is overseen by the business customer, and further lessons are provided by the developers. As the system grows, so does the tester's understanding.

The other important point to note is that through an apprenticeship, the number of textbooks studied and the volume of notes created is minimal; compare this with the number of texts and volume of notes that a university produces, and the picture is somewhat complete. You see, like a university student, a tester on a waterfall-type project attends design and information sessions (lectures), takes notes, studies the design documents (the textbook or course notes) and then produces test cases (assessment tasks). During the design and build stage (the semester), this process is repeated several times, and then test execution is when the tester's knowledge of the system is examined (exams!).

One other important comparison is the ratio of students to teachers versus apprentices to masters. In the famous words of Jedi Master Yoda, ‘Always two there are, no more, no less: a master and an apprentice…’ In our development model the number of people involved is much smaller, to ensure the lines of communication are short and fast. Compare this with the university model, where each lecturer may have anywhere up to 100 or more students, and several tutors to assist with the learning. An interesting point.

So to summarize: if documenting test cases is about learning the system, then some of the differences between Agile and waterfall testing environments can be seen as the differences in learning philosophies used in universities and apprenticeships. Focus on the process of learning, not the outcomes that could be achieved through the learning method.