Thursday, December 30, 2010

QC 11 - Return of the Media....

In one of my recent posts I described the trials and tribulations of an on-again, off-again migration to QC11, thwarted at the last moment. Well, now for the next chapter, albeit a rather short one.

I was lucky enough to receive an 11 inch MacBook Air for Xmas care of my lovely Wife (Yay Babe), and in the process of transferring/consolidating all of my work files from the various laptops and PCs I used to use onto my shiny little MacBook, I happened to stumble across a zip file "T7333-15006_1a.zip" in one of the directories. I thought it looked like an HP download and, as luck would have it, when I opened it I found it was the install media for QC10 - hooray. Now combining this little fella with the QC10 patch we located earlier, I guess the move is back on!

Sunday, December 26, 2010

Testing Utopia

In line with some of my recent posts about reporting, requirements, defects and test execution status - I think that I have narrowed down what utopian testing conditions would be. It's a big call, but here is what I came up with; all of the...
1) Requirements (functional and non-functional) fully derived and understood,
2) Requirements covered by risk appropriate test cases/scripts/procedures,
3) Documented test cases/scripts/procedures executed,
4) Defects discovered either resolved or well understood and documented.

That's the beauty of utopia, it's so great, dynamic and unattainable but well worth striving for...

I've reached the point where I have a vision of what this Utopia looks like and how I might need to report how far away we are from it. It involves a data cube which contains information about requirements, tests and defects. Each of the data objects has records attached: requirements have links to tests and defects, plus a priority and a risk assessment. Tests have links to requirements and defects, but also records of each time the test was executed (results), a priority and a complexity rating. Defects have a status, severity, complexity and business impact. I'm sure there are a few things I have forgotten, but they can be added. I think that I have been asked to provide a report based on all of these attributes at one time or another...
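To make the shape of that data cube a little more concrete, here's a minimal sketch in Python of the objects and attributes described above. The class and field names are my own invention for illustration, not taken from QC or any other tool:

```python
# A minimal sketch of the "testing utopia" data cube described above.
# All class and field names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Defect:
    defect_id: str
    status: str            # e.g. Open, Resolved, Closed
    severity: str          # e.g. Critical, Major, Minor
    complexity: str
    business_impact: str

@dataclass
class TestExecution:
    executed_on: str       # date the test was run
    result: str            # Pass / Fail / Blocked

@dataclass
class Test:
    test_id: str
    priority: str
    complexity: str
    requirement_ids: List[str] = field(default_factory=list)
    defect_ids: List[str] = field(default_factory=list)
    executions: List[TestExecution] = field(default_factory=list)

@dataclass
class Requirement:
    requirement_id: str
    priority: str
    risk: str              # risk assessment
    test_ids: List[str] = field(default_factory=list)
    defect_ids: List[str] = field(default_factory=list)
```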

QC11 - The next chapter

This story begins several months ago, when I downloaded the trial version of Quality Centre 11 with only intentions of 'playing'. Yeah right! As with most software trials I played hard for the first few days and then the interest slid south. I liken it to the Gartner hype cycle... I climbed to the top of the "Peak of Inflated Expectations" and then coasted down towards the "Trough of Disillusionment" - but on the way down I had several little bumps which again piqued my interest. I'm not disillusioned at all, rather beaten by circumstance - let me explain.

In my current role, like all that have come before, I am the administrator of our QC instance. As such, when it begins to play up, the server/infrastructure team comes knocking on my door. Recently my little QC has not been playing nicely with the other applications in the sandpit - namely SQL and MOSS. It appears the QC service is hogging all the CPU on the server and, as such, the other applications, which are also critical, suffer. All this means is that nobody is happy with the testing team!

The solution, I thought, was to simply capitalise on my investment in the QC 11 trial and install QC 11 on the new (well, additional) server, and then migrate the old projects from QC 10 on the old congested server to the new, fresh and totally dedicated QC 11 server. Oh, if it were that simple then I guess we'd have done it by now!!! The first hurdle I encountered was that the trial period for QC11 had expired - not really a big deal, I thought, I'll just build another virtual server and install it again; besides, I could use the practice for when I have to do it for real. First mistake: foolishly I thought that since I'd done it before, and blogged about my experiences, it would be a smooth process. Seems I was wrong. I encountered several of the errors I had seen previously and a few I hadn't. This time around the QC Service failed due to a Java issue - the 'java virtual machine failed to load'. I tried at least a dozen different ways of setting JAVA_HOME but none seemed to work. I reverted to editing the .bat file with little success, and at one point I even installed JBoss standalone. This was the turning point, as I discovered that JBoss would run with all the settings as they were. Now having narrowed in on the problem (I thought) I investigated the JBoss installed by QC, only to follow it down a rabbit hole the likes of which Alice in Wonderland would have enjoyed. Though at the end of this wonderful adventure I discovered I'd been looking in the wrong spot - doh. All of this adventure was on my virtual PC at home. We then tried it for real on the new server.

Not surprisingly, we encountered the same issues on the new server as I had had on the virtual one, but this time we were quicker in debugging the issue. We were able to get the QC 11 instance up and running, albeit through the .bat file rather than the QC services. It was at this point that we encountered what would be the Mount Everest of problems. When we confirmed that QC had started, we navigated to the URL using the browser which was installed on the server (Win2003), which happened to be IE6. A simple detail which I overlooked in my excitement to install the trial, and the fact that all my playing was on a Win2008 server which has IE7 as default. QC 11 requires IE7 or IE8. BOOM - our journey had come to a screaming halt. The network that we work on is closed to the outside world - entombed in operating systems of days long since gone. The governance of this network doesn't allow for changes to occur without a lengthy consultation process etc. So now we're almost back to square one, but not quite, and this is the kicker - the simple solution, install QC10 on the new server, seemed to be the obvious choice, but again the best intentions were thwarted: no one can find our QC 10 installation media! Guess this is still a work in progress :-)

Monday, November 29, 2010

Interesting bug?


On the weekend just gone, I was the designated driver, driving my sister and brother-in-law plus a few of their friends to a party in Double Bay. When I jumped in the car the fuel light was on, and the trip meter indicated we had a range of 60kms. As the trip progressed the range dropped to 50, 40, 30kms as one would expect. I was a little worried given I was in an area I'm not at all familiar with, but my sister-in-law promised me it'd be fine - to quote "with a range of 60kms you're in good shape!"
So after dropping them off at the party I started the return journey and the range decreased to the point where it reported the range was 0 kms! I wonder if this is a bug in the trip meter software - as the engine is still running, meaning there is still some fuel and hence some level of range. I'm not sure of the age of this car, it's not brand new, and maybe the manufacturer has fixed this in the next version? I know that in my car, which is a year old, once the range gets down to about 40kms the trip display simply reports 'Fill with Fuel' rather than a specific value - I guess that's one way of doing it!

Failed Test Cases/Scripts - is there another dimension we should consider?

My thought for the day - just because one test step fails, should the whole test fail?

Today I had cause to ponder the status of failed test cases. I'm sure you've been in the situation where, as a test manager, you've been asked if the system is 'ready' or 'is testing finished'. I was asked this question today for the umpteenth time. The situation was this: we are currently testing an environment which has been refreshed to the current production baseline (without data!) and we ran our regular 'sanity test' to see if everything was in the correct state. Situation normal. The sanity test set that we run is a high level set of tests that cover all the key functionality and integration points. Often we run this set of tests and there are failures, i.e. Exchange isn't working or the search crawl hasn't run. Today when I reported the results back to the Mgt team, for some reason the words coming out of my mouth didn't seem to make sense. The stats were something like 22 passed, 4 failed and we are good to go. Short pause. We've raised 3 defects and we are good to go. Hmmmm, this is where I started thinking: so we've got failed tests and defects but we are still good to go? The fact of the matter is, this is a risk based decision, in that the risk of the failures and extant defects in the system causing a loss of functionality or adversely affecting the user experience is low. Still pondering, I think the flaw in the presentation of the results (as I give them) is that all too often we consider the results of the tests somewhat independently of the outstanding defects.

My thoughts today brought me to this conclusion: each of the test step failures should be given a severity which aligns to the defect which should be raised as a result of the failure. Then at the conclusion of the test (i.e. when all the test steps are executed) the calculation of Pass/Fail should be an aggregate of the step failures (or not). For each organisation this matrix and algorithm would need to be configured and tweaked, but I think it has merit :-)
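As a rough sketch (not any tool's actual algorithm), here is one way that step-to-test aggregation could look in Python. The severity scores, threshold and verdict names are purely illustrative - they are exactly the bits each organisation would configure and tweak:

```python
# Roll step-level failure severities up into an overall test verdict.
# Scores and thresholds are illustrative only.
SEVERITY_SCORE = {"critical": 10, "major": 5, "minor": 1, "cosmetic": 0}

def test_verdict(step_failure_severities):
    """Given the severities of any failed steps, return an overall verdict."""
    if not step_failure_severities:
        return "Passed"
    total = sum(SEVERITY_SCORE[s] for s in step_failure_severities)
    if "critical" in step_failure_severities or total >= 10:
        return "Failed"
    return "Passed with defects"   # failures exist, but they are low severity

# Example: two minor step failures don't sink the whole test...
print(test_verdict(["minor", "minor"]))   # Passed with defects
# ...but a single critical step failure does.
print(test_verdict(["critical"]))         # Failed
```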

I think reporting on test status in terms of failure severity will bring more meaning to test results (another dimension!). We could further enhance the reporting by assigning each of the test scripts a priority and then reporting on the number of high priority tests with severe failures. Oh the possibilities!

Sunday, November 7, 2010

QC 11 - the saga begins!

I learnt recently that a new version of HP Quality Centre is available, Version 11 - check www.hp.com. Needless to say I downloaded the new version and set about preparing a new virtual server to install it on. Sounds simple enough, just fire one up and away we go.

Oh no no no. First, I couldn't remember the administrator password for the virtual server I set up some months ago for some other new toy adventure! This resulted in the deletion of the current machine and the birth of a new one to take its place. This was simple enough, except I forgot the lesson that I should have learnt last time - note to self: a hard disk with 10 gig of storage is only enough to install the server OS (Win2008), nowhere near enough for HP ALM to be installed... Or SQL Server, which I also found out part way through needed to be installed - doh. By this time I was becoming a big fan of virtualised servers and being able to just delete one, and create a new improved one, in a matter of minutes.

Now onto server build three (3) and we are looking like making some progress. Yahoo, we are past the disk space issue, only to smack into another brick wall - the database doesn't want to know about this ALM product: CONNECTION REFUSED! A quick google and I found out that by default the SQL Server install doesn't enable 'Named Pipes' or 'TCP/IP', one of which is required for the ALM installer to connect to the database! Solved that problem with a few clicks of the mouse. I thought we must be close now, but no. The final error was that the service failed to start - I believe (not confirmed) that it was JBoss that couldn't find the 'java virtual machine', which stopped the whole process in its tracks. Some more googling, and using some of the faint memories from when I studied Java programming, I managed to set the JAVA_HOME and Path variables to my freshly installed JRE 6 - restarted the services and hey presto, Quality Centre 11 roared into life.

QC 11 has a tricked up user interface which has a Java Swing feel to it, but I had to install a C++ security thingy when I installed the QC client on the workstation, which has placed a small doubt in my mind - maybe it's a .Net web interface????

Here's a picture - what do you think?



So now I'm up and running, feeling somewhat proud of myself for managing to get the installation completed given my technical shortcomings. In the same way that my friends who are infrastructure engineers think that testing is repetitive and mind numbing - well, after 3 server rebuilds, 2 SQL installs, 3 ALM installs and countless service restarts, I think it might be a case of those in glass houses shouldn't throw stones!

QC 11 - Sprinter vs CodedUI Tests

As per my earlier post I've started playing with QC, I mean ALM 11, with an upgrade looming at work. I didn't notice when I first installed QC, but upon further investigation of 'What's New' I stumbled upon 'HP Sprinter', which is set to revolutionise manual testing (according to the blurb).

My first thought was I've heard it all before, with business process testing, automated testing that can be done by non-techies and the like. However when I watched the promo video I was immediately struck by the similarities with a demo of Microsoft's Coded UI Tests I saw earlier in 2010.

It felt like a moment of deja-vu when the presentation began; the image that triggered the memory was when the video splashed up the Mercury Tours home page and started to enter data. In the Microsoft demo they use a similar website, for buying model planes of all things!

On face value both of these 'add ons' appear to offer the same sort of capability, in that you can record whilst running a manual test, and then re-use the coded UI test to retest a defect or save time in data entry.

I don't claim to know either of these tools well, or to have used them other than in a trial, but I like the direction they are moving in...

More to follow.

Friday, October 29, 2010

Scott24 Mountain Bike Race


Recently I competed in the Scott24 hour mountain bike race, as part of the CSC team. It was a great event and I had an awesome time. My times were by no means record breaking but they were competitive. The first lap was 1hr 22 mins, catching my changeover partner by surprise as they were sure I'd be another 25 odd mins! This earned me the new nickname of "Tele Tubbie powerful little legs". The night lap was 1hr 45mins, which I thought was good because I started the lap @ 0115 in the morning :-) My final lap was 1hr 25mins, which surprised me because it felt like I stopped a lot more often than on the previous laps... It was great to finish with my Girls waiting with "Go Daddy" signs at the finish line.

Around the track there was a mix of automated cameras and photographers so I thought I'd share a few of my favourite shots....

The final Lap, close to the top of the course. I liked this photo because in the background you can see Canberra City.




The Night Lap... I really like the ghost style image which still manages to capture the intense look on my face. Night riding on the side of a mountain certainly had my blood pumping. The quiet stillness of the middle of the night is so peaceful. Though I did come very close to crashing: as I came down the last section of mountain I lost concentration for a split second and hit a rut which bounced me out of the seat, and also meant my feet came off the pedals. The bike fishtailed for what felt like 100 metres and the fellow behind me was calling out "hold on mate, hold on" and somehow I managed to - phew! The fellow went past and said, predictably, "that was close, you're a lucky fella!" It was only about 500 metres to bed from that point :-)


Lap 1 - If I recall correctly this photo was at the first photo station, and I almost came off because, on top of the automatic camera, a photographer jumped out from behind a tree to take another couple of snaps...

Saturday, October 16, 2010

IE9 not supported on XP...

Recently I saw that IE9 had been released in beta - liking to play with new toys, I thought I'd give it a go. I was quite surprised that I wasn't able to upgrade as my machine runs Windows XP...

This post went up partly finished - so now for the rest.

Where I currently work, and the client that we work for, both use Windows XP as the standard operating system, hence my surprise at the predicament I found myself in. I canvassed the office about versions and I wasn't alone in thinking that not allowing IE9 to be downloaded for use on XP seemed odd given the market share. I did a little bit of research and my thoughts were confirmed. W3Schools has XP as the most popular OS with 51.7%, more than double that of Windows 7, its nearest competitor...

Thursday, September 9, 2010

OzAgile cancelled

I learnt during the week that the OzAgile conference has been cancelled for 2010, hopefully to be run in 2011. I was going to be a presenter, alongside many of the Agile 'Rock Stars' who've written many of the books that are referenced throughout the agile world... Stay tuned, as now I'm looking to present at another conference some time soon!

- Posted using BlogPress from my iPhone

Saturday, August 21, 2010

A simple example of the escalating cost of defect detection


We've all been taught that the cost of remediating defects found later in the development/testing cycle is larger than the cost of remediating those defects detected early. I've seen the statistics, heard the teachings and also taught this lesson once or twice myself.

Last week I came across an excellent, but simple example of how the cost escalates, and it's not just because the dev and test team need to work extra hours to effect the fix!

The example of cost came as I participated in/observed the process of investigating 2 defects that were found in production a few days after a release.

Upon reflection I've looked back at the regular defect triage meetings we held during the testing cycle; on average there were 3 participants: the development manager, either a senior developer or a senior tester, and myself. The meetings often lasted 30 mins, so to keep the calculations simple, if each of the resources who attended cost $100 per hour, then the cost of each of these triage meetings was $150.

Before UAT commenced, we had a meeting with the users to walk through the outstanding defects going into the UAT phase (and our plan to address them). At this meeting we had the usual triage team, plus 3 users and an extra 2 from the dev/test space. This meeting went for 1 hour and, using the $100 per hour per resource rate, it cost $800.

At the conclusion of UAT another two meetings were held, with an additional user representative (senior stakeholder) and our project manager. These meetings each lasted an hour, and therefore cost (in our simple model) $1000.

Now when the production defects were discovered on the Tuesday, we had several more meetings, with between 12 and 18 attendees, costing $1200-$1800 per meeting.
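For what it's worth, here is a back-of-the-envelope version of the same sums in Python, using the simple model above of $100 per attendee per hour. The 15 attendees for the production meetings is just my assumed midpoint of the 12-18 quoted:

```python
# Simple meeting-cost model: every attendee costs $100 per hour.
RATE = 100  # dollars per person per hour

def meeting_cost(attendees, hours, rate=RATE):
    return attendees * hours * rate

meetings = [
    ("Triage during test cycle", 3, 0.5),     # $150
    ("Pre-UAT defect walkthrough", 8, 1.0),   # $800
    ("Post-UAT review", 10, 1.0),             # $1,000
    ("Production defect meeting", 15, 1.0),   # $1,500 (assumed midpoint of 12-18)
]

for name, people, hours in meetings:
    print(f"{name}: ${meeting_cost(people, hours):,.0f}")
```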

As can be seen in the table below, the cost of the defect triage meeting increases by 500% by the time we were triaging the defects found in the production environment!



The columns on the right hand side of the table above record the number of defects discussed at each of the meetings (made up of course!), and as you would expect the number of defects discussed at each of the meetings decreases. However the most interesting point is that the triage cost per defect spirals, to the point where it is 37.5 times more expensive to triage the defects found in production...

My final point to note is that the figures described above don't include ANY actual coding/configuration or testing effort - so it's easy to see how the costs spiral upwards!

Test early, test often. Make the smart choice, test wisely and make an early investment to fix defects rather than accumulate the technical debt :-)

Sunday, July 11, 2010

Certification in software testing and a drivers license...

Recently I have been thinking about certification of software testers, for no particular reason other than it's been on my mind. I've been thinking along the lines of 'what lessons can certification learn from the evolution of the driving test?'

So it came to me very late one night, the process of gaining a certification in software testing has many similarities to gaining a driver’s license. Regardless of the level of the license (Learners, Provisional or higher) it's just the beginning of greater learning (through experience and further training).

If you think back to when you first got your license, it would have involved some study (hopefully!) of the road rules and a test - maybe written, online or practical, or a combination of all three?

After you 'made the grade' you were allowed onto the open road with some varying level of supervision. Yes, the police and road rules are a form of supervision!

So, what does this have to do with software testing? Well, I've come across many a tester who has a certification and believes that that puts them ahead of the pack. But if we compare this to the newly licensed driver, the certification identifies you as having a 'known' level of understanding in the subject area. It doesn't mean you know it all, and it most certainly doesn't mean you shouldn't continue learning :-)

The other aspect of the driver's license analogy I thought a lot about was how the process of getting a license has evolved. I recall a conversation with a developer some time ago when I was going through the process of getting my motorbike license. He said back in the day when he got his license, the process involved meeting up with the local policeman and demonstrating that he was able to start, stop and turn the bike. The final part of his test was conducted in a car park which had a gravel surface - just to mix it up a little! All in all this process lasted about 30 mins. When I compare this to my experience, they are worlds apart. The process I went through to get my learners involved a full day's training - a quarter of which we didn't even sit on a bike, let alone ride it. Once I passed that course I was allowed to ride a little bike, at a restricted speed - I now had my learners.

So the point is? Well, the evolution of the license tests, including extended learning before the granting of a license, can be traced directly back to the correlation between the level of driver training and the frequency of accidents. I believe that sometime in the future certification in software testing will evolve to be more practical, as we realize that being certified doesn't guarantee testing results.

It's my belief that, as an industry embracing certification, we need to evolve our thinking and take some lessons from other industries and certification processes. The lesson I'd like to see taken up is that newly certified testers should be paired with an experienced mentor to help them grow into polished professionals :-)

Saturday, June 5, 2010

What a way to celebrate all that is testing by going to watch one of the hardest and most physical types of testing there is - an international rugby test match! As the temperature dipped to a chilly 7 degrees, a group of K.J.Ross & Associates staff, partners and clients made the small trek to Bruce stadium. It's the first time the Wallabies have played in Canberra for several years and it was great to be a part of it.

The rumour on the radio in the morning was that the hotel where the Fijian team was staying had run out of blankets! Luckily for Michael Larsen and I, one of our guests, who happened to sit between us, was a die-hard rugby fan who'd sat through many a cold rugby encounter. He waited about 5 minutes and then produced a blanket of his own and kindly offered to share - thanks Pete! One of the other guests also showed some good early form, pulling out a stubbie cooler, or warm hand preservation device - sadly in this instance there wasn't enough to share :-(

Unlike the teams we were watching, the starting line up for team KJRA had several last minute changes! We lost one of our original guests to injuries sustained during a half marathon, and another to family commitments. Luckily, like all good teams we had depth to call on, and the final team was decided about 60 minutes before kick off.

The first half was an arm wrestle, with the Wallabies only slightly in front at the half time break (14 - 3). The half time entertainment was hardly noticed by team KJRA. Instead of oranges we opted for hot chips! Really a MasterCard moment.

The second half was not so close, with the Wallabies clicking into gear and running away 43 points to 3.

Needless to say a great time was had by all :-)

Monday, April 12, 2010

Interesting quotes from the books I've read recently...

I've not long finished 'The Speed of Trust' by Stephen M. R. Covey. I came across this book by referral (of sorts). I was attending a session at the KJRA Summer School, presented by Dr. Mark Pedersen on 'Test Project Management', and Mark referred to this book to demonstrate a point he'd made.

I can't remember the specific point, but I liked the sound of the book, and I identified, through my own professional experiences, with the times where my managers had just trusted me to 'do testing'.

I'm convinced (imo) that within the software development industry testing is still considered a 'dark art' and/or a 'necessary evil', and therefore not well understood (a topic for another time)! So trust in the test manager and the testing team is critical to success. So often the stakeholders we test on behalf of take our word (trust us) on the assessment of the defects we've found, and the results of the test cases we've run.

I also found myself nodding my head and agreeing out loud when the book described the effect of high and low trust on speed and cost of everyday transactions, and the notion of 'trust taxes' being applied through either lower speed or increased costs - all true.

Here's a couple of quotes I noted, and the reasons why they made sense or I identified with them - six (6) to be exact:

1) "You should not be satisfied with being a victum, nor with being a survivor. You should aim to be a conqueror." Dr. Laura Schlessinger

Recently I worked on a gig where I was just not seeing eye to eye with all whom I should have been, and it made my job soooo much harder to do. It was the lowest of low trust environments, and I was an outsider - even worse, a contractor - oh no. But anyway, the role nearly broke me, but it didn't kill me and I am certainly stronger for it. At first I thought I had just survived, but then after returning I found out that I was part of a watershed in perception, and assisted to change the direction for the better. I liken my influence to a tug boat assisting a large ship to change course; the ship with its rudder fully to starboard will eventually turn around, but with a little tug boat pushing at the bow, it turns a lot quicker!

The conqueror bit came through another conversation which went something like "the boss said in a meeting, I want reports like Andrew used to give me, ones that actually give me information... and I also want them daily like he used to do!" I thought, that's great; after all the resistance I encountered obtaining the data for those reports, 'tis great to know that now they have to do it my way!

2) In reference to training staff - Question: "What if you train everyone and they all leave?" CEO Response "What if we don't train them and they all stay?" - anon CEO.

It's often been a discussion topic, the risk involved if you train up the young talent and then watch them walk out the door; and the same is said for contractors in an organisation - they should train themselves. But how does this assist your organisation to grow and become more efficient? It's an interesting point; all too often I've seen staff leave because another company has offered/promised better training and/or options for career progression. My personal experience has been that allowing staff to go on training has been win-win. The staff have gained some skills, and that means I can push them into areas where I couldn't previously...

3) "we all make mistakes. If you can't make mistakes, you can't make decisions." Warren Buffett.

This is a great comment; all decisions involve risk, and if people are not empowered to take some risks then there is little chance of reward.

4) "There are no facts, only interpretations." Friedrich Nietzsche

I've always been of the opinion that there are three (3) sides to a story: his, mine and the truth somewhere in the middle. This quote challenges that view a little. I think it might be equally true to say "There are no defects, only interpretations of software features!"...

5) "We judge ourselves by what we feel capable of doing, while others judge us by what we have already done." - Henry Wadsworth Longfellow,

Everyone who's ever been knocked back for a job at the 'next level up' knows how one feels...

6) Tom Watson "If you wanted to increase your success rate, double your failure rate."

This reminded me of an Agile development comment I heard once; in terms of failing it was 'Fail fast, fail often, fail better', which is along the lines of "If at first you don't succeed, try again".

There were so many more pages that I folded over, with highlighter or pen underlining just like I used to do while studying at university, but these would have to be the top 6.

Monday, April 5, 2010

New(ish) Testing books... Part 1

Whilst preparing to present a course recently I stumbled upon several testing books that are relatively new (published 2009). The first book, 'Exploratory Software Testing', is the latest (?) release by James Whittaker, author of titles such as 'How to Break Software' and 'How to Break Software Security'.

Initially I came across this book late at night while watching the keynote presentation from StarWest 2009 on stickyminds.com. Loving the ideas that James presented in the keynote, I searched the web, and ordered the book that same night (well it was early the next morning by then!).

I have mixed feelings about this book. I love the 'Tours' metaphor that James describes as the basis of the testing approach he implemented at Microsoft, and then Google. It's (the metaphor) great, because everyone has travelled and been on a tour of some sort - be it a school trip or an overseas adventure. This means that instantly, when speaking to someone about creating a 'highlights tour' of their application, there is a connection and a mental picture created.

It was the definition of the tours, their derivation, that I thought the book would have gone into in more detail. The webinar touched on how the tours were created, and the book gives a few paragraphs to each of the established tours, but it didn't go into much further detail (that I could find).

I understand each application is different, so each time a tour is created it will be unique. But I was expecting some more detail on James' experience in creating the tours. Did they whiteboard the tour outline and then overlay the 'stops' or 'highlights' of the application they were testing on it? Or was it in reverse: were all of the application functions identified first and then categorized?

One of the thoughts I had was that maybe the intent of the book was to expose the thought process and the idea, rather than be a text book with specific examples... Anyway, I'd recommend this book for any tester; it covers some really interesting topics related to exploratory testing, and testing in general. James' vision for the future of testing is very exciting!

ISBN-13: 978-0-321-63641-6

Sunday, March 21, 2010

Test estimation hokey pokey

Recently I've been involved in estimating the testing component of several tender responses. As is usually the case, when responding to tenders, the amount of information you have to work with is minimal. This is not just restricted to tenders, as I recall being involved in the estimation of testing for projects in the proposal stage where the business case is all I had to work with. So as with all estimation activities there are assumptions that have to be made, and importantly declared in your estimation model.

In this post I'm focusing on how I estimated the testing effort for a tender, for which I had access to an immature Functional Performance Specification (FPS). The FPS documented each of the system requirements, and also had an annex which detailed a series of fleshed out business scenarios (use cases). Each of the use cases outlined, via a process diagram, the main steps in the 'happy path' and also defined the most likely (but not every) alternate path. This information, even though immature and incomplete, proved invaluable in constructing my testing estimate model.

The way that I approached the estimation was using the assumption (a well educated and researched guess) that each of the requirements would need at least one test case to be created in order to verify its compliance. I also allowed for an additional test case per alternate path identified in the use cases. This brought the number of estimated test cases out at 160. History tells me that (if we win) once we start the test analysis and design there will be instances where several requirements are covered by a single test, and other requirements that will demand several tests.

Of course, in the situation where you are estimating for a known application, there are heaps of metrics surrounding test cases and requirements that you should be able to draw on to assist with your estimation.

Next, I calculated the amount of time it would take to analyse, design, document and verify the test cases (on average). Based on a verbal description of the system we'd be testing, and using all the information I could find, I determined that 2 hours per test case should be enough. And this is where the hokey pokey started!

Some of the estimation team disagreed with my estimate, stating that it's not possible to analyse, design, document and verify a test in 2 hours. I agreed to disagree, as there are a number of reasons why you might not be able to, but equally as many reasons why you certainly could; it all depends on the complexity of the requirement being tested, the system implementation and so on. So, using the principle of 'you can never have enough time to test', the team then agreed that 4 hours or 1/2 a day would be a more palatable estimate, and so I increased the estimate...
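A quick sketch of the arithmetic behind the to-and-fro. The requirement and alternate path counts below are placeholders I've chosen so they add up to the 160 test cases mentioned above; the 2 hour and 4 hour figures are the ones from the actual debate:

```python
# Hypothetical counts - the post only gives the resulting 160 test cases.
requirements = 120          # assumed number of FPS requirements
alternate_paths = 40        # assumed number of alternate paths in the use cases

# One test case per requirement, plus one per alternate path.
test_cases = requirements + alternate_paths
print(test_cases)                              # 160

original_estimate = test_cases * 2             # 2 hours per test case
palatable_estimate = test_cases * 4            # the 'more palatable' 4 hours each
print(original_estimate, palatable_estimate)   # 320 vs 640 hours
```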

Testing is like a balloon being filled with air: the more air you put in, the bigger the balloon. Similarly, if you let some of the air escape the balloon decreases in size and takes up less space. Determining the 'optimal size' of the balloon all depends on where it's to be used, and its purpose. The same can be said for testing: it all depends on the type of testing, and what risks you are trying to mitigate. In the event that the balloon bursts, I believe it's more a case of poor management of the testing process. By trying to squeeze too much testing into a confined time box, BANG, the result can end in a catastrophe. It's often not the testing that suffers, it's the quality of the application released. One of the biggest lessons I learnt when first estimating testing was that the testing schedule had to include time for defect remediation. I found it necessary to include because all too often the dev team's schedule finishes once the application is delivered to testing!

Back to the hokey pokey: "my estimate's in", and then, after having submitted the initial adjusted estimates, the review team came back to say the cost of testing exceeded the cost of development - and it's a COTS product! They asked if there were any activities that we could cut back. LOL, I said, well, we could go back to the original estimates I gave? Of course this slashed the estimate by 50% (undoing the original doubling!). The estimate was resubmitted.

Just like the hokey pokey, my estimate went in, it came out, it went back in again and then I had to shake it all about! Now we have to win the work to see who was closest at pinning the tail on the estimation donkey! Stay tuned.

Tuesday, February 23, 2010

Defects are requirements?

It was a throw away line over a beer: "Defects should be written as requirements..." The context of the conversation was delivering a testing training course and the topic of defects.

At the time I had a "Ding" moment where I thought it's such a simple concept. All testers understand that defects are important, but as possibly the only tangible outcome of testing we should give them more focus.

When I think of the conversations about testing I've had of late at go/no-go meetings, almost all have glossed over the test results and focused on the outstanding defects. If I calculated the amount of time I've spent translating and interpreting what the functional impacts were, the associated effects on users and the risk to data - well, it would be days. I think that if I approached the documentation of defects with the same discipline I'd expect to find in the documentation of requirements, then I'd be that much better off.

Speaking of requirements, there is a lesson we (as testers) should learn, and it's one we try to teach on a daily basis. The quality of requirements has a major impact on the quality of the system built and the testing conducted. Well, our defects are mini specifications of how a particular system function should work, and if their content is of poor quality, then the probability that the fix implemented will be sub-optimal is increased.

Lastly, a reason for making an effort to document defects thoroughly is that one day down the track you might have to retest or reproduce the defect. If you've only entered scant details in your haste to raise the defect, then it makes your job that much harder! I know I've had many instances where I've read a defect report that I'd raised and had to scratch my head to fill in the gaps I left.

Friday, February 19, 2010

It's the People; It's Always the People

An email announcing the conference program arrived in my inbox and this particular line caught my eye: "It's the People; It's always the People", which turns out to be the title of a keynote presentation to be given by Johanna Rothman at the Better Software Conference 2010.

Without even reading the abstract of the presentation I was in violent agreement. So far through my career I've found the development of relationships with those whom I need to work closely with, and those whose services I consume, invaluable. In many instances it's been the relationship with people which has enabled me to achieve my goals, and the lack of relationships which has hampered my progress.

All relationships have a foundation built on respect, trust and symbiotic needs; all relationships require maintenance as they change over time. The ability to form and maintain good working relationships is an essential skill for any testing professional, and is a key factor in success...

Wednesday, February 17, 2010

A dartboard and levels of testing?

"We need to create a regression test matrix" is a statement that I have heard several times. In conversations prior to todays I've nodded wisely and agreed, but not acted. Today though I nodded and smiled, there may have been even a little bouncing!

Why? I hear you ask. Well, today a potential solution appeared before my eyes whilst I was trying to solve another problem.

Taking a step back, today I was tinkering with various diagrams to display the different levels of our manual, functional testing. The diagram was to be included in the release test plan. I created three (3) models which displayed Acceptance aka ‘Full Set’, Regression and Sanity testing covering different proportions of the system...

The models are simple in construction and their purpose is to highlight the differences in coverage, and that none of the levels of testing covers ‘all’ of the system functionality.

The relative size of the objects in the diagram, though not explicitly, gives an indication of effort to complete that level of testing. Relatively speaking the full set of tests is somewhere around 50-100% larger than the regression suite, which is 50-100% larger than the sanity test suite.

At this point I discounted the Excel pyramid graph and started to focus on the boxes and circles. I feel that both pictures provide a reasonable representation, but there was something about the circles that I kept coming back to.

Fast forward an hour and I'm in the weekly status meeting and, due to our recent production release and impending production patch cycle, the "we need a regression test matrix" comment was made. We discussed that the matrix should take the form of an Excel spreadsheet and that we needed to conduct tests across the system as well as focus on the areas of change. My eyes began to widen! Circles were the choice, and I thought the easiest way to explain where we'd go into more detailed testing was to draw smaller circles around functionality - and then bang, I hit the bullseye so to speak. If you picture a dartboard, it's made up of wedges (20), each with two large sections and two bands.

So to my original circle diagram I added several wedges, as I already had the rings (test levels) present. The result is shown below. Each of the wedges represents a slice of system functionality, or a grouping of functionality.

Now each time it comes to patch testing I can plot which areas of the system are going to be tested more thoroughly using a set of darts, along with a steady hand and intense concentration.
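If the dartboard had to live in the Excel spreadsheet that was asked for, a minimal sketch of the idea might look something like this. The functional areas (wedges) and test levels (rings) below are made-up examples, not our real system:

```python
# Rings of the dartboard, outermost first: how deeply an area gets tested.
TEST_LEVELS = ["Full Set", "Regression", "Sanity"]

# The regression test matrix for one hypothetical patch: each wedge
# (functional area) is assigned the ring (test level) it will receive.
patch_matrix = {
    "Search":          "Full Set",    # changed in this patch - test deeply
    "Document upload": "Regression",
    "Workflow":        "Regression",
    "Reporting":       "Sanity",      # untouched - sanity level only
    "Administration":  "Sanity",
}

for wedge, ring in patch_matrix.items():
    print(f"{wedge:<16} -> {ring}")
```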

Thursday, February 4, 2010

One from the archives

Have you ever been asked 'why do you write test cases?' What's your answer? There are several that spring to mind: because that's the way we test software, because the contract said we had to, the customer demands we do, for auditing, so they can be given to the testers to execute... My belief is that regardless of the reason you think you write test cases, the fundamental reason behind test case documentation is to build and capture knowledge about how a system (or function) should work.

A colleague and I were once debating the merits of getting test cases peer reviewed; his position was that peer reviews provided demonstrable value to the QA process and mine was that peer reviews provided questionable return on investment! After what seemed like a rather long time, the point came where my colleague said 'well, why do you think like that?' The answer I gave was this: 'well, I believe that the process of writing test cases is how the tester becomes one with the system, learning about what it's meant to do, how it should do it and so on...' And therefore the reason that I question the value of a peer review is that, more often than not, the reviewer doesn't have enough detailed knowledge of the system or functionality to critique the test cases.

To which my colleague replied 'that's a bit Zen, Drew!'

From that moment on I've seen the process of developing tests as a learning exercise! Needless to say, many years down the track I have used this epiphany as the foundation of my approach to software testing. I still believe the fundamental reason for documenting test cases is to acquire knowledge about the system, so that when it comes to execution the tester is able to evaluate the output and determine if it's valid.

I've used this fundamental belief to underpin the methods and processes I've implemented at various sites, and no more so than when I started a new job at an organization that uses 'Agile Principles'. My introduction to Agile was rather confronting at the time, coming from a predominantly 'waterfall' background.

Being somewhat new to 'Agile-iterative' development I searched the internet and text books, reading about Agile and the principles of developing software using these methods. I learnt very quickly that it's important to note that no two implementations of Agile are the same; Agile is a philosophy which embraces certain values. So when I talk about our Agile, it's going to be different to your Agile and also different to their Agile, but look beyond the detail at the bigger picture about the process of learning.

The most important lesson I learnt rather early on in this new job was who to turn to if something didn't make sense; sometimes it was the development team and other times it was the business customer. This isn't unusual - in every project I've worked on there were one (or several) key people who were the fountain of knowledge; the difference here is that there was no specification document to scribble on! It's not that documentation doesn't exist, rather that it doesn't exist in the same format as most people are used to! In many 'Agile shops' the specifications are living - contained in stories that grow as the system grows - this is a story for another time. As the months passed and I came to terms with our organisation's development model, I found myself questioning and comparing, trying to align the processes we were using with the ones that I'd used in previous testing engagements. Other than conducting the testing of our applications, I was also attempting to teach a new tester the fundamentals of testing, trying to apply the teachings of my classically trained testing methods in a world where the fundamentals are rather different. It was through this teaching that I came to the following conclusion: the differences in our development model when compared to the previous testing models were that 1) the time to learn the system (i.e. develop test cases) was far reduced and 2) the primary source of information for the learning was the masters within the project.

And so was born the analogy which I use regularly. In regards to learning the system, I like to describe test case development in an Agile-type environment as knowledge acquired through learning by doing, somewhat hands on, akin to an apprentice learning through an apprenticeship. The similarities are many: an apprentice begins their learning with an idea about the job they are going to be doing, and is then set small tasks by the master. As the apprentice learns and grows as a tradesperson, the master increases the complexity of the tasks and the apprentice becomes more capable of completing them. The hands-on learning is supplemented by small (in comparison) structured teachings, which are then reinforced on the job.

Testing within our development process involves a very similar structure. At the commencement of the iteration the testers gain an understanding of the features that are to be implemented. Then, as the iteration progresses and the stories begin to be delivered, the tester tests and learns, then tests some more. The job that the tester does is overseen by the business customer and further lessons are provided by the developers. As the system grows, so does the tester's understanding.

The other important point to note is that through an apprenticeship the number of text books studied and the volume of notes created is minimal; compare this to the number of texts and the volume of notes that a university course produces and the picture is somewhat complete. You see, like a university student, a tester on a waterfall-type project attends design and information sessions (like lectures), takes notes, then studies the design documents (text book or course notes) and then produces test cases (assessment tasks). During the design and build stage (semester), this process is repeated several times, and then when it comes to test execution the tester's knowledge of the system is tested (exams!). One other important comparison is the ratio of students to teachers and apprentices to masters. In the famous words of Jedi Master Yoda, 'Always there are 2, no more, no less, a master and an apprentice...' In our development model the number of people involved is much smaller, to ensure the lines of communication are short and fast. Compare this with the university model, where each lecturer may have anywhere up to 100 or more students and several tutors to assist with the learning. An interesting point.

So to summarize: if documenting test cases is about learning the system, then some of the differences between Agile and Waterfall testing environments can be seen as the differences in learning philosophies used in universities and apprenticeships. Focus on the process of learning, not the outcomes that could be achieved through the learning method.

Tuesday, January 19, 2010

KJRA Summer School 2010, come fly with me...

Great food, great company and thought provoking discussions - what more could you ask for? This was my first KJRA Summer School and for me it brought back many memories of my formative years attending a boarding school in western NSW. The location of summer school was 'The Women's College' at the University of Queensland, which is steeped in history. The halls of residence where we stayed opened in the late 1950's and reminded me of the accommodation I lived in whilst completing year 12. At the time, the year 12 accommodation was the pinnacle: single rooms with a balcony. Oh, I waited patiently "doing my time" for 5 years to ascend into "Hindmarsh". The memories of hot summer nights with only a fan for cooling, sleeping without any covering, flooded back. However the most confronting thing about attending summer school at a women's college was that in the bathrooms there were full length mirrors, and round mirrors over the sinks which didn't just reflect, but magnified one's face so that all the blemishes became blindingly obvious! Back in the day, the bathrooms where I went to school had only a single mirror over one of the sinks for those of us that needed to shave! So being confronted with one's unshaven, just woken up reflection was a memory I'd rather not have gained!

I do love the boarding house/summer school/residential school setup where the focus is on community and communal activities. At school, as at summer school, you never ate alone; there was always someone to share a conversation with at meal time, talking about the events of the day, sports or hatching great plans for the night's activities! And that's one of the other great things: group activities that inspire laughter and cooperation. Back in the day we played pool in the common room, or went 'out bush' into the forest in our younger years building cubby houses for all manner of adventure, and during summer swimming in the mighty Murrumbidgee river was a great way to pass the time and keep cool. At summer school some of the old favourites were played: the egg and spoon race, sack races and the three legged race began the evening. Then we fast forwarded to what seems to be the in thing at the moment with Guitar Hero - World Tour. After several rounds, each increasing in difficulty, Todd was crowned the Guitar Hero hero! That was Thursday night; Friday night was 80's night and it was fabulous to see some of the attendees dress up! Oh the fashion crimes that occurred in the 80's :-)

I finished summer school on Saturday, checking out of my room which incidentally bore a plaque on the door frame, "The John Graham Miller room, endowed by Mrs Rina Miller", which I pondered for some time. It wasn't the only room to be named after a bloke - oh, if only those walls could talk!!!

I sat in on the Test Project Management course for the morning, which was the source of much inspiration on the trip home! On Friday, whilst delivering the Agile Testing course, I quoted a Prince2 trainer (somewhat strange I know, given some people's views on Prince2 and Agile) that I'd heard say "The Project Manager should be like a modern day pilot (with all due respect) in that he/she should plan the flight, get the plane ready, take off and then switch on the auto pilot - monitoring the progress at key points and making corrections as necessary. Then plan the landing, land and disembark the passengers". In the context of Prince2 it made a lot of sense, and on the flight back from Brisbane I juxtaposed some of my experiences as a test manager with those of a pilot, so come fly with me.

Thinking of myself as a pilot is an interesting thought and gives a much better visual than when I tried to explain the role of testers (many years ago) to a group of developers as 'lifeguards', to which they replied "in Speedos? LOL". But anyway, my comparison starts with the pilot receiving instructions for the flight that he/she is to take charge of today. I imagine there are several key details that the instructions contain: the aircraft, destination, departure time, flight crew, number of passengers and expected weather conditions en route. In my experience as a test manager I've received similar briefs about the project I'm going to be managing: intended release date, commencement date, testing resources allocated, scope of testing and warnings about the political landscape!

After receiving the flight instructions, even the most experienced of pilots will research the route, the weather and all manner of other details (assuming the flight assignment details are allocated several days in advance of the flight), steadily building the plan of the flight. On the day of the flight, the flight plan will be finalised with details of the flight path, check points, altitude, number of passengers, expected take off weight, boarding gate allocation, expected boarding time, and flight deck and cabin crew assignments.

In a test management context, this equates to the period after you've received the initial project brief through until your troops begin arriving on the ground (covering the development and documentation of a master test plan). You will have discovered all manner of details about the project, including who the key stakeholders are, key stage gates (i.e. change control windows), allocated manual, automated and performance testing resources, environment availability, and development and requirements status. All of these (and many more) elements are considered in the development of a test plan, and each influences how the plan is implemented.

Now with the plan matured it's time to implement. In the case of the pilot, a pre-flight briefing might be the vehicle to bring all the aircraft crew up to speed. Then it's off to the boarding gate to commence the pre-flight checks. The process of getting the plane off the ground involves numerous activities; in my comparison I've focused on the following: boarding the passengers (test cases), loading the catering (test data), storing the luggage below and, most importantly, loading enough fuel (defects) to complete the flight! To me, the process and activities surrounding taking flight are somewhat similar to the pre-requisites for commencing testing. Ideally, at the commencement of a testing program the parameters would be as well defined! Can you imagine knowing at the commencement of testing the outer limits of scope and duration? With a plane it's absolutely defined, as a plane can only fit X passengers, has a maximum take off weight and a maximum distance it can travel, therefore limiting overall capacity.

The nature of testing often prevents such well defined boundaries, but that's not to say we shouldn't have a detailed understanding of the testing we are about to undertake. In my comparison I've equated the passengers to test cases, the reason being that the success of a flight can be measured in terms of whether it arrived on time, which is a quantitative measure, and perhaps more importantly whether the passengers were satisfied with the quality of the flight in terms of service, comfort and timeliness. Test cases also have quantitative measures, such as 'were they all run', which by itself doesn't provide enough depth to make a definitive statement about success. One needs to look deeper into the results: how many passed, how many failed and what was the magnitude of the failure? More on that later.

Once all the passengers, luggage and catering are loaded it's time to depart! Even as a test manager I still get excited when a new build arrives in the test environment; it's like the anticipation of taking off - once you've boarded a flight you know you're ever so close and you just want to be in the sky! Once clearance to taxi is granted we push back from the gate and taxi to the runway. At this point in the testing process I like to have the test strategy signed off and the test cases reviewed and accepted - essentially all my 'Entry Criteria' met so testing can commence. The take off is by far one of the most exciting or scary parts of the trip, as the plane accelerates down the runway and the G-force pushes you back in your chair, and then the point where the plane lifts off the ground - I'll never get tired of that feeling! Often there is a little bit of turbulence or a few bumps as the plane ascends into the sky, in the same way as the first delivery of the 'build' often experiences some difficulty being deployed into the test environment. And once you're in the sky and the seat belt sign goes off, you know you're on the way... This is the equivalent of getting the green light to start testing, and so it begins!

Now we're in the sky, climbing to the cruising altitude and constantly monitoring all manner of things - weather, wind, cabin pressure and fuel consumption. Fuel consumption, I found, has some interesting similarities to defects - 'what the?' I hear you say. But think about it this way: as the requirements and code are developed, defects creep in and steadily build up; then once testing commences (at any level) these defects are discovered and removed. Compare this to the fuel for a plane. The fuel required to make the flight is loaded pre-take off, then as the flight progresses the amount of fuel consumed increases, to the point where by the time the plane lands the fuel load is minimal. This is exactly the trend that we want to observe with our testing: at the beginning of the testing cycle the defect detection rate is high, and then as testing progresses the rate decreases, corresponding to the number of residual defects decreasing. After all, defects are the fuel of testing! While there are defects there will be testing :-) It's not a perfect model, but I like it.

Whilst in the air the passengers consume the catering and the plane steadily makes its way to the destination. The pilot (or co-pilot) monitors the progress along the planned flight path, making adjustments as required, and all is fine. Then out of nowhere, that clear-air trouble maker of the skies strikes - turbulence - and it has the capacity to affect all on board. The pilot has several options depending on the severity encountered: turn on the seat belt sign, adjust the altitude and/or the flight path (as happened on my flight). In terms of testing, turbulence could be equated to a change in scope midway through testing, or test environment issues, or requirements or code instability. Any of these (and many more) issues often result in the test manager having to make adjustments to the plan! And it's just a natural part of the process.

Newton's law states "What goes up, must come down" and as the destination draws closer it's time to prepare the cabin for landing. All passengers need to return to their seats and place luggage appropriately. Throughout the flight the pilot is in constant communication with air traffic control, reporting location and status, and as the pilot prepares to descend, permission to land needs to be sought. The air traffic controller can be compared to either the stakeholders or the project manager, who need to be kept constantly up to date about the status of testing - also making decisions in regards to ending testing. In terms of testing, this is the point in the cycle where the test manager starts to make sure all is in order: all test cases have been attempted, the defect status is published and understood, requirements coverage is assessed and the test report is in draft. As with the take off, there are many factors that influence the 'smoothness' of the landing and the pilot must continually monitor and adjust whilst approaching the runway. In this period even minor issues can have disastrous effects, but more often than not the plane touches down with little or no noticeable issue to the passengers. Once safely on the ground it's time to taxi to the terminal and disembark the passengers.

Disembarking the passengers is a key 'milestone' of the flight, and it's at this point where the quality of the flight can begin to be assessed. Flights, like testing cycles, have both qualitative and quantitative measures of success. Quantitative measures, like all statistics, are open to interpretation, and qualitative measures are subjective, so it's important to gather both when considering 'success'. It's also important to have a definition of what 'success' is at the commencement of the project. Many would argue that simply reaching your destination is only part of a satisfactory outcome!

With the plane on the ground, and the passengers and their luggage disembarked, the pilot must complete some paperwork and then it's off to the next assignment. For the test manager wrapping up the testing cycle, the test report is usually the way to close out the testing for this project...