Monday, November 29, 2010

Interesting bug?


On the weekend just gone, I was the designated driver, driving my sister and brother-in-law plus a few of their friends to a party in Double Bay. When I jumped in the car the fuel light was on, and the trip meter indicated we had a range of 60kms. As the trip progressed the range dropped to 50, 40, 30kms as one would expect. I was a little worried given I was in an area I'm not at all familiar with, but my sister-in-law promised me it'd be fine - to quote, "with a range of 60kms you're in good shape!"
So after dropping them off at the party I started the return journey, and the range decreased to the point where it reported the range was 0 kms! I wonder if this is a bug in the trip meter software - the engine was still running, meaning there was still some fuel and hence some level of range. I'm not sure of the age of this car; it's not brand new, and maybe the manufacturer has fixed this in a newer version? I know that in my car, which is a year old, once the range gets down to about 40kms the trip display simply reports 'Fill with Fuel' rather than a specific value - I guess that's one way of doing it!
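Just for fun, here's a minimal sketch (in Python, with made-up reserve and consumption numbers - I have no idea how the real trip computer works) of the two display strategies: clamping the estimate at zero versus switching to a generic prompt below a threshold, which is presumably what my car does.

```python
RESERVE_LITRES = 3  # assumed reserve the trip computer excludes from its estimate

def range_clamped(fuel_litres, km_per_litre):
    """Estimate range from fuel above the reserve, clamped at zero."""
    usable = max(0.0, fuel_litres - RESERVE_LITRES)
    return f"Range: {round(usable * km_per_litre)} km"

def range_with_prompt(fuel_litres, km_per_litre, threshold_km=40):
    """Switch to a generic prompt once the estimate drops below a threshold."""
    usable = max(0.0, fuel_litres - RESERVE_LITRES)
    estimate = usable * km_per_litre
    return "Fill with Fuel" if estimate <= threshold_km else f"Range: {round(estimate)} km"

# With 3 litres left, the first style reports 0 km even though the engine
# still runs on the reserve; the second style sidesteps the question entirely.
print(range_clamped(3.0, 12))      # Range: 0 km
print(range_with_prompt(3.0, 12))  # Fill with Fuel
```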

Failed Test Cases/Scripts - is there another dimension we should consider?

My thought for the day - just because one test step fails, should the whole test fail?

Today I had cause to ponder the status of failed test cases. I'm sure you've been in the situation where, as a test manager, you've been asked if the system is 'ready' or 'is testing finished'. I was asked this question today for the umpteenth time. The situation was this: we are currently testing an environment which has been refreshed to the current production baseline (without data!) and we ran our regular 'sanity test' to see if everything was in the correct state. Situation normal. The sanity test set we run is a high-level set of tests that covers all the key functionality and integration points. Often we run this set and there are failures, e.g. Exchange isn't working or the search crawl hasn't run.

Today when I reported the results back to the management team, for some reason the words coming out of my mouth didn't seem to make sense. The stats were something like: 22 passed, 4 failed, and we are good to go. Short pause. We've raised 3 defects, and we are good to go. Hmmm, this is where I started thinking: so we've got failed tests and defects, but we are still good to go? The fact of the matter is, this is a risk-based decision, in that the risk of the failures and extant defects in the system causing a loss of functionality or adversely affecting the user experience is low. Still pondering, I think the flaw in the presentation of the results (as I gave them) is that all too often we consider the results of the tests somewhat independently of the outstanding defects.

My thoughts today brought me to this conclusion: each test step failure should be given a severity which aligns to the defect that should be raised as a result of the failure. Then, at the conclusion of the test (i.e. when all the test steps are executed), the calculation of pass/fail should be an aggregate of the step failures (or lack of them) - a rough sketch of what I mean is below. For each organisation this matrix and algorithm would need to be configured & tweaked, but I think it has merit :-)
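To make that concrete, here's a minimal sketch in Python of what I have in mind. The severity names and the roll-up rule (any High or Critical step failure fails the whole test, anything lower just degrades it to 'Passed with issues') are entirely my own illustration - the thresholds are exactly the sort of thing each organisation would tweak.

```python
from enum import IntEnum

class Severity(IntEnum):
    """Severity assigned to a failed step, aligned to the defect it would raise."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

def aggregate_result(step_failures):
    """Roll step failures up into an overall test result.

    step_failures is a list of Severity values, one per failed step
    (an empty list means every step passed).
    """
    if not step_failures:
        return "Passed"
    worst = max(step_failures)
    if worst >= Severity.HIGH:
        return "Failed"            # a severe step failure fails the whole test
    return "Passed with issues"    # minor failures noted, but the test stands

# Example: two cosmetic step failures don't sink the test...
print(aggregate_result([Severity.LOW, Severity.MEDIUM]))    # Passed with issues
# ...but one critical failure does.
print(aggregate_result([Severity.LOW, Severity.CRITICAL]))  # Failed
```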

I think reporting the test status in terms of failure severity will bring more meaning to test results (another dimension!). We could further enhance the reporting by assigning each test script a priority and then reporting on the number of high-priority tests with severe failures - something like the sketch below. Oh the possibilities!
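Again purely as an illustration (the scripts, priorities and results are made up), a report along those lines could be as simple as counting the tests where a high-priority script met a severe failure:

```python
# Hypothetical results: (script name, script priority, worst failure severity or None).
# Severities use the same ordering as the sketch above: Low < Medium < High < Critical.
SEVERITY_RANK = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

results = [
    ("Login",         "High", None),
    ("Search crawl",  "High", "High"),
    ("Exchange mail", "High", "Critical"),
    ("Footer links",  "Low",  "Low"),
]

def high_priority_severe(results, threshold="High"):
    """Count high-priority scripts whose worst failure is at or above the threshold."""
    return sum(
        1 for _name, priority, severity in results
        if priority == "High"
        and severity is not None
        and SEVERITY_RANK[severity] >= SEVERITY_RANK[threshold]
    )

print("High-priority tests with severe failures:", high_priority_severe(results))  # 2
```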

Sunday, November 7, 2010

QC 11 - the saga begins!

I learnt recently that a new version of HP Quality Centre is available, Version 11 - check www.hp.com. Needless to say I downloaded the new version and set about preparing a new virtual server to install it on. Sounds simple enough, just fire one up and away we go.

Oh no no no. First, I couldn't remember the administrator password for the virtual server I set up some months ago for some other new-toy adventure! This resulted in the deletion of the current machine and the birth of a new one to take its place. This was simple enough, except I forgot the lesson I should have learnt last time - note to self: a hard disk with 10 gig of storage is only enough to install the server OS (Win2008), nowhere near enough for HP ALM to be installed... or SQL Server, which I also found out part way through needed to be installed - doh. By this time I am becoming a big fan of virtualised servers and being able to just delete one and create a new, improved one in a matter of minutes.

Now onto server build three (3), and we are looking like making some progress. Yahoo, we are past the disk space issue, only to smack into another brick wall - the database doesn't want to know about this ALM product: CONNECTION REFUSED! A quick Google and I found out that by default the SQL Server install doesn't enable 'Named Pipes' or 'TCP/IP', one of which is required for the ALM installer to connect to the database! Solved that problem with a few clicks of the mouse. I thought we must be close now, but no. The final error was that the service failed to start - I believe (not confirmed) that it was JBoss that couldn't find the Java virtual machine, which stopped the whole process in its tracks. Some more Googling, and using some of the faint memories from when I studied Java programming, I managed to point the JAVA_HOME and Path variables at my freshly installed JRE 6 - restarted the services and hey presto, Quality Centre 11 roared into life.
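If I had to do it all again, I'd probably script a quick pre-flight check before running the installer. This is just an illustrative Python sketch under my own assumptions (that JAVA_HOME should point at a JRE containing bin\java.exe, and that SQL Server is listening on its default TCP port 1433) - it isn't anything HP provides.

```python
import os
import socket

def check_java_home():
    """Confirm JAVA_HOME is set and actually contains a Java executable."""
    java_home = os.environ.get("JAVA_HOME")
    if not java_home:
        return "JAVA_HOME is not set"
    java_exe = os.path.join(java_home, "bin", "java.exe")
    return "JAVA_HOME looks OK" if os.path.exists(java_exe) else f"No java.exe under {java_home}"

def check_sql_server(host="localhost", port=1433):
    """Confirm something is listening on SQL Server's default TCP port."""
    try:
        with socket.create_connection((host, port), timeout=3):
            return f"SQL Server reachable on {host}:{port}"
    except OSError:
        return f"Nothing answering on {host}:{port} - check TCP/IP is enabled"

if __name__ == "__main__":
    print(check_java_home())
    print(check_sql_server())
```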

QC 11 has a tricked-up user interface, which has a Java Swing feel to it, but I had to install a C++ security component when I installed the QC client on the workstation, which has placed a small doubt in my mind - maybe it's a .Net web interface?

Here's a picture - what do you think?



So now I'm up and running, feeling somewhat proud of myself for managing to get the installation completed given my technical shortcomings. My friends who are infrastructure engineers think that testing is repetitive and mind-numbing; well, after 3 server rebuilds, 2 SQL installs, 3 ALM installs and countless service restarts, I think it might be a case of those in glass houses shouldn't throw stones!

QC 11 - Sprinter vs CodedUI Tests

As per my earlier post I've started playing with QC, I mean ALM 11, with an upgrade looming at work. I didn't notice it when I first installed QC, but upon further investigation of 'What's New' I stumbled upon 'HP Sprinter', which is set to revolutionise manual testing (according to the blurb).

My first thought was that I've heard it all before with Business Process Testing - automated testing that can be done by non-techies and the like. However, when I watched the promo video I was immediately struck by the similarities with a demo of Microsoft's Coded UI Tests that I saw earlier in 2010.

It felt like a moment of deja vu when the presentation began; the image that triggered the memory was when the video splashed up the Mercury Tours home page and started to enter data. In the Microsoft demo they use a similar website for buying model planes, of all things!

On face value both of these 'add-ons' appear to offer the same sort of capability, in that you can record while running a manual test, and then re-use the recorded test to retest a defect or save time on data entry.

I don't claim to know either of these tools well, or to have used them other than in a trial, but I like the direction they are moving in...

More to follow.