Thursday, December 30, 2010
QC 11 - Return of the Media....
I was lucky enough to receive an 11-inch MacBook Air for Xmas care of my lovely Wife (Yay Babe!), and in the process of transferring/consolidating all of my work files from the various laptops and PCs I used to use onto my shiny little MacBook, I happened to stumble across a zip file, "T7333-15006_1a.zip", in one of the directories. It looked like an HP download, and as luck would have it, when I opened it, it turned out to be the install media for QC10 - hooray! Now, combining this little fella with the QC10 patch we located earlier, I guess the move is back on!
Sunday, December 26, 2010
Testing Utopia
1) Requirements (functional and non-functional) fully derived and understood,
2) Requirements covered by risk appropriate test cases/scripts/procedures,
3) Documented test cases/scripts/procedures executed,
4) Defects discovered either resolved or well understood and documented.
That's the beauty of utopia: it's so great, dynamic and unattainable, but well worth striving for...
I've reached the point where I have a vision of what this utopia looks like and how I might need to report how far away we are from it. It involves a data cube which contains information about requirements, tests and defects, with records attached to each of the data objects. Requirements have links to tests and defects, a priority and a risk assessment. Tests have links to requirements and defects, but also records of each time the test was executed (results), a priority and a complexity rating. Defects have a status, severity, complexity and business impact. I'm sure there are a few things I have forgotten, but they can be added. I think that I have been asked to provide a report based on all of these attributes at one time or another...
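To make that concrete, here's a minimal sketch of how those objects might hang together - all the names, attributes and scales below are hypothetical, not a real QC/ALM schema:

```python
# Hypothetical sketch of the 'testing utopia' data cube objects.
# All names, attributes and scales are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Defect:
    id: str
    status: str            # e.g. "Open", "Resolved", "Understood & documented"
    severity: int          # 1 (critical) .. 4 (cosmetic)
    complexity: int
    business_impact: str

@dataclass
class TestRun:
    executed_on: str       # execution date
    result: str            # "Passed" / "Failed"

@dataclass
class Test:
    id: str
    priority: int
    complexity: int
    requirement_ids: List[str] = field(default_factory=list)
    defect_ids: List[str] = field(default_factory=list)
    runs: List[TestRun] = field(default_factory=list)   # every execution, kept

@dataclass
class Requirement:
    id: str
    priority: int
    risk: str              # risk assessment, e.g. "High" / "Medium" / "Low"
    test_ids: List[str] = field(default_factory=list)
    defect_ids: List[str] = field(default_factory=list)
```

Slice and aggregate across those links and you could answer most of the reporting questions I've ever been asked.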
QC11 - The next chapter
Monday, November 29, 2010
Interesting bug?
On the weekend just gone, I was the designated driver, driving my sister and brother-in-law plus a few of their friends to a party in Double Bay. When I jumped in the car the fuel light was on, and the trip meter indicated we had a range of 60km. As the trip progressed the range dropped to 50, 40, 30km, as one would expect. I was a little worried given I was in an area I'm not at all familiar with, but my sister-in-law promised me it'd be fine - to quote, "with a range of 60km you're in good shape!"
Failed Test Cases/Scripts - is there another dimension we should consider?
Today I had cause to ponder the status of failed test cases. I'm sure you've been in the situation where, as a test manager, you've been asked if the system is 'ready' or 'is testing finished'. I was asked this question today for the umpteenth time. The situation was this: we are currently testing an environment which has been refreshed to the current production baseline (without data!) and we ran our regular 'sanity test' to see if everything was in the correct state. Situation normal. The sanity test set we run is a high-level set of tests that covers all the key functionality and integration points. Often we run this set of tests and there are failures, e.g. Exchange isn't working or the search crawl hasn't run. Today when I reported the results back to the management team, for some reason the words coming out of my mouth didn't seem to make sense. The stats were something like: 22 passed, 4 failed, and we are good to go. Short pause. We've raised 3 defects, and we are good to go. Hmmmm, this is where I started thinking: so we've got failed tests and defects, but we are still good to go? The fact of the matter is, this is a risk-based decision, in that the risk of the failures and extant defects in the system causing a loss of functionality or adversely affecting the user experience is low. Still pondering, I think the flaw in the presentation of the results (I gave) is that all too often we consider the results of the tests somewhat independently of the outstanding defects.
My thoughts today brought me to this conclusion: each of the test step failures should be given a severity which aligns to the defect which should be raised as a result of the failure. Then at the conclusion of the test (i.e. when all the test steps are executed) the calculation of Pass/Fail should be an aggregate of the step failures (or not). Each organisation would need to configure and tweak this matrix and algorithm, but I think it has merit :-)
I think reporting on the test status in terms of failure severity will bring more meaning to test results (another dimension!). We could further enhance the reporting by assigning each of the test scripts a priority and then reporting on the number of high-priority tests with severe failures. Oh, the possibilities!
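Here's a rough sketch of the idea; the severity scale, thresholds and aggregation rule are placeholders that each organisation would tune for itself:

```python
# Hypothetical sketch: roll step-failure severities up into a test verdict.
# The severity scale (1=critical .. 4=cosmetic) and thresholds are illustrative.
from typing import List

def test_verdict(step_failure_severities: List[int]) -> str:
    """Aggregate the severities of the failed steps into an overall status.
    An empty list means no step failed, i.e. a clean pass."""
    if not step_failure_severities:
        return "Passed"
    worst = min(step_failure_severities)      # lower number = more severe
    if worst <= 2:
        return "Failed"                       # severe failure blocks the test
    return "Passed with minor defects"        # cosmetic: flag it, don't block

def severe_failures_in_high_priority(tests: List[dict]) -> int:
    """The extra reporting dimension: high-priority tests that failed severely."""
    return sum(
        1 for t in tests
        if t["priority"] == 1 and test_verdict(t["failures"]) == "Failed"
    )

# Example: one high-priority test with a severity-2 step failure counts;
# a cosmetic failure and a clean pass do not.
tests = [
    {"priority": 1, "failures": [2, 4]},   # counted
    {"priority": 1, "failures": [4]},      # "Passed with minor defects"
    {"priority": 2, "failures": []},       # clean pass
]
print(severe_failures_in_high_priority(tests))  # -> 1
```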
Sunday, November 7, 2010
QC 11 - the saga begins!
Oh no, no, no. First, I couldn't remember the administrator password for the virtual server I set up some months ago for some other new-toy adventure! This resulted in the deletion of the current machine and the birth of a new one to take its place. This was simple enough, except I forgot the lesson that I should have learnt last time - note to self: a hard disk with 10GB of storage is only enough to install the server OS (Win2008), nowhere near enough for HP ALM to be installed... or SQL Server, which I also found out part way through needed to be installed - doh. By this time I am becoming a big fan of virtualised servers and being able to just delete one and create a new, improved one in a matter of minutes.
Now onto server build three (3), and we are looking like making some progress. Yahoo, we are past the disk space issue, only to smack into another brick wall - the database doesn't want to know about this ALM product: CONNECTION REFUSED! A quick google and I found out that by default the SQL Server install doesn't enable 'Named Pipes' or 'TCP/IP', one of which is required for the ALM installer to connect to the database! Solved that problem with a few clicks of the mouse. I thought we must be close now, but no. The final error was that the service failed to start - I believe (not confirmed) that it was JBoss that couldn't find the Java virtual machine, which stopped the whole process in its tracks. Some more googling, and using some of the faint memories from when I studied Java programming, I managed to set the JAVA_HOME and Path variables to my freshly installed JRE 6 - restarted the services and hey presto, Quality Center 11 roared into life.
QC 11 has a tricked-up user interface, which has a Java Swing feel to it, but I had to install a C++ security thingy when I installed the QC client on the workstation, which has placed a small doubt in my mind - maybe it's a .NET web interface?
Here's a picture - what do you think?
So now I'm up and running, feeling somewhat proud of myself for managing to get the installation completed given my technical shortcomings. In the same way that my friends who are infrastructure engineers think that testing is repetitive and mind-numbing, well, after 3 server rebuilds, 2 SQL installs, 3 ALM installs and countless service restarts, I think it might be a case of those in glass houses shouldn't throw stones!
QC 11 - Sprinter vs CodedUI Tests
Friday, October 29, 2010
Scott24 Mountain Bike Race
Recently I competed in the Scott24 hour mountain bike race as part of the CSC team. It was a great event and I had an awesome time. My times were by no means record-breaking, but they were competitive. The first lap took 1hr 22mins, catching my changeover partner by surprise as they were sure I'd be another 25-odd minutes! This earned me the new nickname of 'Tele Tubbie powerful little legs'. The night lap was 1hr 45mins, which I thought was good because I started the lap at 0115 in the morning :-) My final lap was 1hr 25mins, which surprised me because it felt like I stopped a lot more often than on the previous laps... It was great to finish with my Girls waiting with "Go Daddy" signs at the finish line.
Saturday, October 16, 2010
IE9 not supported on XP...
Thursday, September 9, 2010
OzAgile cancelled
Saturday, August 21, 2010
A simple example of the escalating cost of defect detection
The columns on the right-hand side of the table above record the number of defects discussed at each of the meetings (made up, of course!), and as you would expect, the number of defects discussed at each meeting decreases. However, the most interesting point is that the triage cost per defect spirals to be 37.5 times more expensive for the defects found in production...
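To make the arithmetic concrete, here's a hypothetical calculation in the same spirit; the attendee counts, rates and defect numbers below are invented (chosen so the final ratio comes out at 37.5x, as above):

```python
# Hypothetical triage-cost arithmetic: later-phase triage meetings involve
# more (and more senior) people discussing fewer defects. All numbers invented.
phases = [
    # (phase, attendees, hourly_rate, meeting_hours, defects_discussed)
    ("System test",  4,  80, 1.0, 40),
    ("UAT",          6, 100, 1.0, 15),
    ("Production",  10, 120, 1.0,  4),
]

for phase, people, rate, hours, defects in phases:
    cost_per_defect = (people * rate * hours) / defects
    print(f"{phase}: ${cost_per_defect:.2f} per defect")
# System test: $8.00, UAT: $40.00, Production: $300.00 -> 300/8 = 37.5x
```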
Sunday, July 11, 2010
Certification in software testing and a drivers license...
Recently I have been thinking about certification of software testers, for no particular reason other than it's been on my mind. I've been thinking along the lines of 'what lessons can certification learn from the evolution of the driving test?'
So it came to me very late one night: the process of gaining a certification in software testing has many similarities to gaining a driver's license. Regardless of the level of the license (Learners, Provisional or higher), it's just the beginning of greater learning (through experience and further training).
If you think back to when you first got your license, it would have involved some study (hopefully!) of the road rules and a test - maybe written, online or practical, or a combination of all three?
After you 'made the grade' you were allowed onto the open road with some varying level of supervision. Yes, the police and road rules are a form of supervision!
So, what does this have to do with software testing? Well, I've come across many a tester who has a certification and believes that puts them ahead of the pack. But compare this to the newly licensed driver: in the same way, the certification identifies you as having a 'known' level of understanding in the subject area. It doesn't mean you know it all, and it most certainly doesn't mean you shouldn't continue learning :-)
The other aspect of the driver's license analogy I thought a lot about was how the process of getting a license has evolved. I recall a conversation with a developer some time ago, when I was going through the process of getting my motorbike license. He said back in the day when he got his license, the process involved meeting up with the local policeman and demonstrating that he was able to start, stop and turn the bike. The final part of his test was conducted in a car park which had a gravel surface - just to mix it up a little! All in all, this process lasted about 30 minutes. When I compare this to my experience, they are worlds apart. The process I went through to get my learners involved a full day's training - for a quarter of which we didn't even sit on a bike, let alone ride it. Once I passed that course I was allowed to ride a little bike, at a restricted speed - I now had my learners.
So the point is? Well, the evolution of the license tests, including extended learning before the granting of a license, can be traced directly back to the correlation between the level of driver training and the frequency of accidents. I believe that sometime in the future, certification in software testing will evolve to be more practical, as we realise that being certified doesn't guarantee testing results.
It's my belief that, as an industry embracing certification, we need to evolve our thinking and take some lessons from other industries and certification processes. The lesson I'd like to see taken up is that newly certified testers should be paired with an experienced mentor to help them grow into polished professionals :-)
Saturday, June 5, 2010
What a way to celebrate all that is testing by going to watch one of the hardest and most physical types of testing there is - an international rugby test match! As the temperature dipped to a chilly 7 degrees, a group of K.J.Ross & Associates staff, partners and clients made the small trek to Bruce Stadium. It's the first time the Wallabies have played in Canberra in several years, and it was great to be a part of it.
Monday, April 12, 2010
Interesting quotes from the books I've read recently...
Monday, April 5, 2010
New(ish) Testing books... Part 1
Sunday, March 21, 2010
Test estimation hokey pokey
Tuesday, February 23, 2010
Defects are requirements?
Friday, February 19, 2010
It's the People; It's Always the People
Wednesday, February 17, 2010
A dartboard and levels of testing?
The models are simple in construction, and their purpose is to highlight the differences in coverage and that none of the testing covers 'all' of the system functionality.
The relative size of the objects in the diagram, though not explicit, gives an indication of the effort required to complete that level of testing. Relatively speaking, the full set of tests is somewhere around 50-100% larger than the regression suite, which is in turn 50-100% larger than the sanity test suite.
At this point I discounted the Excel pyramid graph and started to focus on the boxes and circles. I feel that both pictures provide a reasonable representation, but there was something about the circles that I kept coming back to.
Fast forward an hour and I'm into the weekly status meeting, where, due to our recent production release and impending production patch cycle, the "we need a regression test matrix" comment was made. We discussed that the matrix should take the form of an Excel spreadsheet, and that we needed to conduct tests across the system as well as focus on the areas of change. My eyes began to widen! Circles were the choice, and I thought the easiest way to explain where we'd go into more detailed testing was to draw smaller circles around functionality - and then, bang, I hit the bullseye, so to speak. If you picture a dartboard, it's made up of 20 wedges, each with two large sections and two bands.
So to my original circle diagram I added several wedges, as I already had the rings (test levels) present. The result is shown below. Each of the wedges represents a slice of system functionality, or a grouping of functionality.
Now each time it comes to patch testing, I can plot which areas of the system are going to be tested more thoroughly using a set of darts, along with a steady hand and intense concentration.
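And for anyone who'd rather have the spreadsheet than the dartboard, the same model can be sketched as a wedge-by-ring matrix; the functional areas and test levels below are made up for illustration:

```python
# Hypothetical dartboard-as-matrix: wedges are functional areas, rings are
# test levels. An 'X' means that area gets tested at that depth this patch.
wedges = ["Search", "Workflow", "Reporting", "Integration", "Admin"]
rings = ["Sanity", "Regression", "Full"]

# Areas touched by the patch get deeper testing; everything else gets sanity.
patch_depth = {"Search": "Full", "Workflow": "Regression"}

print(f"{'Area':<12}" + "".join(f"{r:>12}" for r in rings))
for wedge in wedges:
    depth = rings.index(patch_depth.get(wedge, "Sanity"))
    row = "".join(f"{'X' if i <= depth else '-':>12}" for i in range(len(rings)))
    print(f"{wedge:<12}{row}")
```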
Thursday, February 4, 2010
One from the archives
Have you ever been asked 'why do you write test cases?' What's your answer? There are several that spring to mind: because that's the way we test software, because the contract said we had to, because the customer demands it, for auditing, so they can be given to the testers to execute... My belief is that regardless of the reason you think you write test cases, the fundamental reason behind test case documentation is to build and capture knowledge about how a system (or function) should work.
The most important lesson I learnt rather early on in this new job was who to turn to if something didn't make sense; sometimes it was the development team and other times it was the business customer. This isn't unusual - in every project I've worked on there was a key person (or several) who was the fountain of knowledge; the difference here is that there was no specification document to scribble on! It's not that documentation doesn't exist, rather that it doesn't exist in the same format as most people are used to! In many 'Agile Shops' the specifications are living - contained in stories that grow as the system grows - but this is a story for another time.

As the months passed and I came to terms with 'Our Organisation's' development model, I found myself questioning and comparing, trying to align the processes we were using with the ones that I'd used in previous testing engagements. Other than conducting the testing of our applications, I was also attempting to teach a new tester the fundamentals of testing, trying to apply the teachings of my classically trained testing methods in a world where the fundamentals are rather different. It was through this teaching that I came to the following conclusion: the differences in our development model when compared to the previous testing models were that 1) the time to learn the system (i.e. develop test cases) was far reduced, and 2) the primary source of information for the learnings was the masters within the project.