Cool new tool – Hexawise

I just checked out a new tool called Hexawise (hexawise.com). It has certainly piqued my interest! In a nutshell, if you are testing an application that has a lot of configuration options, it gives you a simple, clean interface to load all of your config options, and then it generates test scenarios for you using the pairwise approach. If you are not familiar with this technique, check out this link: http://en.wikipedia.org/wiki/All-pairs_testing

At first glance, I thought this would be a really cool test planning tool. Then, the more I thought about it and played with it, the more I realized I could also use it to create my FitNesse test tables (inputs only). I plugged in the input fields and their options, used Hexawise to generate the test cases so that all pair combinations were covered, exported the result to Excel, and pasted it right into a FitNesse table. I think the same approach would work for creating the data fed into tests in QTP, WinRunner, etc.
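To make that concrete, here is a minimal Python sketch of the pairwise idea: a simple greedy pass that keeps a combination only if it covers a pair no earlier test covered, then prints the surviving rows in a FitNesse-style table layout. The parameters are made up, and this is nowhere near as clever as Hexawise's actual generator.

    from itertools import combinations, product

    # Made-up configuration options for an app under test.
    parameters = {
        "browser":  ["IE", "Firefox", "Safari"],
        "os":       ["Windows", "Mac"],
        "language": ["English", "Spanish", "French"],
    }
    names = list(parameters)

    # Every pair of values from two different parameters must appear
    # together in at least one test case.
    uncovered = set()
    for n1, n2 in combinations(names, 2):
        for v1, v2 in product(parameters[n1], parameters[n2]):
            uncovered.add(((n1, v1), (n2, v2)))

    # Greedy pass over the exhaustive combinations: keep a row only if it
    # covers at least one pair we have not seen yet.
    tests = []
    for row in product(*parameters.values()):
        pairs = set(combinations(zip(names, row), 2))
        if pairs & uncovered:
            tests.append(row)
            uncovered -= pairs

    # Print in a FitNesse-style table layout (inputs only), ready to paste.
    print("|" + "|".join(names) + "|")
    for row in tests:
        print("|" + "|".join(row) + "|")

For these three options the exhaustive set is 18 combinations; the greedy pass trims that down while still hitting every pair, though a real tool like Hexawise will typically squeeze out an even smaller set.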

I can see this saving me a lot of time that I would otherwise spend manually thinking through the combinations I want to test, especially when I am tasked with creating tests for functionality that has thousands of possible test cases (and you can't test every scenario).

The tool is not fancy (which I couldn't care less about) and is early in its life, but I think there is a lot of great potential here. Definitely worth checking out and following. I'll be interested to see how the tool grows.


Quick, great read on agile testing

This blog post by Simon Baker is excellent and does a great job describing the role of a tester on an agile team.  http://www.think-box.co.uk/blog/2008/05/testers-in-our-agile-team.html

STP Conference Day 2

The morning started off with a keynote from Robert Sabourin called "What is so Different about Testing in Scrum." I wouldn't say the title really fit the talk itself, but it was a good kick-off for the day. If you have seen him speak before, then you know he is high energy and enthusiastic.

His talk gave a brief overview of Scrum and then walked through some case studies of teams he has worked with that were adopting Scrum, along with their challenges and proposed solutions. It wasn't surprising to hear that the challenges centered on a lack of product ownership, testing team struggles, or both. I was hoping he would actually dive into some of the key differences in testing in Scrum, as the title of the talk suggested, but he really didn't go there. No one seems to go there…again, another topic for another time.


I went to four different talks. Two of them were on agile testing topics, and I was disappointed by both: they really didn't get into any meat on the subject. It was all high-level principles with no actual techniques or take-aways on what to do and how to do it.


One talk was on metrics. The speaker was good and had good content for the most part. However, I would argue that several of the metrics discussed don't really provide any value to an organization. For example, she mentioned tracking requirements stability as a metric. Let's say you do this and you discover that 40% of your requirements change. So what? What is that metric going to do for your team or organization? Requirements always change. Period. We all know that…we should all be prepared for it. Spending a significant amount of time (and therefore money) tracking how many requirements change doesn't really help you handle those changing requirements. Why not invest that time and money into something that will help your team work effectively in a world of changing requirements?


I ended the day with a talk by Matt Heusser called "Software Testing from the Ground Up." The published abstract didn't seem to match the talk itself. I did enjoy some of the comparisons he made between testers and other professions, though.

STP Conference – Day 1 review

I am currently in San Francisco at the STP Conference (www.stpcon.com). I am really starting to see clearly what I consider a divide in the testing world between automated and manual testing, one that I believe is negatively affecting all development methodologies. I am currently collecting my thoughts on this for an upcoming blog post…stay tuned 🙂


Yesterday was the first day of the conference and was a full-day tutorial. I spent the day at Michael Bolton's "Rapid Intro to Rapid Software Testing" class. This is usually a 3-day class that was condensed into one day. Because of that, the class was mostly lecture and not as much hands-on as I was hoping for. That said, I have condensed 2-day classes into 4-hour "overviews" myself, so I know the challenges that come with that. I think I would enjoy the full 3-day class much more.


Overall, I enjoyed the class and picked up some great tips, tricks, and ideas, particularly around exploratory testing. I have attended a few talks by James and Jon Bach in the past and have read many of their articles on exploratory testing. This workshop really drove home how the technique works and the value it adds.

Some of my key take-aways from this class were:


  1. As a tester, you should report what you see, not what you infer. Basically, the idea here is that we should report just the facts and not infer what the problem is. By doing this, we keep our credibility with the rest of the team. If you document an issue, infer what you think the problem is, and that turns out not to be the actual problem, then the developers may start to think of you like the boy who cried wolf. Just reporting the facts avoids this issue.
  2. Manual tests should discover things – not prove things. I love this, especially in the Scrum/agile context. Agile teams focus heavily on test automation (as they should). However, manual testing is still important and necessary. Manual testing allows you to "discover things." Test automation is where you prove things: you prove that the new code that was checked in didn't break the existing code, that the system is still functioning as it was before. However, your automation doesn't discover new things. You discover behaviors, unfriendly user interfaces, patterns, etc. through manual testing. Both are important.
  3. Asking the right question is more important than getting the right answer. Food for thought. This ties directly into the 3 C's of user story development: card, conversation, confirmation. During your conversation, you should be focused on asking the right questions. These questions will hopefully elicit those implied assumptions, those "what-if" scenarios no one had thought of. What you learn by asking those "right questions" can then be documented in the confirmation (the acceptance tests).
  4. Any bug that slows down testing is a bad bug. The example that brought this to light in the class was what most teams would likely call a usability issue. The application we were testing had an entry field and a button to click to calculate the answer. As a user, you could not enter the value and press the Enter key on your keyboard to calculate the answer. You had to mouse or tab to the button on the UI first and click it. This wasn't necessarily a "bug." I am sure developers, testers, and product folks could argue for hours as to whether it was a "bug" or a usability issue, and it would likely get ranked a low priority since the system essentially worked. However, this behavior really slowed down testing. If we could have just entered data and pressed Enter, we could have tested more cases quickly; having to tab or mouse for every test slowed down my rate of testing. The testability of the application you are working on should be considered. This is why I like the concepts of ATDD and TDD: if the tests are defined before development starts, then the developers can build the system in a way that lends itself to easy testing. As testers, it is absolutely our right to ask for testable applications. A tiny sketch of the fix follows this list.
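Here is a tiny, hypothetical Tkinter sketch of how cheap that kind of fix usually is: bind the Enter key to the same handler as the button so nobody has to reach for the mouse. The app in class wasn't this one, and the doubling "calculation" is just a stand-in.

    import tkinter as tk

    def calculate(event=None):
        # Stand-in for the real calculation.
        result.set(str(int(value.get() or 0) * 2))

    root = tk.Tk()
    value, result = tk.StringVar(), tk.StringVar()
    tk.Entry(root, textvariable=value).pack()
    tk.Button(root, text="Calculate", command=calculate).pack()
    tk.Label(root, textvariable=result).pack()
    root.bind("<Return>", calculate)  # Enter now triggers the calculation
    root.mainloop()

One line of binding code, and every test case gets faster to run.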


More STPCon reviews to come as the conference continues…

Book Review: Test Driven .NET Development with FitNesse

I just finished Gojko Adzic's book, Test Driven .NET Development with FitNesse. I have used FitNesse on .NET projects and I still learned a ton from this book. I highly recommend it for anyone working with FitNesse on a .NET project, even if you are already comfortable navigating the waters of FitNesse. If you have heard of FitNesse and are interested in learning more about it and playing with it, this is the perfect place to start. The book is geared more toward developers but is still an excellent read for analysts and testers as well. You can download the source code behind the examples and follow along very easily.

Thank you, Gojko, for writing this much-needed book!

Who gets to decide when to ship software?

I recently got into a lengthy debate with some folks over this very topic. As I quickly found out, most folks have a very strong opinion on this. And here I thought I was the only one 🙂. It seems as though most folks fall into three different camps:

Camp #1 – The Testers decide 

This "camp" was made up of folks on both waterfall and agile teams (which I found interesting). They felt that the team that tests the software, and is most familiar with its quality and test coverage, should get the final say on whether or not to ship it.

Camp #2 – The Project Manager decides 

This "camp" applied to people on more traditional waterfall projects. They felt that the project manager knew all of the data points for the project and, based on that data, should be responsible for determining when to release the software. However, these project managers did not have ROI responsibility.

Camp #3 – The Product Manager (a.k.a. "the business") decides

This "camp" applied to people staffed on more agile projects. On teams with a strong product owner presence, they felt that since the product owner was ultimately responsible for the ROI, the product owner should decide what and when to ship.

Camp 1 had the most votes, followed by camp 2, and camp 3 had the fewest. In some ways this surprised me, but in other ways it was what I was expecting.

I will admit that I used to be a member of Camp #1. As a matter of fact, I believe at one point I might have actually been the camp founder and director 🙂 However, over time, I have moved to Camp #3. Let me tell you why.

Testers typically do have the most insight into the quality of the software. We are intimately familiar with what is working, what isn't, and what the high-risk areas of the application are. We can tell you the defect trends, defect turnaround time, severity breakdowns, etc. Because of this, it is easy for us to feel like we should be the decision maker on whether or not to put this code into our customers' hands. However, oftentimes there are several things we do NOT know that also play into the decision of whether to ship. Perhaps there is a large business deal that will fall through if we don't ship. Perhaps the customer (or customers) wanted the new code so badly that they were willing to take the risk of using it with the list of known issues. Perhaps the business wants it shipped for demo purposes but isn't planning on having customers use it until it is cleaner. Perhaps the business is planning a beta/trial period. The list can go on and on. You get my point.

Here is what I think about the tester's role in deciding when to ship software. The tester is responsible for delivering all of the metrics around the quality of the code to the business. Deliver the facts, not your emotion. Include test coverage, defect count, defect find rate, etc. Make sure the business has a solid grasp on the current state of the code and any risks associated with shipping it. Then, do the hardest thing possible: trust that the business team will make the right decision based on those facts and any other data points they may have.
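To put a finer point on "deliver the facts," here is a minimal Python sketch of what that hand-off might look like. The field names and numbers are invented for illustration; what matters is what the report deliberately leaves out.

    from dataclasses import dataclass

    @dataclass
    class QualitySnapshot:
        tests_run: int
        tests_passed: int
        open_defects: int
        defects_found_this_week: int
        high_risk_areas: list

        def pass_rate(self) -> float:
            return self.tests_passed / self.tests_run

    # Invented numbers, purely for illustration.
    snapshot = QualitySnapshot(
        tests_run=412,
        tests_passed=397,
        open_defects=23,
        defects_found_this_week=9,
        high_risk_areas=["billing", "import/export"],
    )

    print(f"Pass rate: {snapshot.pass_rate():.1%}")
    print(f"Open defects: {snapshot.open_defects} "
          f"({snapshot.defects_found_this_week} found this week)")
    print("High-risk areas:", ", ".join(snapshot.high_risk_areas))

    # Note what is NOT here: a ship/no-ship flag. The report carries
    # the facts; the business makes the call.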

Then, I also firmly believe that the business should CLEARLY COMMUNICATE with the delivery team why they made the decision to ship (or not ship) the software.  This is particularly important if they decide to ship software that the testers don’t feel is ready to ship.  The business should let the team know that they understood the risks but made the decision to ship based on the following facts (at this point, they should clearly list out the reasons why they are shipping).

If you think a bad decision was made and don't know why, then by all means ask for an explanation!

Should we be called QA Analysts?

I have been thinking a lot about titles lately and just how much I detest them 🙂 I do love the fact that my current title (QA Architect) is something most people have never heard of, so it prompts them to ask me what I really do. It gives me a chance to explain what I actually do at work, rather than having people assume based on my title! Common titles can be very misleading about what people actually do. The title of QA Analyst or QA Engineer is a perfect example. I am going to pick on it because I had this title for a long time. (The same goes for QA Manager.) There are several reasons why I hate these titles. Here are the top three:

1.   If someone on your team has the title of QA Analyst, they typically bear the responsibility for the end quality of your product. People start to say things like, "The QA folks will test it…don't spend too much time testing your code." Or, "QA will catch it if there are bugs." Since when did the end quality of the product rest solely on the QA staff? Everyone on your team should be responsible for quality, not just the folks testing the software.

2.   QA Analyst of what, exactly? It sure is vague. Everyone just assumes it means the quality assurance analyst or engineer of the application you are building. Why are there people with the word "quality" in their title only on the development side? What about the quality of the entire development process, the quality of the designs, the quality of the sales team, the quality of support? By putting a "QA" title in development only, it seems like we are really missing the boat by not looking at quality across the organization in all roles.

3.   When you tell someone that you are a QA person, you have just stuck a huge label on yourself.  It may be a good label or a bad label depending on who you are talking to.  Some QA folks focus solely on testing.  Some QA folks are really focused on SDLC only.  By telling someone that you do “QA”, you may not be telling them what you really do at all.   

Here is what I would love to see. I would like to see people that focus on testing called "Testers." Nothing more, nothing less. If the main purpose of your job is to test the application to ensure that it is working as it should, then just call yourself a tester. People that develop code are just called developers, not "Quality Developers." Let's call a spade a spade here. If you test, then just say you are a tester. You should not be ashamed of that!

Then, ideally, you WOULD have someone in your organization overseeing quality. However, this person would be looking at quality across the board. They would be analyzing processes from the sales team all the way through development and on to the team that supports the customers. Depending on the size of your company, this could be a whole team of folks. Make sure these people know what they are doing and have been in the trenches actually "doing it" before. Also, make sure these folks are trained on the Lean principles of software development to get the biggest bang for your buck.

What is your current title?  Do you think it reflects what you really do?