Manual Testers vs. Automation Engineers – Why the Divide?

I just got back from the STP Conference in San Francisco.  After attending several talks and speaking one-on-one with folks there, I started to notice a recurring undercurrent.  There seems to be a divide in the testing community between manual testing and automated testing.  I am noticing more and more that the papers, blogs, and talks I come across have either one focus or the other, but never combine the two.  For example, the automated community rarely mentions manual testing or the need for it.  Conversely, the manual testing community (especially in the exploratory testing world) seems to really downplay the need for and/or importance of automated testing.  I think this in turn perpetuates a divide in the skill sets of testers as well.  Teams have manual testers and separate automation engineers instead of just having test staff who do both.


In my experience, the real sweet spot in testing is a combination of manual and automated tests.  Both serve a real purpose and both provide a lot of value.  I would never want to be on a project that was either 100% manual testing or 100% automated testing.  The projects I have been on that were the most successful and produced the highest quality results were the ones that used a combination of testing strategies.


Automated testing allows you to get more testing done.  When a large portion of your test scripts is automated, testers are freed up to do the exploratory testing that can't be put under automation.  If none of your tests are automated, you will likely spend a good portion of your testing time validating happy-path, "good" user scenarios and never get to really dig into the product to test the more unusual scenarios that tend to have hidden defects in them.


The strongest testers I have worked with are the ones who can do both.  They are technical enough to put high-functioning, easily maintainable automation scripts in place, but they also spend time using their "soft" skills to test manually.


Here is why I think we are seeing this divide in our community: 


  1. Most teams separate their manual testers from their automation engineers.  I think this is a recipe for disaster.  When you have a team that only does automated testing, their scripts are often handed to them by the manual testers.  Oftentimes, the automation engineers code exactly what they are told without any real in-depth knowledge of the product.  Then, because the manual testers don't really understand the automation or how it works (and because testers tend to struggle with severe trust issues), they spend a large amount of time manually re-testing the same functions that are automated, just so they can be sure it really works and that the automation didn't miss anything.  What a waste of time!
  2. The manual testing community doesn’t want to learn how to automate.  The idea of learning that technology is scary.
  3. The automation community thinks manual testing is boring and doesn’t want to do it.  A lot of the strong automation testers I have worked with and met come from development backgrounds and really just want to write code.  They have no desire to really “play” with the system to see what they can find.


I think manual-only testers have a chip on their shoulder about automation because they don't know how to do it and are scared of it.  Automation engineers tend to make more money than manual testers because of the required technical skill set – they are treated almost like developers in terms of rank and pay.  Oftentimes, they act like they are more valuable as well.  Not good.  The skills required to be a good manual tester are hard to measure, so they are often not considered as valuable as developer skills.  That isn't fair, either.  I think it is harder to train someone to think like a good tester than it is to train them to create an automation script.


When testers are able to play in both spaces, doing automation AND manual testing, they build better automation scripts and spend their manual testing time focusing on the high-risk, interesting parts of the application that lend themselves to interesting what-if scenarios.  They won't waste time manually testing what is under automation because they can trust the automation to do what it was built to do.  However, finding these people is hard (at least it has been for me!).


I hope that in the future we see more folks in the testing community talking about how manual and automated testing can work together, highlighting the strengths and limitations of both.  One is not better than the other; each serves a specific purpose and each provides significant value.  I believe every project needs, and should have, manual and automated tests going on all the time.



STP Conference Day 2

The morning started off with a keynote from Robert Sabourin called "What is so Different about Testing in Scrum."  I wouldn't say the title really fit the talk itself, but it was a good kick-off for the day.  If you have seen him speak before, then you know he is high energy and enthusiastic.

His talk essentially gave a brief overview of Scrum and then walked through some case studies of teams he has worked with that were adopting Scrum, along with some of their challenges and proposed solutions.  It wasn't surprising to hear that the challenges centered on a lack of product ownership and/or testing-team issues.  I was hoping he would actually dive into some of the key differences in testing in Scrum, like the title of the talk suggested, but he really didn't go there.  No one seems to go there…again, another topic for another time.


I went to four different talks.  Two of them were on agile-testing topics.  I was disappointed by both in that they really didn't get into any meat.  It was all high-level principles with no actual techniques or takeaways on what to do and how to do it.


One talk was on metrics.  The speaker was good and had good content for the most part.  However, I would argue that several of the metrics discussed really don't provide any value to an organization.  For example, she mentioned tracking requirements stability as a metric.  Let's say you do this and you discover that 40% of your requirements change.  So what?  What is that metric going to do for your team or organization?  Requirements always change.  Period.  We all know that; we should all be prepared for it.  Spending a significant amount of time (and therefore money) tracking how many requirements change doesn't actually help you handle those changing requirements.  Why not invest that time and money in something that will help your team work effectively in a world of changing requirements?


I ended the day with a talk by Matt Heusser called “Software Testing from the Ground Up.”  The published excerpt on the talk didn’t seem to match the talk itself.  I did enjoy some of the comparisons he made between testers and other professions, though. 

STP Conference – Day 1 review

I am currently in San Francisco at the STP Conference.  I am really starting to clearly see what I consider a divide in the testing world between automated and manual testing, one that I believe is negatively affecting all development methodologies.  I am currently collecting my thoughts on this for an upcoming blog post…stay tuned :)


Yesterday was the first day of the conference, a full-day tutorial.  I spent the day in Michael Bolton's "Rapid Intro to Rapid Software Testing" class.  This is usually a 3-day class that was condensed into one day.  Because of that, the class was a lot of lecture and not as much hands-on work as I was hoping for.  That said, I have condensed 2-day classes into 4-hour "overviews" myself, so I know the challenges involved.  I think I would really enjoy the full 3-day class much more.


Overall, I enjoyed the class and picked up some great tips, tricks, and ideas, particularly around exploratory testing.  I have attended a few talks by James and Jon Bach in the past and have read many of their articles on exploratory testing.  This workshop really drove home how the technique works and the value it adds.

Some of my key take-aways from this class were:


  1. As a tester, you should report what you see, not what you infer.  Basically, the idea here is that we should report just the facts and not infer what the problem is.  By doing this, we are able to keep our credibility with the rest of the team.  If you document an issue, infer what you think the problem is, and that isn't the actual problem, then the developers may start to think of you as the boy who cried wolf.  Just reporting the facts avoids this issue.
  2. Manual tests should discover things – not prove things.  I love this – especially in the Scrum/Agile context.  Agile teams focus heavily on test automation (as they should).  However, manual testing is still important and necessary.  Manual testing allows you to “discover things.”  Test automation is where you prove things – you prove that the new code that was checked in didn’t break the existing code – you prove that the system is still functioning as it was before.  However, your automation doesn’t discover new things.  You discover behaviors, unfriendly user interfaces, patterns, etc by manual testing.  Both are important.
  3. Asking the right question is more important than getting the right answer.  Food for thought.  This ties directly into the three Cs of user story development: card, conversation, confirmation.  During your conversation, you should be focused on asking the right questions.  These questions will hopefully elicit the implied assumptions, those "what-if" scenarios no one had thought of.  What you learn in asking those "right questions" can then be documented in the confirmation (the acceptance tests).
  4. Any bug that slows down testing is a bad bug.  The example that brought this to light in the class was what most teams would likely call a usability issue.  The application we were testing had an entry field and a button to click to calculate the answer.  As a user, you could not enter the value and press the Enter key on your keyboard to calculate the answer.  You had to either mouse or tab to the button on the UI first to click it.  This wasn't necessarily a "bug."  I am sure developers, testers, and product folks could argue for hours as to whether it was a "bug" or a usability issue.  It would likely get ranked a low priority since the system essentially worked.  However, this behavior really slowed down testing.  If we could have just entered data and pressed Enter, we could have tested more cases quickly.  Having to tab or mouse for every test slowed down my rate of testing.  The testability of the application you are working on should be considered.  This is why I like the concepts of ATDD and TDD.  If the tests are defined before development starts, then the developers can build the system in such a way that lends itself to easy testing.  As testers, it is absolutely our right to ask for testable applications.
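Point 2 above (manual tests discover things, automated tests prove things) can be sketched as a minimal regression-style check.  This is only an illustrative sketch; `calculate()` is a hypothetical stand-in for real production code:

```python
def calculate(value):
    # Hypothetical "production" code under test: doubles its input.
    return value * 2

def test_existing_behavior_still_holds():
    # Automation in its "proving" role: these cases encode behavior we
    # already trust, and the suite re-proves it on every check-in.
    # It will never *discover* a new usability quirk or surprising
    # behavior; that is what manual, exploratory testing is for.
    known_cases = {0: 0, 1: 2, 5: 10, -3: -6}
    for given, expected in known_cases.items():
        assert calculate(given) == expected

test_existing_behavior_still_holds()
print("existing behavior still holds")
```

Run on every check-in, a suite like this proves the new code didn't break the old, which frees the manual testers to spend their time discovering.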


More STPCon reviews to come as the conference continues…

What makes a ScrumMaster a “good” ScrumMaster?

I was recently asked to help write a job description for a new ScrumMaster (SM) position at a company.  It got me thinking about what qualities and skills make up a good SM.  I thought back on teams I have been on that had both good and not-so-good SMs.  Most of the skills that come to mind when I think of a good SM are the "soft" skills.  Here are the ones I came up with:

Intimately familiar with Scrum.  Since the SM owns the process, they must be intimately familiar with how Scrum works and be able to guide a team to find solutions to help them succeed.  This takes experience.  Most SMs come fresh out of CSM training and jump into the role.  The challenge here is that they often do not have an experienced SM available to mentor and coach them.  If at all possible, hire someone with proven experience or get help from an experienced coach. 

Excellent facilitator.  SMs organize and facilitate several of the Scrum meetings.  This requires strong organizational and facilitation skills.  If you are a SM today and are looking for help in this area, I highly recommend the Collaboration Explained class (see previous posts on this class).  Facilitation is very different from controlling meetings.

Highly available to the team.  If your SM has a full time role elsewhere in the company and just runs in for the daily standup, they are not going to be very helpful to the team.  They need to manage impediments and be available as needed to the team to work through any roadblocks.  The SM should always know the status of the team and how things are going. 

Be quiet.  This is a tough one.  Oftentimes, the skills that make a person a good SM are the same skills that make it hard for them to keep their mouth shut.  I have seen several SMs who can't help themselves and start telling team members what tasks to do in the daily standup, or start questioning the team's estimates during estimation sessions.  As a SM, you need to know when to keep your mouth shut and when to step in.  You are not a traditional project manager, and you should not operate in a command-and-control style.  You are a "servant leader."

All About the Team.  As a SM, you are always focused on the team, not on yourself.  You need to thrive on watching the team succeed and do whatever it takes to help them do so.  If you need a lot of personal recognition and praise, then a SM role may not be the best job for you.

Book Review: Test Driven .Net Development with FitNesse

I just finished Gojko Adzic's book, Test Driven .Net Development with FitNesse.  I have used FitNesse on .Net projects and still learned a ton from this book.  I highly recommend it for anyone working with FitNesse on a .Net project, even if you are already comfortable navigating the waters of FitNesse.  If you have heard of FitNesse and are interested in learning more about it and playing with it, this is the perfect place to start.  The book is geared more towards developers but is still an excellent read for analysts and testers as well.  You can download the source code behind the examples and follow along very easily.

Thank you, Gojko, for writing this much needed book!

Notes from a Recovering Crackberry Addict

I was out to lunch the other day with some friends I hadn't seen in a long time.  I was really looking forward to catching up and seeing them.  However, by the time the check came, I was half irritated with them both and ready to leave.  Why?  Because the entire lunch was spent with them half listening and checking their Crackberries every 2 minutes.

In the spirit of full disclosure, I must first admit that I WAS one of those people about a year ago.  My Blackberry was strapped to my side 24/7.  It was the last thing I looked at before I went to sleep and the first thing I looked at when I woke up in the morning.  Every time it vibrated to let me know an email had come in, I stopped what I was doing to check it out.  I just couldn't help myself.  I was addicted.  Surely every email that came through (and there were well over 100 a day) needed my immediate attention or something awful was going to happen.  I just knew it.

When I changed jobs last year, my Blackberry was left behind.  I almost bought a new one but decided on a simple cell phone instead with no text messaging package included.  My last day at work, I handed over my Blackberry and immediately felt 20 pounds lighter.  I was free.   

However, later that night, I swear I began to get nervous and jumpy.  I was out with friends and they all had Blackberrys they were diligently checking every few minutes, and all I had was my little cell phone that only made and received calls.  I started to panic.  What if someone had sent me an email?  What if they all knew something I didn't?  How would I know what was going on?  Maybe I needed to go and trade in my cell phone for a Blackberry or Treo the next day.

I decided to see if I could stay Crackberry free for 1 month.  If the world fell apart during that time because of an email I didn’t answer immediately, then, I would go back and get fully connected again. As the weeks went by, I slowly started to calm down.  I was shocked to find that the world (and my job) managed to do just fine without me getting emails immediately all day long.  If there truly was an emergency at work, my co-workers knew where to find me or they could actually pick up the phone and call me if needed.  The funny thing is, they never did.  My new cell phone remained quiet. 

So, I am now the worst thing possible…I am a reformed Blackberry user (kinda like a smoker that finally quit).  I get irritated when I see people madly scrolling with their thumb and typing furiously with both thumbs.  (I will say that whoever invented the Blackberry is clearly not a woman…you try typing on that with long nails…it takes some talent!)   I challenge all you Blackberry users out there to actually turn it off for a few hours one day and see what happens.  Or if you are having a nice meal out with friends or family, leave it in the car and really participate in the analog conversation (nothing digital for an entire meal!!).   I can almost guarantee that nothing catastrophic will happen. 

If you are an employer that hands out Blackberrys to your staff, reconsider it.  What is wrong with the old-fashioned pager?  I used to carry one of those years ago for work emergencies.  And, I can honestly say, it only went off in emergencies.  People will think twice before paging you at 9pm at home, but there is no reason not to send an email to someone at all hours of the day.  And, if you are like most Blackberry users, you will feel obliged to stop what you are doing and check that email no matter what time of day it is.

Now, I realize that there are some of you out there who can carry a Blackberry and not be addicted to it.  You can actually use it in moderation.  Well, good for you.  I was not one of those people (and neither are most of my friends who carry them).  So, I had to make a complete break and go cold turkey.  And, it was the best thing that ever happened to me.  My husband also agrees :)

Who gets to decide when to ship software?

I recently got into a lengthy debate with some folks over this very topic.  As I quickly found out, most folks have a very strong opinion on it.  And here I thought I was the only one :)  It seems as though most folks fall into three different camps:

Camp #1 – The Testers decide 

This “camp” was made up of folks on both waterfall and agile teams (which I found interesting).  They felt that the people who test the software and are most familiar with its quality and test coverage should get the final say on whether or not to ship it.

Camp #2 – The Project Manager decides 

This “camp” applied to people on more traditional waterfall projects.  They felt that the project manager knew all of the data points on the project and, based on that data, should be responsible for determining when to release the software.  However, these project managers did not have ROI responsibility.

Camp #3 – The Product Manager (aka: ”the business”) decides 

This “camp” applied to people staffed on more agile projects.  On teams with a strong product-owner presence, they felt that since the product owner was ultimately responsible for the ROI, the product owner should decide what and when to ship.

Camp #1 had the most votes, followed by Camp #2, with Camp #3 having the fewest.  In some ways this surprised me, but in other ways it was what I was expecting.

I will admit that I used to be a member of Camp #1.  As a matter of fact, I believe at one point I might have actually been the camp founder and director :)  However, over time, I have moved to Camp #3.  Let me tell you why.

Testers typically do have the most insight into the quality of the software.  We are intimately familiar with what is working, what isn't, and what the high-risk areas of the application are.  We can tell you the defect trends, defect turnaround time, severity breakdowns, etc.  Because of this, it is easy for us to feel like we should be the decision makers as to whether or not to put the code into our customers' hands.  However, oftentimes there are several things we do NOT know that also play into the decision of whether to ship or not.  Perhaps there is a large business deal that will fall through if we don't ship.  Perhaps the customer (or customers) wanted the new code so badly that they were willing to take the risk of using it with the list of known issues.  Perhaps the business wants it shipped for demo purposes but isn't planning on having customers use it until it is cleaner.  Perhaps the business is planning a beta or trial period.  The list can go on and on.  You get my point.

Here is what I think about the testers' role in deciding when to ship software.  The tester is responsible for delivering all of the metrics around the quality of the code to the business.  Deliver the facts, not your emotion.  Include test coverage, defect count, defect find rate, etc.  Make sure the business has a solid grasp on the current state of the code and any risks associated with shipping it.  Then, do the hardest thing possible…trust that the business team will make the right decision based on those facts and any other data points they may have.
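As a sketch of what "deliver the facts, not your emotion" could look like, here is a hypothetical quality summary.  The defect records, field names, and numbers below are all invented for illustration:

```python
from collections import Counter

# Hypothetical defect records a tester might hand to the business.
defects = [
    {"id": 1, "severity": "high",   "open": True},
    {"id": 2, "severity": "low",    "open": False},
    {"id": 3, "severity": "medium", "open": True},
    {"id": 4, "severity": "low",    "open": True},
]

def quality_summary(defects, tests_run, tests_passed):
    """Just the facts: counts, breakdowns, and rates -- no opinion."""
    open_defects = [d for d in defects if d["open"]]
    return {
        "defect_count": len(defects),
        "open_defects": len(open_defects),
        "severity_breakdown": dict(Counter(d["severity"] for d in defects)),
        "pass_rate": round(tests_passed / tests_run, 2),
    }

summary = quality_summary(defects, tests_run=200, tests_passed=188)
print(summary)
```

A report like this gives the business the state of the code and its risks; the ship/no-ship call stays with them.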

Then, I also firmly believe that the business should CLEARLY COMMUNICATE with the delivery team why they made the decision to ship (or not ship) the software.  This is particularly important if they decide to ship software that the testers don’t feel is ready to ship.  The business should let the team know that they understood the risks but made the decision to ship based on the following facts (at this point, they should clearly list out the reasons why they are shipping).

 If you think a bad decision was made and don’t know why, then by all means ask for an explanation!