Manual Testers vs. Automation Engineers – Why the Divide?

I just got back from the STP conference in San Francisco (www.stpcon.com).  After attending several talks and speaking one-on-one with folks there, I started to notice a recurring undercurrent: there seems to be a divide in the testing community between manual testing and automated testing.  More and more, the papers, blogs, and talks I come across have one focus or the other, but never combine the two.  The automated community rarely mentions manual testing or the need for it.  Conversely, the manual testing community (especially in the exploratory testing world) really downplays the need for, and importance of, automated testing.  I think this is in turn perpetuating a divide in testers' skill sets: teams have manual testers and separate automation engineers instead of test staff who do both.

In my experience, the real sweet spot in testing is a combination of both manual and automated tests.  Each serves a real purpose and each provides a lot of value.  I would never want to be on a project that was either 100% manual testing or 100% automated testing.  The projects I have been on that were the most successful, and that produced the highest-quality results, were the ones that used a combination of testing strategies.

Automated testing lets you get more testing done.  When a large portion of your test scripts is automated, testers have time freed up for the exploratory testing that can't be put under automation.  If none of your tests are automated, you will likely spend a good portion of your testing time validating happy-path, "good" user scenarios and never get to really dig into the product and test the more unusual scenarios that tend to have hidden defects in them.

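As a concrete illustration, here is a minimal sketch of the kind of stable, happy-path check that is worth putting under automation so testers don't have to re-run it by hand on every build.  The shopping-cart code and names are hypothetical stand-ins for a real application, not anything from the conference:

```python
# test_cart_happy_path.py -- hypothetical happy-path checks.
# Run with: pytest test_cart_happy_path.py

class Cart:
    """Stand-in for the application under test."""

    def __init__(self):
        self.items = []

    def add(self, name, price, qty=1):
        self.items.append((name, price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self.items)


def test_single_item_total():
    # The basic "good" user scenario: one item, total equals its price.
    cart = Cart()
    cart.add("widget", 9.99)
    assert cart.total() == 9.99


def test_multiple_items_total():
    # Another happy path: totals accumulate across items.
    cart = Cart()
    cart.add("widget", 9.99, qty=2)
    cart.add("gadget", 5.00)
    assert cart.total() == 9.99 * 2 + 5.00
```

Once checks like these run automatically on every build, the humans are free to chase the weird what-if scenarios instead of re-verifying the obvious ones.
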
The strongest testers I have worked with are the ones who can do both.  They are technical enough to put high-functioning, easily maintainable automation scripts in place, but they also spend time using their "soft" skills to test manually.

Here is why I think we are seeing this divide in our community:

  1. Most teams separate their manual testers from their automation engineers.  I think this is a recipe for disaster.  When a team only does automated testing, its scripts are often handed to it by the manual testers.  Often, the automation engineers code exactly what they are told without any real in-depth knowledge of the product.  Then, because the manual testers don't really understand the automation or how it works (and because testers tend to struggle with severe trust issues), they spend a large amount of time manually re-testing the same functions that are already automated, just to be sure the automation didn't miss anything.  What a waste of time!
  2. The manual testing community doesn't want to learn how to automate.  The idea of learning that technology is scary.
  3. The automation community thinks manual testing is boring and doesn't want to do it.  Many of the strong automation testers I have worked with and met come from development backgrounds and really just want to write code.  They have no desire to "play" with the system to see what they can find.

I think manual-only testers have a chip on their shoulder about automation because they don't know how to do it and are scared of it.  Automation engineers tend to make more money than manual testers because of the required technical skill set; they are treated almost like developers in terms of rank and pay.  Often, they act like they are more valuable as well.  Not good.  The skills required to be a good manual tester are hard to measure, so they are often not considered as valuable as developer skills.  That isn't fair, either.  I think it is harder to train someone to think like a good tester than it is to train them to create an automation script.

When testers are able to play in both spaces and do automation AND manual testing, they build better automation scripts and spend their manual testing time focusing on the high-risk, interesting parts of the application that lend themselves to interesting what-if scenarios.  They won't waste time manually testing what is already under automation, because they can trust that the automation is doing what it was built to do.  However, finding these people is hard (at least it has been for me!).

I hope that in the future we see more folks in the testing community talking about how manual and automated testing can work together, highlighting the strengths and limitations of both.  One is not better than the other; each serves a specific purpose and each provides significant value.  I believe every project needs, and should have, manual and automated tests going on all the time.

STP Conference Day 2

The morning started off with a keynote from Robert Sabourin called "What is so Different about Testing in Scrum."  I wouldn't say the title really fit the talk itself, but it was a good kick-off for the day.  If you have seen him speak before, then you know he is high-energy and enthusiastic.

His talk gave a brief overview of Scrum and then walked through case studies of teams he has worked with that were adopting Scrum, covering some of their challenges and proposed solutions.  It wasn't surprising to hear that the challenges centered on lack of product ownership and/or testing-team issues.  I was hoping he would actually dive into some of the key differences of testing in Scrum, as the title of the talk suggested, but he really didn't go there.  No one seems to go there... again, another topic for another time.

I went to four different talks.  Two of them were on topics around agile testing, and I was disappointed by both: they really didn't get into any meat on the topics.  It was all high-level principles with no actual techniques or takeaways on what to do and how to do it.

One talk was on metrics.  The speaker was good and had good content for the most part.  However, I would argue that several of the metrics discussed don't provide any real value to an organization.  For example, she mentioned tracking requirements stability as a metric.  Let's say you do this and you discover that 40% of your requirements change.  So what?  What is that metric going to do for your team or organization?  Requirements always change.  Period.  We all know that, and we should all be prepared for it.  Spending a significant amount of time (and therefore money) tracking how many requirements change doesn't help you handle those changing requirements.  Why not invest that time and money into something that will help your team work effectively in a world of changing requirements?

I ended the day with a talk by Matt Heusser called "Software Testing from the Ground Up."  The published excerpt for the talk didn't seem to match the talk itself, but I did enjoy some of the comparisons he made between testers and other professions.

STP Conference – Day 1 review

I am currently in San Francisco at the STP Conference (www.stpcon.com).  I am really starting to see clearly what I consider a divide in the testing world between automated and manual testing, one that I believe is negatively affecting all development methodologies.  I am currently collecting my thoughts on this for an upcoming blog post... stay tuned. :)

Yesterday was the first day of the conference and was a full-day tutorial.  I spent the day in Michael Bolton's "Rapid Intro to Rapid Software Testing" class.  This is usually a three-day class that was condensed into one day, so it was mostly lecture and not as much hands-on work as I was hoping for.  That said, I have condensed two-day classes into four-hour "overviews" myself, so I know the challenges involved.  I think I would enjoy the full three-day class much more.

Overall, I enjoyed the class and picked up some great tips, tricks, and ideas, particularly around exploratory testing.  I have attended a few talks by James and Jon Bach in the past and have read many of their articles on exploratory testing.  This workshop really drove home how the technique works and the value it adds.

Some of my key takeaways from this class were:

  1. As a tester, you should report what you see, not what you infer.  The idea here is that we should report just the facts and not guess at what the underlying problem is.  By doing this, we keep our credibility with the rest of the team.  If you document an issue, infer what you think the cause is, and that turns out not to be the actual cause, the developers may start to think of you like the boy who cried wolf.  Reporting just the facts avoids this.
  2. Manual tests should discover things, not prove things.  I love this, especially in the Scrum/Agile context.  Agile teams focus heavily on test automation (as they should), but manual testing is still important and necessary.  Manual testing lets you "discover things."  Test automation is where you prove things: you prove that the new code that was checked in didn't break the existing code, that the system still functions as it did before (see the first sketch after this list).  Your automation doesn't discover new things.  You discover behaviors, unfriendly user interfaces, patterns, and so on through manual testing.  Both are important.
  3. Asking the right question is more important than getting the right answer.  Food for thought.  This ties directly into the 3 C's of user story development: card, conversation, confirmation.  During the conversation, you should focus on asking the right questions.  These questions will hopefully elicit the implied assumptions, the "what-if" scenarios no one had thought of.  What you learn by asking those "right questions" can then be documented in the confirmation (the acceptance tests).
  4. Any bug that slows down testing is a bad bug.  The example that brought this to light in the class was what most teams would likely call a usability issue.  The application we were testing had an entry field and a button to click to calculate the answer.  As a user, you could not type a value and press the Enter key on your keyboard to calculate the answer; you had to mouse or tab to the button on the UI and click it.  This wasn't necessarily a "bug."  I am sure developers, testers, and product folks could argue for hours over whether it was a bug or a usability issue, and it would likely get ranked low priority since the system essentially worked.  However, it really slowed down testing.  If we could have just entered data and pressed Enter, we could have tested more cases quickly; having to tab or mouse for every test slowed my rate of testing (see the second sketch after this list).  The testability of the application you are working on should be considered.  This is why I like the concepts of ATDD and TDD: if the tests are defined before development starts, developers can build the system in a way that lends itself to easy testing.  As testers, it is absolutely our right to ask for testable applications.
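
On point 2 above, here is a minimal sketch of what "proving things" with automation can look like: a regression check that re-asserts known-good behavior on every run.  The calculator function and its known-good pairs are hypothetical stand-ins, not anything from the class:

```python
# test_regression.py -- hypothetical regression checks that *prove*
# existing behavior still holds after each check-in.
# Run with: pytest test_regression.py
import pytest


def calculate(expression):
    """Stand-in for the production code under test."""
    # eval() is used only to keep this illustration short.
    return eval(expression, {"__builtins__": {}})


# Known-good input/output pairs captured from earlier releases.
KNOWN_GOOD = [
    ("2 + 2", 4),
    ("10 / 4", 2.5),
    ("3 * (1 + 2)", 9),
]


@pytest.mark.parametrize("expression, expected", KNOWN_GOOD)
def test_existing_behavior_still_holds(expression, expected):
    # Proving, not discovering: each pair must work exactly as before.
    assert calculate(expression) == expected
```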
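
And on point 4, this is roughly what the difference looks like from a UI-automation script.  This is a hedged sketch: the URL, element IDs, and page behavior are all invented for illustration, and it assumes Selenium with a local Chrome driver.  The point is that when developers bind Enter to the calculate action, every test case, manual or automated, gets one step cheaper:

```python
# enter_key_testability.py -- hypothetical Selenium sketch; the URL
# and element IDs are invented.  Requires: pip install selenium.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


def run_case_with_enter(driver, value):
    # Testable UI: type the value and press Enter -- one interaction.
    field = driver.find_element(By.ID, "amount")
    field.clear()
    field.send_keys(value, Keys.RETURN)
    return driver.find_element(By.ID, "result").text


def run_case_with_button(driver, value):
    # The UI we had in class: every case needs an extra hop to a button.
    field = driver.find_element(By.ID, "amount")
    field.clear()
    field.send_keys(value)
    driver.find_element(By.ID, "calculate-btn").click()
    return driver.find_element(By.ID, "result").text


if __name__ == "__main__":
    driver = webdriver.Chrome()
    driver.get("http://localhost:8000/calculator")  # hypothetical app
    for value in ["1", "42", "-7", "9999999999"]:
        print(run_case_with_enter(driver, value))  # the fast path
    driver.quit()
```

One extra interaction per case sounds trivial, but multiplied across hundreds of test iterations it is exactly the drag on testing the class was describing.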

More STPCon reviews to come as the conference continues...