Manual Testers vs. Automation Engineers – Why the Divide?

I just got back from the STP conference in San Francisco (www.stpcon.com).  After attending several talks and speaking one-on-one with folks there, I started to notice a recurring undercurrent.  There seems to be a divide in the testing community between manual testing and automated testing.  More and more, the papers, blogs, and talks I come across have one focus or the other, but never combine the two.  For example, the automated community rarely mentions manual testing or the need for it.  Conversely, the manual testing community (especially in the exploratory testing world) tends to downplay the need for and importance of automated testing.  I think this is in turn perpetuating a divide in testers' skill sets as well: teams have manual testers and separate automation engineers instead of test staff who do both.

 

In my experience, the real sweet spot in testing is a combination of manual and automated tests.  Both serve a real purpose, and both provide a lot of value.  I would never want to be on a project that was either 100% manual testing or 100% automated testing.  The projects I have been on that were the most successful and produced the highest-quality results used a combination of testing strategies.

 

Automated testing lets you get more testing done.  When a large portion of your test scripts is automated, testers have time for the exploratory testing that can't be automated.  If none of your tests are automated, you will likely spend most of your testing time validating happy-path, "good" user scenarios and never get to really dig into the product to test the more unusual scenarios that tend to hide defects.

 

The strongest testers I have worked with are the ones who can do both.  They are technical enough to put high-functioning, easily maintainable automation scripts in place, but they also spend time using their "soft" skills to test manually.
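
As a rough illustration of what "easily maintainable" can mean in practice (a minimal sketch with hypothetical page and field names, not any particular team's framework), the UI details live in one small helper class so the test itself stays short and readable:

```python
# Minimal sketch of a maintainable automated check. The page and field
# names are hypothetical; FakeDriver stands in for a real browser driver.

class FakeDriver:
    """Toy driver: records typed fields and returns a canned banner."""

    def __init__(self):
        self.fields = {}

    def type(self, field, value):
        self.fields[field] = value

    def click(self, element):
        self.last_clicked = element

    def text(self, element):
        return "Welcome, " + self.fields["username"]


class LoginPage:
    """Page object: isolates UI details so only one place changes when the UI does."""

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type("username", username)
        self.driver.type("password", password)
        self.driver.click("submit")
        return self.driver.text("banner")


def test_valid_login():
    banner = LoginPage(FakeDriver()).log_in("megan", "s3cret")
    assert banner == "Welcome, megan"
```

With a real driver (Selenium, Playwright, and so on) only the driver calls change; the test function itself stays the same, which is what keeps the suite cheap to maintain.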

 

Here is why I think we are seeing this divide in our community: 

 

  1. Most teams separate their manual testers from their automation engineers.  I think this is a recipe for disaster.  When a team only does automated testing, its scripts are often handed over by the manual testers.  Oftentimes the automation engineers code exactly what they are told without any real in-depth knowledge of the product.  Then, because the manual testers don't really understand the automation or how it works (and because testers tend to struggle with severe trust issues), they spend a large amount of time manually re-testing the same functions that are automated, just so they can be sure the automation didn't miss anything.  What a waste of time!
  2. The manual testing community doesn’t want to learn how to automate.  The idea of learning that technology is scary.
  3. The automation community thinks manual testing is boring and doesn't want to do it.  A lot of the strong automation testers I have worked with and met come from development backgrounds and really just want to write code.  They have no desire to "play" with the system to see what they can find.

 

I think manual-only testers have a chip on their shoulder about automation because they don't know how to do it and are scared of it.  Automation engineers tend to make more money than manual testers because of the required technical skill set; they are treated almost like developers in terms of rank and pay.  Oftentimes they act like they are more valuable as well.  Not good.  The skills required to be a good manual tester are hard to measure, so they are often not considered as valuable as developer skills.  That isn't fair, either.  I think it is harder to train someone to think like a good tester than it is to train them to create an automation script.

 

When testers are able to play in both spaces, doing automation AND manual testing, they build better automation scripts and spend their manual testing time on the high-risk, interesting parts of the application that lend themselves to interesting what-if scenarios.  They don't waste time manually testing what is under automation because they can trust that the automation is doing what it was built to do.  However, finding these people is hard (at least it has been for me!).

 

I hope that in the future we see more folks in the testing community talking about how manual and automated testing can work together, highlighting the strengths and limitations of both.  One is not better than the other; each serves a specific purpose and provides significant value.  I believe every project needs, and should have, both manual and automated tests going on all the time.

 


7 Responses

  1. I think what the explorers downplay is the slightly-automated GUI regression testing that vendors pass off as test automation.

    If you look at testing in terms of the full spectrum of tasks involved in imagining, designing, implementing, debugging, documenting, executing, interpreting, and reporting the results of a test, any of these tasks can be automated. GUI regression testing automates the execution and a simple level of results evaluation. It is also typically subject to a significant maintenance cost when the software changes. For those of us who want to support change, rather than change-control it, a high-maintenance test suite adds to the project's inertia rather than its velocity.

    The other problem with GUI regression testing is that it automates traditional scripted tests. Rerunning the same test over and over and over and over and over gives very little useful information. You CAN design test automation schemes that introduce much more variation, rather than snoozing over the same old scripts, but that kind of work is beyond the technical skills of many people who learned how to use the simple scripting tools.

    I think the divide is not between high technical skill and low, but between (a) valuation of scripted regression tests at the GUI (not unit) level and (b) willingness to consider technical approaches for tasks like computer-assisted test design, computer-assisted test implementation, computer-assisted result evaluation, and computer-assisted pattern analysis of results, rather than keeping our heads locked in the vise of computer-assisted test execution.

    You won’t see me talking about how to reconcile GUI test automation with exploratory testing because I believe that most regression tests worth automating at the GUI level are better automated at the unit or glass-box-integration (e.g. FIT) level and most of the interesting questions that we can address with brains or technology are not well targeted by the GUI regression tools. That’s not because of a fear of technology. It’s because of a fear of wasting my time on low-value technology.

    — Cem Kaner
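
    The kind of variation described above can be sketched in a few lines. This is a hedged example against a hypothetical `apply_discount` function, not any specific product or tool; seeded randomness keeps failing runs reproducible:

```python
import random

def apply_discount(price, percent):
    """Hypothetical function under test: reduce price by percent, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

def test_discount_with_variation(seed=0, runs=100):
    """Instead of replaying one fixed script, generate varied inputs each run."""
    rng = random.Random(seed)  # seeded, so a failing run can be reproduced
    for _ in range(runs):
        price = round(rng.uniform(0.01, 1000.0), 2)
        percent = rng.choice([0, 5, 10, 25, 50, 100])
        result = apply_discount(price, percent)
        # Invariant checks replace a single hard-coded expected value.
        assert 0 <= result <= price
        if percent == 100:
            assert result == 0
        if percent == 0:
            assert result == price
```

    Each run exercises many input combinations a fixed script would never touch, at the cost of asserting invariants rather than one exact expected value.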

  2. Cem-
    Thanks for your comments. I agree with your views on typical GUI regression tests. Most teams I have worked with that have an "automated regression suite" in place are getting zero ROI on it, for several of the reasons you mentioned. If I could pick one "level" of testing to automate, it would be unit testing every time, followed by tests at the business layer using something like FIT.
    Megan
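
    As a sketch of what a business-layer test can look like (the shipping rule and numbers here are invented for illustration; a real FIT fixture would read its table from a wiki page rather than a Python list), the idea is a decision table exercised below the GUI:

```python
def shipping_cost(order_total, is_member):
    """Hypothetical business rule: members and orders of $50 or more ship free."""
    if is_member or order_total >= 50:
        return 0.0
    return 5.99

# FIT-style decision table: each row is the inputs plus the expected output.
CASES = [
    # order_total, is_member, expected
    (49.99, False, 5.99),
    (50.00, False, 0.0),
    (10.00, True,  0.0),
]

def test_shipping_table():
    for order_total, is_member, expected in CASES:
        assert shipping_cost(order_total, is_member) == expected
```

    Because the table targets the business rule directly, it survives GUI changes that would break a recorded GUI script.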

  4. Cem,

    Thank you for bringing this up. I have been wondering for some time how to bring manual and automated testing together.

    In a structure where people do both manual and automated testing, our automation efforts failed because automation happened only when testers had free time. We always have unscheduled manual testing tasks. The end results: 1) The automation scripts are useless when we really need them, for lack of time to maintain them. 2) The number of automation scripts grows very slowly. 3) The automation scripts have maintenance problems because they were created by multiple people without any architectural approach. I understand that all of those fit the most common reasons automation efforts fail.

    On the other side of the story, I have been leading an automation team on an automation project. Most people on my team know very little about the product; we were following manual steps. Results I observed: 1) The automation scripts grow much faster. 2) The automation scripts are used in regression testing cycles. 3) The manual test steps are unclear or not aligned with the automation effort, which slows the automation down. The speed of automation can go up if we do not follow the exact manual steps.

    I guess we are getting back to old questions. Should testers who have less programming experience do both manual and automated testing? Should we divide the team into manual and automation?

    I have done both manual and automated testing. I love them both and agree that there should not be any difference between good manual testers and good automation testers.

    Having observed both sides, I think there should be at least one person on the QA team who understands both from the technical side. This person should have experience in both manual and automated testing, so they can find a way to bring the two together and increase productivity in a given environment, especially in a relatively small agile company.

    Good testers, as you mentioned, are difficult to find.

    I will continue my experiments to bring manual and automation closer together. But each environment is different, and it is difficult to find one solution that fits all environments. I believe the most important thing is to find a solution for your own environment.

    Lynn

  5. While I consider myself an automation professional, I always say that no one can claim to be an automation specialist without strong knowledge of QA methodologies and practices, simply because QA is the end user of test automation, and no one can satisfy a user's needs without knowing them very well. Another point: no one stands behind an automation developer, so the code produced must be high quality and self-monitoring, and the automation engineer _must_ verify the code produced, from unit testing to regression testing!

  6. This is the only way the testing process can benefit.

  7. Very nice, I like the way you explained it. I also wrote something along similar lines on why security testing is required for software and apps. Hope you like it: http://bit.ly/1jYfsLu
