AMA: the test analyst/engineer divide

sunjeet asks…

Hi Alister, a mentoring/coaching question… I’m keen to know your thoughts on the test “analyst” vs “engineer” trend in the world of software testing.

Context: having been a practitioner of both “sub-disciplines”, I feel that exploratory testing and automated testing require specific skill sets and mindsets. However, deliberately setting up this demarcation, I believe, pigeonholes testers (especially new graduates) into identifying themselves as “manual” or “automated” testers, restricting their development and learning. I believe there is a set of overlapping base skills that every tester should have to complement their core exploratory testing and automated testing roles.

So, my questions are:

1. Do you agree or disagree with this dichotomy, and why?
2. How do you get new testers to grow their exploratory and test engineering skills?

Thanks in advance

My response…

Question 1: Do you agree or disagree with this dichotomy, and why?

Yes, I agree with you that we shouldn’t divide testers into those identifying themselves as “automated” or “manual”.

I believe one of the main reasons this divide came about is that lots of software and systems were built without self-testing code, particularly software developed using the waterfall model.

The waterfall model created the need to have software testing performed external to the programmer/development team, during a separate “phase”.

Originally this was a large manual testing effort: writing hundreds of detailed test cases and executing them not only against newly introduced functionality, but also reusing them for manual regression testing.

But manual regression testing quickly becomes not only a chore for everyone involved (who wants to run the same tests over and over again?!), but also a limit on how often we can release change, since there are so many manual regression tests to execute each time.

To solve the regression testing problem, a new role emerged for testers who could automate these manual regression tests: ‘automated testers’, to use a terrible term. This, I believe, began the divide between test analysts, who write and sometimes execute test cases, and test engineers, who take those manual test cases and automate them.

Thankfully, the agile software development model places more emphasis on developers writing self-testing code. But the quality of a programmer’s code is only as good as the quality of their automated tests, so ‘agile testers’ in the same team perform exploratory testing on newly developed functionality to ensure that both the user-facing functionality and the automated tests are acceptable for the product.
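To make “self-testing code” concrete, here’s a minimal sketch in Python; the function, its numbers and its test cases are all hypothetical, invented just for illustration. The point is the pairing: production code ships alongside the automated tests that guard it, and those tests only protect the code as well as the cases they cover.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Tests that live alongside the code; forgetting the boundary
    cases below would give false confidence in the function above."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_full_discount(self):
        self.assertEqual(apply_discount(80.0, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)


if __name__ == "__main__":
    unittest.main()
```

An exploratory tester reviewing this suite might notice what’s missing, such as negative prices or rounding edge cases: exactly the “are the automated tests acceptable?” check described above.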

In this situation it doesn’t make sense to have test analysts who can only do manual testing or test engineers who can only do automated testing: they need to be able to do both.

Manual testing becomes human exploratory testing of new functionality: covering end-to-end user journeys and making sure we cover different real-world browsers and devices.

Automated testing becomes collaborating with developers: making sure we have enough coverage overall, and just enough end-to-end automated coverage of essential user flows.
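To show what “just enough” end-to-end automation might look like, here’s a sketch of a single automated journey through an essential user flow (logging in and landing on a dashboard), using Selenium WebDriver for Python. The URL, element IDs, credentials and expected heading are all invented for illustration; a real test would use your application’s own.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    # Walk the same journey a user would: open the page, sign in.
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("test.user@example.com")
    driver.find_element(By.ID, "password").send_keys("correct-horse-battery")
    driver.find_element(By.ID, "submit").click()

    # Assert on the outcome the user cares about, not on implementation
    # details: after logging in, they should see their dashboard.
    heading = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "h1.dashboard"))
    )
    assert "Dashboard" in heading.text
finally:
    driver.quit()
```

The design intent is to keep only a handful of these journey tests, because they’re slow and brittle compared with the unit tests the developers write; the bulk of the coverage stays lower down.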

I’m not sure what you call this type of tester: they’re not only a test engineer and they’re not only a test analyst; they’re both. Maybe Excellence Wrangler 😉

I also believe some organizations have gone too far the other way, in that they’ll only employ ‘test engineers’: people they expect to automate every single bit of testing. There’s always a need for human exploratory testing, so I think it’s a mistake not to account for it.

Question 2: How do you get new testers to grow their exploratory and test engineering skills?

In my recent answer on junior QA professional development, I wrote about developing both human exploratory and automated testing skills, so please check it out.



Author: Alister Scott

Alister is an Excellence Wrangler for Automattic.