AMA: What sets exceptional QA testers apart?

Dayana asks…

I wondered if you could tell me what sets exceptional QA testers apart? Not just personality or work ethic traits, but specific skills and programming knowledge that will be very valuable to a team?

My response…

I think exceptional QA testers, as I explained recently, aren’t people who are exceptional at just one thing (e.g. testing), but people who are good at lots of things.

So an exceptional QA tester, in my opinion, will typically have (at least good) skills in the following areas:

  1. Skills in human exploratory testing: an exceptional QA tester has the ability to effectively find the most important bugs fast. Whilst this skill can be developed, I have found it’s mostly a mindset.
  2. Skills in developing automated tests: an exceptional QA tester will have the programming skills needed to develop automated tests, and I’d recommend these typically match the programming language(s) that the programmers in your organization use: for example, skills in automated testing in .NET if your company primarily uses Microsoft .NET. That said, someone with strong programming skills in one language (e.g. Ruby) should be able to transfer those skills to another (e.g. Python). There’s a small sketch of what I mean after this list.
  3. Knowledge/Experience in your business domain: an exceptional QA tester will fully understand your business domain and keep this context in mind whilst testing a product and raising issues. An exceptional tester is always testing your system – just as I am testing WordPress.com by publishing this post.
  4. An empathetic mindset: we design and develop software for real people and real life. An exceptional QA tester will test with this in mind.
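
To make point 2 a little more concrete, below is a minimal sketch of the kind of automated test I mean, written in Python with Selenium WebDriver purely as an illustration. The URL, element locators and the search flow are all invented; the point is that the test is ordinary code, written in whatever language your developers already use.

```python
# A minimal automated end-to-end test sketch (Python + Selenium WebDriver).
# The URL, locators and expected behaviour below are invented for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_visitor_can_search_from_home_page():
    driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver set-up
    try:
        # Given a visitor on the home page
        driver.get("https://example.com")

        # When they search for a product
        driver.find_element(By.NAME, "q").send_keys("red pens")
        driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

        # Then they see at least one result
        results = driver.find_elements(By.CSS_SELECTOR, ".search-result")
        assert len(results) > 0, "expected at least one search result"
    finally:
        driver.quit()
```

The specific framework matters far less than being comfortable enough in your team’s language to write, read and maintain tests like this alongside the developers.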

AMA: the future of QA roles

Lroy asks…

What, according to you, is the future of QA roles going to look like? I currently work as a QA in a mature agile team where the devs are responsible enough to practice TDD and write automated tests while I pair with them; I pair with them on test strategy, performance testing and a bit of security testing. It is definitely interesting to see the shift from how QA used to be perceived to how it is now. I understand that it is not like this in every company, but how do you see this role panning out? Thanks. PS: Thank you for taking the time to do this AMA; it is really interesting to read your responses. I have enjoyed reading your blog for a couple of years now.


AMA: the test analyst engineer divide

sunjeet asks…

Hi Alister, A mentoring/coaching question…. Keen to know your thoughts on the test “analyst” vs “engineer” trend in the world of software testing. Context –> Having been a practitioner of both “sub disciplines” , I feel that exploratory testing and automated testing require some specific set of skills and mind-set , however deliberately setting this demarcation/partitions, I believe, pigeon-holes testers (especially new graduates ) into identifying themselves as a “manual” or “automated” tester and restricting their development and learning. I believe there are set of overlapping base skills which every tester should have to compliment their core exploratory testing and automated testing roles. So, my questions are – 1. Do you agree or disagree with this dichotomy , and why ? 2. How do you get new testers to grow their exploratory and test engineering skills ? Thanks in advance
Hi Alister, a mentoring/coaching question… Keen to know your thoughts on the test “analyst” vs “engineer” trend in the world of software testing. Context: having been a practitioner of both “sub-disciplines”, I feel that exploratory testing and automated testing require specific sets of skills and mindsets; however, deliberately setting up this demarcation/partition, I believe, pigeon-holes testers (especially new graduates) into identifying themselves as a “manual” or an “automated” tester and restricts their development and learning. I believe there is a set of overlapping base skills which every tester should have to complement their core exploratory testing and automated testing roles. So, my questions are: 1. Do you agree or disagree with this dichotomy, and why? 2. How do you get new testers to grow their exploratory and test engineering skills? Thanks in advance.

AMA: balance between testing activities

QA asks…

At my workplace I can see testers are caught between scripted testing, exploratory testing and automated testing. Some teams have dedicated resources for each type of testing, whereas others do a bit of everything. When it comes to priority, automated testing takes a back seat, scripted testing is rushed through (skipping detailed documentation) and exploratory testing, aka “just have a play around to see if you can break it”, takes the driver’s seat. What, according to you, is the right balance? How do you structure your tests so you can do proper testing effectively?

My response…

Finding balance is always a challenge. As I explained recently, I have found more and more software developers are interested in automated testing these days, so collaborating on these automated tests with developers, or moving (some) responsibility for these tests to developers, is a great way to free up more time for human testing.

As I explained in another response, I don’t believe in scripted manual testing for regression purposes: the automated regression tests should cover those manual scripts, so the only testing I do is of the exploratory kind. I still plan my testing, just before or as I test, to work out the kinds of things I want to explore/cover and what browsers/devices or operating systems I will use. The key to good exploratory testing, I have found, is to be testing small changes of functionality, as gradually (continuously) introducing small new pieces of functionality carries much less risk than a single large change.

I don’t think there’s a ‘right’ balance, as this will depend on your organization, your resources and how much collaboration you have. I typically spend 40% of my current time writing and maintaining end-to-end automated tests for WordPress.com, and the remaining 60% on human testing: exploratory testing of new features, continuous dogfooding on different browsers/devices, catfooding, visually recording flows for reference and historical purposes, triaging existing bugs and raising new enhancement requests. If we had more developers working on our e2e automated tests, which there has been interest in, I imagine my automation effort could drop to, say, 20%, but I won’t really know until we get there.

AMA: are test cases redundant?

Adnan asks…

Hi Alister, I have been reading your blog for quite a while now. I find it quite insightful and engaging.

I wanted to get your opinion on a question (rather, an observation) I have after doing BDD on a few projects as a tester.

If testing is acceptance-driven/BDD, then all of the business rules are captured in acceptance criteria in stories, and then in feature files when they are automated. Does this make test cases redundant? Are they actually needed anymore?

In terms of traceability, a feature file clearly maps to a story and hence to a business requirement, so unless the client has a legal requirement for documented test cases, I feel test cases are a redundant effort, unless I am missing something. Very interested to hear your experience/thoughts on this.

My response…

My short answer is yes, test cases are redundant.

My long answer is also yes; here’s why:

Ideally, I think every software system should aim to have no (zero) manual regression testing; that is, you can do a software release and be confident you haven’t introduced any regressions. This isn’t 100% test coverage, which often isn’t practical: zero manual regression means having (just) enough automated tests that you don’t have to do any manual regression testing. You still have to manually test new features, of course; it’s the existing ones you don’t need to worry about.

When you’re in this situation there’s no need for manual test cases, since your automated tests act as living, executable specifications of your system, and manual test cases would just duplicate them.
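
To make “living executable specifications” concrete, here’s a minimal sketch in Python with pytest of acceptance tests written so their names read like the acceptance criteria themselves. The story, criteria and the publish_post function are all invented for this illustration; in a real BDD setup the same rules would live in a feature file and the tests would drive your actual system rather than a toy function.

```python
# A minimal sketch of automated acceptance tests acting as a living,
# executable specification (run with pytest). The story, acceptance
# criteria and publish_post function are invented for illustration;
# in practice the tests would drive the real system under test.
import pytest


def publish_post(title: str, body: str) -> dict:
    """Stand-in for the real system under test."""
    if not title.strip():
        raise ValueError("a post must have a title")
    return {"title": title, "body": body, "status": "published"}


# Story: as an author, I want to publish a post so that readers can see it.

def test_a_post_with_a_title_and_body_is_published():
    post = publish_post("Hello world", "My first post")
    assert post["status"] == "published"


def test_a_post_without_a_title_is_rejected():
    with pytest.raises(ValueError):
        publish_post("", "a body with no title")
```

Because tests like these run against every build and describe behaviour in business terms, a separate set of written test cases would just restate them, which is exactly the redundancy Adnan describes.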

But what about new functionality? In a continuous delivery model there’s no time to write manual test cases and then execute them, and it’s pointless anyway, as they won’t live on past the user story, which will have automated tests developed alongside it. So I typically test against the acceptance criteria defined for the user story (hopefully defined in collaboration with you), and add notes to the story to show what other edge cases I envisioned and what I uncovered during story testing.

I have found this level of documentation is more than enough even for audit-heavy environments: auditors prefer to see automated test results against every build rather than a pile of out-of-date test cases in Excel spreadsheets.

AMA: How do you teach someone exploratory testing?

Paul asks…

How do you teach someone exploratory testing?

My response…

For something different, let me start with a quote:

“We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time.”
~ T. S. Eliot

As a father of three children, I believe humans are innate explorers, so exploring a system should come naturally to most people. But I’ve found a lot of people can explore a system and not find any bugs.

Techniques like session-based testing attempt to introduce measurement and control to exploratory testing so that people are more effective, but, as the gorilla basketball video has shown us, introducing a goal for a session can blind us to the things that aren’t specifically part of that goal, much like following a script can blind us to things that aren’t in that script.

So how do we teach people to find bugs by exploration?

I believe the biggest thing that stops people finding bugs by exploration is wilful blindness: choosing not to know. The way you can teach someone to be a better exploratory tester, therefore, is by teaching them to be less blind.

Margaret Heffernan explains this superbly well in Wilful Blindness, her completely non-testing, non-technical book that I think every tester should read:

“We make ourselves powerless when we choose not to know. But we give ourselves hope when we insist on looking. The very fact that wilful blindness is willed, that it is a product of a rich mix of experience, knowledge, thinking, neurons and neuroses, is what gives us the capacity to change it. Like Lear, we can learn to see better, not just because our brain changes but because we do. As all wisdom does, seeing starts with simple questions: what could I know, should I know, that I don’t know? Just what am I missing here?”

I really recommend reading that book.

Visualising software quality: using ink

Not so recently, Gojko Adzic wrote a blog post asking for readers’ suggestions on techniques to visualise the software quality of a system in development. I’ve recently been giving this some thought and came up with the following idea.

A story

Early last year at a client site, I met a genuinely lovely person working as a tester. She did traditional manual testing of a large complex system being developed in a non-iterative (big bang) manner.

I noticed her clear red pen had a small label on it: a piece of paper with a date sticky-taped on, so I asked her what it meant. She told me how she hates waste and loves to use a pen in its entirety, and the date is her way of keeping track of how long she’s been using that particular pen.

We went on talking, and I learned what she used the red pen for. What she’d do was create lots of manual test cases in a template on her computer, and then print them all, creating a large pile of paper, when it came time to execute the tests. As she executed these tests, her red pen would be used to mark failures on the test case printouts and to write notes about the defects behind those failures. As the pen was clear, and you could see how much red ink remained, she then joked about how the pen was an indicator of how good the system she worked on was. She’d used lots of red ink from her red pen since the start of the year, so the system wasn’t good! Aha!

An idea

I started reading some suggestions for visualising software quality. I see two problems with most of them: firstly, most are far too complex, and secondly, most rely on capturing detailed metrics, which creates overhead in itself.

What if you could have a lo-fidelity way to visualise software quality without creating any overhead? Perfect. Enter red and green ink.

A proposal for visualising software quality using red and green ink

Let me start by saying that this idea is freshly baked, possibly half cooked: I haven’t even tried it and I don’t know if it’ll work at all. But I think it’s cool and that’s why I am sharing it.

Imagine you’re working in a small cross-functional team developing a piece of software. You work as the tester on the team and have varied responsibilities: work with the business analyst and SME to define acceptance criteria, work with a developer to automate these acceptance criteria, and conduct exploratory (session-based) testing on individual user stories as they are completed.

At the start of the project, you’ll need three additional things:

  • Two brand new matching red and green pens with clear barrels (so you can see the ink)
  • A ream of blank white paper: roughly A4 or A3 sized (or whatever you can get your hands on)

Now you’re ready to visualise software quality

Each story has a set amount of time allocated to it for exploratory (session-based) testing. When you are about to start an exploratory testing session, grab the two pens and a couple of blank white sheets of paper. As you test, write your thoughts on the paper in either ink: good thoughts (me likey) in green, bad thoughts (bugs, crashes, poor design etc.) in red.

Instant feedback on software quality

As soon as the session is complete, stick these sheets of paper on your wall and talk to the team about them, explaining each red and green thought. The paper will instantly show what you think of the quality of the system: a predominantly green sheet is good, a predominantly red one is bad.

Longer term feedback on software quality

Over time, the ink remaining in each pen will paint a picture (excuse the pun) of the quality of your system. Are you using loads of red ink and not much green?

Thoughts?

As I mentioned, this is just an idea I recently had and I have no idea whether it’d be successful in visualising software quality. But I reckon it’d be fun.