Automated testing quick wins, low-hanging fruit, breathing space & oxygen

I’ve seen a lot of automated testing efforts fail, and I’ve also had to personally deal with the repercussions of, and the expectations set by, those failed efforts.

For example, I clearly remember my first day at a new job, one I had moved 1,500 km to Brisbane for. I was introduced to the Project Director, whose first words to me were:

“I have never seen automated testing succeed, so I will be watching you very closely!”

Not the best thing to hear on your first day in a job!

I’ve been thinking a fair amount about why automated testing fails to meet expectations. Sure, test tool vendors and consultants generate a lot of sales hype, which doesn’t help, and there are also practitioners out there without the skills or discipline to deliver successful automated testing solutions, but there must be something else.

The problem, I believe, is that the time and effort needed to deliver a successful automated testing solution is huge. An automated testing framework might be deemed unsuccessful before it has even been given a chance to succeed! This is why I am a strong believer in first identifying some automated testing quick wins, some low-hanging fruit, pardon the idiom.

A quick win is something that requires a small amount of effort (input) for a large amount of gain (output). These are sometimes hard to find, but they almost always deliver a good outcome: some breathing space.

An example is a simple application monitoring script. A place I worked had a problem with the availability of a public-facing web app. Server monitoring wasn’t effective: the web/app server could be running fine while no one could actually log on via the web! There was no way to know the app was unavailable to users other than the complaints arriving by email and phone from unhappy people.

It took me only a few hours to develop and test a Ruby/Watir script that I set to run continuously, monitoring the web app’s availability. If the web app was unavailable, it would send an email/SMS to the people responsible for getting it running again.
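The original script isn’t shown here, but the sketch below captures the idea using Watir’s browser API. Everything specific in it is an assumption: the URL, the login-field check, the alert addresses, the SMTP host and the five-minute polling interval are all placeholders, and a plain email stands in for the email/SMS alert.

```ruby
require 'watir'     # gem install watir
require 'net/smtp'

# A minimal sketch only: every value below is a placeholder, not taken
# from the original script.
APP_URL        = 'https://app.example.com/login'
CHECK_INTERVAL = 300                    # seconds between checks
ALERT_FROM     = 'monitor@example.com'
ALERT_TO       = 'oncall@example.com'
SMTP_HOST      = 'smtp.example.com'

# True if the login page loads and renders its login form. Checking for a
# real page element matters: a server that is up but serving an error page
# should still count as "down" to users.
def app_available?
  browser = Watir::Browser.new :chrome
  browser.goto APP_URL
  browser.text_field(name: 'username').present?
rescue StandardError
  false
ensure
  browser&.close
end

# Sends a plain email via the standard library; the original sent
# email/SMS, but the transport is incidental to the idea.
def send_alert
  message = <<~MAIL
    From: #{ALERT_FROM}
    To: #{ALERT_TO}
    Subject: Web app unavailable

    #{APP_URL} failed its availability check at #{Time.now}.
  MAIL
  Net::SMTP.start(SMTP_HOST) do |smtp|
    smtp.send_message(message, ALERT_FROM, ALERT_TO)
  end
end

# Poll forever, alerting once per outage (on the up-to-down transition).
was_up = true
loop do
  up = app_available?
  send_alert if was_up && !up
  was_up = up
  sleep CHECK_INTERVAL
end
```

Alerting only when the app transitions from up to down is a guess at the behaviour, but it’s the sensible default: it keeps the on-call inbox from being flooded while the app stays down.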

The script was hugely successful. Downtime was reduced drastically, and because people could now see patterns in when the app was going down, it became easier to determine the cause of the availability problem. Since the script used only free/open source software (Ruby and Watir), there were no costs and no delays in acquiring the software needed. The reaction was: ‘wow, we didn’t know we could do this so easily’.

I recently attended the Test Automation Workshop on the Gold Coast in Australia, and one presentation stuck in my mind. It was by Don Taylor, who used to work in my team in Canberra, and it was called “Oxygen for Functional Automated Testing”. He told us that quite a few people emailed him when the workshop program went out, asking: “What’s this tool called Oxygen?” But it wasn’t a tool at all; the talk was about oxygen, the breathing space you need for successful automated testing.

And that’s what I consider the biggest output of these quick wins; it’s what automated testing needs to be successful. The breathing space generated by short-term quick wins has allowed me to invest the time and effort needed to create robust automated testing frameworks, designed to be maintainable and successful in the long term. A whole heap of oxygen.

Author: Alister Scott

Alister is an Excellence Wrangler for Automattic.