The 10 Do’s, and 500* Don’ts of Automated Acceptance Testing

This is a talk I delivered at CukeUp! Australia in Sydney on Friday 20 November.

[Slide: 10 do’s of automated acceptance testing]

Automated acceptance testing is hard. I’ve been doing it for over 10 years and it can still be a struggle! There are at least 500 things you shouldn’t do.

[Slide: 10 do’s of automated acceptance testing (1)]

There’s never a one-size-fits-all solution, you see. That’s why we all come to these conferences, isn’t it? Because we’re still trying to work things out!

But what are the hardest parts? How can we work together to make things as simple as they can possibly be?

In my experience it’s easy to write all the specs, rules and examples. We can make them comprehensive, and we can think up different scenarios and examples. That’s the fairly easy bit.

But when we convert those straight into automated acceptance tests the fun begins! We end up with a huge number of complicated tests for large, complicated systems, which hinders velocity rather than enhancing it.

The reason we have automated tests is to speed things up rather than slowing us down. Automated tests, done well, can increase both quality and velocity at the same time. Nothing else can do that.

[Slide: 10 do’s of automated acceptance testing (2)]

Imagine driving a container ship into work each day. Would it be fast? Would it be possible to park in the car park? Would it be economical? There’s a large chance you’d arrive 3 days late, have caused multiple accidents on the way, and have cost a fortune in fuel. Large, unwieldy test suites are the same.

But it doesn’t have to be this way. I’ve seen it done right, as right as it can be, but doing it ‘right’ is not that black and white: it depends on your organization.

If testing was a vehicle, what would it look like? What do you need to be driving to get to where your organization is going each day? Everyone’s will be different and suit their type of journey: we don’t all drive Ferraris (or need to). How do we make the mythical vehicle our organization needs? Is safety or versatility or speed the priority for your organization?

[Slide: 10 do’s of automated acceptance testing (3)]

My time on high performing digital teams has shown me how automated testing can enable us to be the first to market for so many things! With organizations such as Domino’s I was involved in lots of firsts: the first online pizza tracker, the first GPS driver tracker, and the first pizza chain with an Android watch app, Apple Watch app and Chromecast app. Quick releases and cutting-edge technology were only possible because we were confident in our tests: we had our testing safety blanket, so we could push on green.

We released fast and released hard. Our automated test suite, which ran completely in parallel in 10-20 minutes, would have required 184 QA staff to execute manually in the same amount of time. There was too much at stake in such a successful company not to have confidence in every release: we had three testers at Domino’s and couldn’t have reached the required level of confidence without automated tests.

I’d like to share some of these insights, what our safety blanket was made of, this afternoon.

[Slide: 10 do’s of automated acceptance testing (4)]

When most people think about automated acceptance tests, they usually just think about any tests that represent user functionality or acceptance of a system, and they put them in one big lump; they don’t differentiate these.

It’s easy to differentiate unit tests from integration tests; not many confuse the two, and most people agree about having a healthy test pyramid.

We need to separate acceptance and end-to-end tests because they do different things, they need to be run separately, and we need to get the proportions of each right.

[Slide: 10 do’s of automated acceptance testing (5)]

There are some key differences between automated acceptance tests and automated end-to-end tests.

An automated acceptance test should be targeted and test one thing. It should test that thing in the most efficient way possible, whether that’s using a testability feature in your app that allows directly accessing your functionality, or an API or end-point.

An automated end-to-end test should cover a realistic user journey through your app, and use the app the same way a user does. You need to severely limit how many of these you create, as you only need a handful for a medium-to-large app. There are surprisingly few common flows in most apps: we develop a lot of functionality our users don’t use!

These end-to-end tests take the most time to run, as they’re “full stack”, and they’re typically the most fragile since they contain so many steps. But they do provide a lot of value, as you’ll be confident those critical user journeys in your app are working at any point in time.

An acceptance test for an online tea store may focus on specific shipping policies (e.g. free domestic shipping), or specific payment methods offered (e.g. PayPal, credit cards or POLi).

An end-to-end test would cover a customer arriving at your site, ordering some tea and paying for it via a popular payment method, all in the one test.
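
To make the distinction concrete, here’s a rough Gherkin sketch of the two styles for that tea store. None of this is from the slides: the shop, the step wording and the shipping threshold are all made up for illustration.

```gherkin
# shipping.feature — an acceptance test: targeted at one rule,
# set up as directly as possible (e.g. via an API or a testability hook)
Feature: Domestic shipping

  Scenario: Orders over $50 ship free within Australia
    Given I have a cart containing $60 of tea
    When I enter an Australian delivery address
    Then I am not charged for shipping
```

```gherkin
# order_tea.feature — an end-to-end test: one realistic journey through the whole stack
Feature: Ordering tea

  Scenario: A customer orders tea and pays with PayPal
    Given I am on the home page
    When I add "English Breakfast" tea to my cart
    And I check out with an Australian delivery address
    And I pay using PayPal
    Then I see an order confirmation with my order number
```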

It’s important to differentiate these as we want to limit our end-to-end tests and focus on making our acceptance tests as fast and as targeted as possible.

[Slide: 10 do’s of automated acceptance testing (6)]

You heard the terms ‘imperative’ and ‘declarative’ yesterday. I tend to avoid these as I find them confusing (Cucumber is imperative at heart) and use ‘intention’ and ‘implementation’ instead.

When you’re writing scenarios it’s easy to put implementation detail in them, but resist the temptation: implementation detail works against the business readability and longevity of your specifications.
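
A quick, made-up illustration of the contrast (these steps aren’t from the talk, they’re just to show the difference):

```gherkin
Feature: Customer sign in

  # Implementation: tied to fields and buttons, brittle when the UI changes
  Scenario: Sign in (implementation style)
    Given I open the "/login" page
    When I type "jane@example.com" into the "email" field
    And I type "secret" into the "password" field
    And I click the "Sign in" button
    Then I should see the "dashboard" page

  # Intention: what the system should do, not how it does it
  Scenario: Sign in (intention style)
    Given I am a registered customer
    When I sign in
    Then I see my account dashboard
```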

[Slide: 10 do’s of automated acceptance testing (7)]

I’ve seen some horrendous Cucumber scenarios. As a general rule, try to avoid putting specific UI design elements of your app, or technical implementation details, into your scenarios.

For example, having steps that involve clicking different values means they are directly tied to the UI, and any slight change will quickly break your tests. Refactoring plain-English scenarios is hard; it’s much easier to refactor the code underneath. Using a page object model, for example, lets you model a page in code in a single place, so any change to the page is made in that one spot (see the sketch below).
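
Here’s a rough sketch of a page object in Ruby with watir-webdriver; the class name, URL and selectors are hypothetical:

```ruby
# cart_page.rb — a minimal page object sketch (hypothetical URL and selectors).
# The scenarios never mention these elements, so if the UI changes,
# only this class needs to change.
require 'watir-webdriver'

class CartPage
  URL = 'https://shop.example.com/cart'

  def initialize(browser)
    @browser = browser
  end

  def visit
    @browser.goto URL
    self
  end

  def checkout
    @browser.button(id: 'checkout').click
  end

  def total
    @browser.div(class: 'cart-total').text
  end
end

# Called from a step definition, keeping the feature file free of UI detail:
#   When(/^I check out$/) { CartPage.new(@browser).visit.checkout }
```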

If you put XPath selectors directly into your Cucumber scenarios to locate elements (don’t laugh, I have seen it done), stop it. Stop it now.

Keeping your features and scenarios about ‘intention’ (what the system should do) rather than ‘implementation’ (how the system does it) makes them so much clearer and easier to understand, plus you can change your test’s implementation (to use an API, for example) without any changes to your specifications. An intention focus is less time-consuming to develop and has more chance of staying relevant. Win!

[Slide: 10 do’s of automated acceptance testing (8)]

Every test is an island. It’s important that every automated test you write is as independent as possible. Dependencies between tests make them hard to write/debug and impossible to run in parallel.

Dependent tests are also much harder to run at all, as you’ll often need complicated solutions for test ordering and for storing test state and data between tests and test runs.

[Slide: 10 do’s of automated acceptance testing (9)]

Aim to write individual, independent tests, and use testability features to set up any preconditions so that you can test what you need to test. In my example here I use a precondition (Given) statement that uses an application testability hook to retrieve a recent order and display it. If you had to create an order (using a different test) in order to run this, it would be much more complicated and harder to maintain. Keep them simple and independent. 😀
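
The slide isn’t reproduced here, but the idea looks something like this; the step wording and the testability URL are made up, and only the precondition step is sketched:

```gherkin
Feature: Order tracker

  Scenario: A recent order appears on the tracker
    # The Given uses a testability hook, so no other test has to create the order first
    Given I am viewing a recent order
    Then I see the order's status and estimated delivery time
```

```ruby
# step_definitions/order_steps.rb — /test-hooks/recent-order is a hypothetical
# testability feature that fetches an existing recent order and displays it,
# so this test stays independent of every other test.
Given(/^I am viewing a recent order$/) do
  @browser.goto 'https://shop.example.com/test-hooks/recent-order'
end
```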

[Slide: 10 do’s of automated acceptance testing (10)]

A lot of people obsess about code coverage and metrics, but I think you should obsess about the outcomes of your automated testing effort instead. You need to ask yourself why you’re doing automated testing in the first place, and measure whether what you’re doing achieves that.

It shouldn’t be because it’s cool (because it is), and it shouldn’t be because someone else is doing it (because they are); it should be because you’re trying to do something for your business, like reducing your time to market or increasing the quality of your user experience, and you want confidence every time you release.

If you understand this, then you’ll understand that there’s no point in having 100% unit test coverage if you have showstopper bugs every time you release, or it still takes a team of ten testers three weeks to manually regression test your application.

I personally love the aim of zero scripted manual regression testing with zero showstoppers after every release, with as few automated tests as possible to reach this outcome. Automated tests are time-consuming and costly: why have more than you need to reach your desired outcome?

Once you work this out, it can guide your process: you can implement automated testing for a valid business reason and make sure it is meeting its objective.

Did you find a showstopper after a release? Well just write a new automated test to ensure it doesn’t happen again.

Are you having to manually regression test a new feature? Add new automated tests until you feel confident you don’t need any manual regression testing any more.

There’s nothing good about manual scripted regression testing.

[Slide: 10 do’s of automated acceptance testing (11)]

But manual testing is amazing!

Automated testing is great for regression testing of existing functionality. But it sucks for story testing, and it doesn’t explore your system.

Automated regression tests enable you to spend more time on real testing. Better manual testing, better story testing, better exploratory testing.

Testing a story, exploring your system, spot-checking different devices and the ways you can use them, trying different user flows, trying different languages and cultures, looking for weird quirks: all this is great stuff that we humans excel at, and machines don’t.

In your automated testing fury, don’t forget good old human manual testing. Our users are human after all (at least for now).

[Slide: 10 do’s of automated acceptance testing (12)]

In all my time, I’ve never seen highly effective automated tests developed in isolation from the application development team.

If everyone on the team is responsible for quality, then you can make automated tests work. This can be hard sometimes, as it’s often culturally ingrained that software development isn’t responsible for quality, but it’s definitely doable and builds momentum once a culture of quality is seeded. It just takes one or two developers to see the light and see the massive benefits that automated testing brings. I like the concept of a quality advocate as a role on a cross-functional team: someone who doesn’t assure quality but simply advocates for it.

A build light connected to your continuous integration pipeline, placed somewhere highly prominent and with entertaining/humorous consequences for build breakers, is a great place to start to ensure team ownership.

Your chances of success are magnificently increased if you get everyone on-board and working together.

[Slide: 10 do’s of automated acceptance testing (13)]

Unless you’re running your tests frequently, they’ll get out of date quickly. Run them every time someone checks in code. If you’ve made an awesome new vehicle and you keep adding enhancements, drive it after you add each new thing: the engine will keep running smoothly, and you can check it still drives!

Don’t wait until you’ve added a whole bunch of new things to test if it works.

All tests should live in the same source control repository as your app: this means they won’t get out of date and there’s no excuse not to work on them. Put it all under the bonnet: this makes it obvious.

By running your tests frequently you’ll keep them up to date with the code-base and be more likely to succeed.

[Slide: 10 do’s of automated acceptance testing (14)]

As you build up more and more automated tests, they’ll take longer to run. You’ll need to start running them in parallel to speed up the total run time. You can quickly spin up build agents to run tests and collate results on a continuous integration testing server.

It’s best to do this from the start, as you’ll need to design your tests to be independent of each other, not sharing any data or system state; otherwise you’ll experience conflicts between tests running at the same time.
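
As a rough illustration of the mechanics (most teams would reach for something like the parallel_tests gem rather than rolling their own), here’s a minimal sketch that fans independent feature files out across several Cucumber processes:

```ruby
#!/usr/bin/env ruby
# run_parallel.rb — a minimal sketch only: split the feature files across N
# cucumber processes. This works only if every test is fully independent and
# shares no data or system state.
processes = (ENV['PROCESSES'] || '4').to_i

features = Dir['features/**/*.feature'].sort
groups   = features.each_with_index.group_by { |_, i| i % processes }.values

pids = groups.map do |group|
  files = group.map(&:first)
  Process.spawn('bundle', 'exec', 'cucumber', *files)
end

statuses = pids.map { |pid| Process.wait2(pid).last }
exit(statuses.all?(&:success?) ? 0 : 1)
```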

Running tests in parallel is not only quicker, it’s also more representative of real-world use: you wouldn’t have a single user using your app in real life (unless it’s MySpace).

[Slide: 10 do’s of automated acceptance testing (15)]

At Domino’s Digital we started running all our automated tests against 3 or 4 different browsers for every run. It quickly became a nightmare!

Different browsers behave slightly differently, and we spent a lot of effort trying to resolve slight inconsistencies. And the extra runs didn’t even find browser-specific bugs.

Most browser-specific bugs are about layout or aesthetics, and won’t be found by functional automated tests anyway.

Pick the browser most used by your customers (most likely Chrome) and automate against that.
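
One way to keep that decision in a single place is a support hook. Here’s a sketch, assuming Cucumber with watir-webdriver, with an environment variable override for the occasional manual cross-browser spot check:

```ruby
# features/support/env.rb — a minimal sketch: default every run to the browser
# your customers actually use, with an override for occasional spot checks,
# e.g. BROWSER=firefox bundle exec cucumber
require 'watir-webdriver'

BROWSER = (ENV['BROWSER'] || 'chrome').to_sym

Before do
  @browser = Watir::Browser.new BROWSER
end

After do
  @browser.close
end
```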

Reliable automated tests against a single browser trump flaky automated tests against every browser, 86% of the time, every time.

[Slide: 10 do’s of automated acceptance testing (16)]

Automated acceptance tests cost a lot of money, so you need to get a lot of value back.

Look for any opportunity to extract extra value for little outlay.

What if you could run your same tests against every language your app supports and capture screenshots for the i18n team? Little effort -> max value.

What if you ran your automated end-to-end tests against production after every deployment so you have almost instant confidence your deployment was successful? Little effort -> max value.

At Domino’s we ran all our end-to-end tests after every staging and production deployment. We needed to make sure the orders they created carried certain identifiers so they could be automatically cancelled, but this was trivial compared to the benefits the runs yielded. With a green acceptance run we were very confident a deployment had succeeded.
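
I haven’t shown the Domino’s mechanics here, but the general idea of marking test orders so they can be cleaned up automatically might look like this; the marker, endpoint and helper are all hypothetical:

```ruby
# features/support/test_orders.rb — a sketch of keeping production runs safe:
# every order a scenario creates carries a recognisable test marker, and an
# After hook cancels it. The marker, endpoint and helper are hypothetical.
require 'net/http'
require 'uri'

TEST_ORDER_MARKER = "AUTOTEST-#{Time.now.to_i}"

module TestOrders
  def self.created
    @created ||= []   # order ids recorded by the ordering steps
  end
end

# Hypothetical internal testability endpoint that cancels an order by id.
def cancel_order(order_id)
  Net::HTTP.post_form(URI('https://shop.example.com/test-hooks/cancel-order'),
                      'id' => order_id)
end

After do
  TestOrders.created.each { |order_id| cancel_order(order_id) }
  TestOrders.created.clear
end
```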

Always be seeking value from your tests. Run them in dev, test, staging and production.

[Slide: don’t stop being awesome]

The title of my talk said I had 500 don’ts, but I actually don’t have that many.

Don’t stop trying things. Try the craziest combinations of things you can imagine. You can be pretty sure your users will try it too.

Everything is about experimentation.

Just because I said to do something here this afternoon is no guarantee it’ll work for you: these are my experiences, which I hope you can learn from, or adapt an idea from, to build your own safety blanket for your testing journey.

You’ll work it out: look at it from angles only humans can.

Just don’t stop being awesome!


Author: Alister Scott

Alister is an Excellence Wrangler for Automattic.

25 thoughts on “The 10 Do’s, and 500* Don’ts of Automated Acceptance Testing”

  1. I shared number 9 with colleagues. Nice to see that in print.

    The rest I think we are doing except 10. I’ll make that next!


  2. This was wonderful! I’m going to be building a test automation practice where I work now early next year and this absolutely will be informing my thinking.


  3. Alister, thanks for the post.
    – agree with your opinion about testing on multiple browsers. We tried to run on different browsers, but that seemed to just catch issues in Watir-webdriver :-)
    – as for DO#2, intent vs. implementation: it is not good to write too much detail in steps, but it is not good either to define steps too generally as intent. We did it that way in the beginning. Later we found it is easier to read/maintain to write steps as the actions you interact with on screen, but not down to UI element level. That makes it clear what the workflow under test does, and easier to find/locate issues.


    1. Later we found it is easier to read/maintain to write steps as the actions you interact with on screen, but not down to UI element level.

      If you keep your tests short, I don’t think it is a problem. If you have very long tests, I can see this becoming a problem.


  4. Nice write up Alister. We have been discussing 1, 7, and 9.

    I don’t think everyone is on the same page yet for #1. Say we want to test the checkout button on the cart page. The end to end test would go through every step necessary to test that. However the acceptance test could use the API and cookie insertion to skip all the steps and take us directly to the cart page with all the needed data. Is this correct? We are using watir-webdriver.
    How would this work with native mobile apps? We are using appium for now but considering native tools.

    Running tests on every commit would be great but since code does not get deployed on every commit we will be doing it on every successful deploy.

    Running tests only on one browser is interesting. This is another debate in our group. There has been a big push to get all our tests running on the top browsers. Seems like a big effort with low returns.


    1. However the acceptance test could use the API and cookie insertion to skip all the steps and take us directly to the cart page with all the needed data. Is this correct?

      Yes, you build a testability feature into your app so you can hit a URL with some query string parameters and be immediately redirected.

      How would this work with native mobile apps?

      The apps I have worked on have wrapped web views so we tested it using this same style without the actual app. I am not sure how exactly you could do it for a purely native app.

      Running tests only on one browser is interesting. This is another debate in our group. There has been a big push to get all our tests running on the top browsers. Seems like a big effort with low returns.

      As stated, we started this way and quickly ditched it as it wasn’t good value at all.


  5. Great article! Would you explain what, in your view, is the difference between Acceptance Testing and BDD testing? I see BDD testing as more like what you describe as E2E here.


    1. Hi Marinela
      BDD (Behaviour Driven Development) is more of an approach where people work together to define the behaviour/expectations of a system.
      Automated acceptance tests or automated e2e tests are more of an output of BDD.


  6. There are a lot of ‘Do’s and Don’t’s’ articles on automated testing. Some of them are horrible, but most of them are rather sensible. Yours is definitely in the second category. But still I find that – even though most of your advice is sound – I can only subscribe to approximately half of it as general advice. This holds for yours as well as the others. (And these are probably just the ones where the specifics of my environment and the specifics of yours coincide.) Especially, I found your last two ‘do’s’ strangely backwards: why not test in all the browsers you can? The cost is minimal, and half of the time you find a problem it is browser specific, because the developers only tested in one browser. (This may also be related to the fact that I use TestComplete, which possibly makes cross-browser testing easier.) On the other hand, I would advise restraint in testing in multiple environments. We test in both ‘story’ development branches and the ‘master’ development branch, but it has not been cheap to enable this and it is somewhat brittle. Running the tests in development will be even more expensive to set up. (Again, this may be related to using a tool where you are dependent on NameMapping, which may be easier in a lower-level tool.)


    1. Thanks for your feedback.

      But still I find that – even though most of your advice are sound – I can only subscribe to approximately half of them as general advice

      Of course they are not applicable to everyone; did you see the bit where I said: “Just because I said to do something here this afternoon is no guarantee it’ll work for you: these are my experiences, which I hope you can learn from, or adapt an idea from, to build your own safety blanket for your testing journey.”

      Why not test in all the browsers you can? The cost is minimal and half of the time you find a problem, it is browser specific, because the developers only tested in one browser.

      I’ve never had an automated functional test find a browser specific bug. Perhaps you work on different apps than I have. I have found browser specific issues by manually cross checking different browsers.
      Selenium works differently across different browsers, perhaps TestComplete is better in this regard.

      Running the tests in development will be even more expensive to set up.

      I have found that unless you can run your tests in a local development environment then you’ll always be writing your tests ‘after the fact’ which means they’ll be more likely to get out of date and break because of changes someone has committed.


      1. Sorry that should be ‘Running the tests in production’ (sorry, what I wrote did not make sense). We have a development ‘master’ branch that each story is merged to when finished. Tests will very often break because of changes in the software. This is an unavoidable fact – which is one of the reasons that you are right that testing needs to be done in the development environment: even automated testing may take time and even though you may think that testing immediately after release is ‘good enough’ (I don’t), the ‘immediately’ will seldom work.


  7. Hi Alister,
    Great article! I like the subtle but important distinction between Acceptance Tests and End-to-End Tests. I like to call End-to-End Tests “demos” as I record them as videos for all to see.

    One thing that you wrote that I’m not sure about:
    (Cucumber is imperative at heart)

    Isn’t Cucumber considered declarative? As in, you’re declaring “what” and not “how”. Putting in the “how” with the implementation details would make it imperative.



  8. Excellent article!

    I’m especially happy to see you take a stand with number 9. Running automated tests on multiple browsers seems to be an obsession with many people. Those same people do not realise that their test assertions are hopelessly inadequate for picking up most browser compatibility bugs that are likely to occur.

    In addition, browser compatibility problems these days are not what they used to be. At our company, once we stopped supporting IE6, browser compatibility errors went down dramatically. Now that we don’t have to support IE7 or IE8 either, browser compatibility issues have become very rare indeed.


  9. Nice article as always.

    One question though: point 3 says to write independent automated tests, but how do you write independent automated tests if you want to write e2e? Aren’t the steps of an end-to-end test dependent on each other to finish the workflow?


    1. I would put the entire workflow in one test.
      For example, in a CRUD (create, read, update, delete) app I would include creating something, modifying and deleting or canceling it in the one test. And have tests for different types of entities.
      I avoid dependencies even in end to end tests.


Comments are closed.