100,000 e2e selenium tests? Sounds like a nightmare!

This story begins with a promo email I received from Sauce Labs…

“Ever wondered how an Enterprise company like Salesforce runs their QA tests? Learn about Salesforce’s inventory of 100,000 Selenium tests, how they run them at scale, and how to architect your test harness for success”

[Image: Sauce Labs promotional email]

100,000 end-to-end selenium tests and success in the same sentence? WTF? Sounds like a nightmare to me!

I dug further and got burnt by the molten lava: the slides confirmed my nightmare was indeed real:

[Image: Salesforce Selenium slide]

“We test end to end on almost every action.”

Ouch! (and yes, that is an uncredited image from my blog used in the completely wrong context)

But it gets worse. Salesforce have 7,500 unique end-to-end WebDriver tests which are run on 10 browsers (IE6, IE7, IE8, IE9, IE10, IE11, Chrome, Firefox, Safari & PhantomJS) on 50,000 client VMs that cost multiple millions of dollars, totaling 1 million browser tests executed per day (which works out to 20 Selenium tests per machine per day, or over an hour to execute each test).

[Image: Salesforce UI testing portfolio slide]

My head explodes! (and yes, another uncredited image from this blog used out of context and with my title removed).

But surely that’s only one place right? Not everyone does this?

A few weeks later I watched David Heinemeier Hansson say this:

“We recently had a really bad bug in Basecamp where we actually lost some data for real customers and it was incredibly well tested at the unit level, and all the tests passed, and we still lost data. How the f*#% did this happen? It happened because we were so focused on driving our design from the unit test level we didn’t have any system tests for this particular thing.
…And after that, we sort of thought, wait a minute, all these unit tests are just focusing on these core objects in the system, these individual unit pieces, it doesn’t say anything about whether the whole system works.”

~ David Heinemeier Hansson – Ruby on Rails creator

and read that he had written this:

“…layered on top is currently a set of controller tests, but I’d much rather replace those with even higher level system tests through Capybara or similar. I think that’s the direction we’re heading. Less emphasis on unit tests, because we’re no longer doing test-first as a design practice, and more emphasis on, yes, slow, system tests (Which btw do not need to be so slow any more, thanks to advances in parallelization and cloud runner infrastructure).”

~ David Heinemeier Hansson – Ruby on Rails creator

I started to get very worried. David is the creator of Ruby on Rails and very well respected within the ruby community (despite being known to be very provocative and anti-intellectual: the ‘Fox News’ of the ruby world).

But here is dhh telling us to replace lower level tests with higher level ‘system’ (end to end) tests that use something like Capybara to drive a browser because unit tests didn’t find a bug and because it’s now possible to parallelize these ‘slow’ tests? Seriously?

Speed has always been seen as the Achilles’ heel of end to end tests, because everyone knows that fast feedback is good. But parallelization solves this, right? We just need 50,000 VMs like Salesforce?

No.

Firstly, parallelization of end to end tests actually introduces its own problems, such as what to do with tests that you can’t run in parallel (for example, ones that change global state of a system such as a system message that appears to all users), and it definitely makes test data management trickier. You’ll be surprised the first time you run an existing suite of sequential e2e tests in parallel, as a lot will fail for unknown reasons.
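To make the first problem concrete, here’s one pragmatic workaround, sketched in Python with pytest and the pytest-xdist plugin (the test names and the system-message feature are invented for illustration): pin the tests that mutate global state to a single group so they never run concurrently with each other, while the rest of the suite still parallelizes.

```python
import pytest

# Safe to parallelize: touches only its own isolated test data.
def test_user_can_update_own_profile():
    assert True  # ... real assertions against per-test data

# These two tests change state that every user sees (a hypothetical
# system-wide banner), so they must never overlap with each other.
# With pytest-xdist's --dist loadgroup mode, tests sharing an
# xdist_group name run on the same worker, i.e. serially.
@pytest.mark.xdist_group(name="global_state")
def test_admin_can_set_system_message():
    assert True  # ... set the banner, assert it saved

@pytest.mark.xdist_group(name="global_state")
def test_system_message_is_shown_to_all_users():
    assert True  # ... assert the banner appears for a normal user
```

Run with `pytest -n auto --dist loadgroup`: the grouped tests execute serially relative to one another, and everything else fans out across workers. It works, but notice the cost: every test author now has to know which tests touch global state.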

Secondly, the feedback still isn’t fast enough to give the person who made a change confidence in it: by the time your app has been deployed and the parallel end-to-end tests have run, the person who made the change has most likely moved on to something else.

But the real problem with end to end tests isn’t actually speed. The real problem with end to end tests is that when end to end tests fail, most of the time you have no idea what went wrong so you spend a lot of time trying to find out why. Was it the server? Was it the deployment? Was it the data? Was it the actual test? Maybe a browser update that broke Selenium? Was the test flaky (non-deterministic or non-hermetic)?

Rachel Laycock and Chirag Doshi from ThoughtWorks explain this really well in their recent post on broken UI tests:

“…unlike unit tests, the functional tests don’t tell you what is broken or where to locate the failure in the code base. They just tell you something is broken. That something could be the test, the browser, or a race condition. There is no way to tell because functional tests, by definition of being end-to-end, test everything.”
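To see the difference concretely, compare what each kind of failure actually tells you. Here’s a minimal sketch in Python (the `shipping_cost` function, the URL and the element ID are all invented): when the unit test fails, the failure names one function and one input; when the end-to-end version fails, a timeout implicates everything between the browser and the database.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def shipping_cost(weight_kg: float) -> float:
    return 5.0 + 1.2 * weight_kg

# Unit level: if this fails, you know exactly which function broke,
# with which input, and what it returned instead.
def test_shipping_cost_for_two_kg():
    assert shipping_cost(2.0) == pytest.approx(7.4)

# End-to-end level: the same regression surfaces as a timeout that
# could equally mean a broken server, a bad deploy, stale test data,
# a changed locator, a browser/driver mismatch, or a slow network.
def test_checkout_shows_shipping_cost():
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/checkout")
        total = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "shipping-total"))
        )
        assert "7.40" in total.text
    finally:
        driver.quit()
```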

So what’s the answer? On one hand you have David’s FUD about unit testing not catching a major bug in Basecamp. On the other hand, you have to face the issue that a large suite of end to end tests will most likely result in you spending all your time investigating test failures instead of quickly delivering new features.

If I had to choose just one, I would definitely choose a comprehensive suite of automated unit tests over a comprehensive suite of end-to-end/system tests any day of the week.

Why? Because it’s much easier to supplement comprehensive unit testing with human exploratory end-to-end system testing (and you should anyway!) than it is to manually verify that units function correctly from the higher system level, and, as explained above, it’s much easier to know why a unit test is broken. It’s also much easier to add automated end-to-end tests later than to retrofit unit tests (because your code probably won’t be testable, and making it testable after the fact can introduce bugs).
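To illustrate the testability point, here’s a minimal Python sketch (the reporting example is invented): the first version hard-wires its data source, so the only way to exercise the summing logic is through a running system; the second accepts the data source as a parameter, so a unit test can drive the logic directly with a stub.

```python
from typing import Callable, Iterable

def query_production_db(sql: str, *params) -> Iterable[dict]:
    # Stand-in for a real database client.
    raise RuntimeError("needs a live database")

# Hard to unit test: the dependency is baked in, so verifying the
# summing logic means standing up the whole system.
def monthly_total_untestable(month: str) -> float:
    rows = query_production_db("SELECT amount FROM sales WHERE month = ?", month)
    return sum(r["amount"] for r in rows)

# Testable: the data source is injected, so a test can pass a stub.
def monthly_total(month: str, fetch_rows: Callable[[str], Iterable[dict]]) -> float:
    return sum(r["amount"] for r in fetch_rows(month))

def test_monthly_total_sums_amounts():
    stub = lambda month: [{"amount": 10.0}, {"amount": 2.5}]
    assert monthly_total("2014-05", stub) == 12.5
```

Retrofitting the second shape onto a codebase written like the first means changing signatures and call sites everywhere, which is exactly the after-the-fact risk I mean.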

To answer our question, let’s imagine for a minute that you were responsible for designing and building a new plane. You obviously need to test that your new plane works. You build a plane by creating parts (units), putting these together into components, and then putting all the components together to build the (hopefully) working plane (system).

If you only focused on unit tests, like David mentioned in his Basecamp example, you could be pretty confident that each piece of the plane had been tested well and works correctly, but you wouldn’t be confident it would fly!

If you only focused on end to end tests, you’d need to fly the plane to check that the individual units and components actually work (which is expensive and slow), and even then, if/when it crashed, you’d need to examine the black box to (hopefully) understand which unit or component didn’t work, as we currently do when end-to-end tests fail.

But, obviously we don’t need to choose just one. And that’s exactly what Airbus does when it’s designing and building the new Airbus A350:

As with any new plane, the early design phases were riddled with uncertainty. Would the materials be light enough and strong enough? Would the components perform as Airbus desired? Would parts fit together? Would it fly the way simulations predicted? To produce a working aircraft, Airbus had to systematically eliminate those risks using a process it calls a “testing pyramid.” The fat end of the pyramid represents the beginning, when everything is unknown. By testing materials, then components, then systems, then the aircraft as a whole, ever-greater levels of complexity can be tamed. “The idea is to answer the big questions early and the little questions later,” says Stefan Schaffrath, Airbus’s vice president for media relations.

The answer, which has been the answer all along, is to have a balanced set of automated tests across all levels, with a disciplined approach to having a larger number of smaller specific automated unit/component tests and a smaller number of larger general end-to-end automated tests to ensure all the units and components work together. (My diagram below with attribution)

[Image: Automated testing pyramid diagram]
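One lightweight way to encode that balance in a real suite, sketched with Python’s pytest (the `e2e` marker name and the example functions are my own convention, not a standard): tag the small number of end-to-end tests so the large, fast unit suite runs on every change and the slow browser suite runs less frequently.

```python
import pytest

def apply_discount(price: float, rate: float) -> float:
    # Business rule: discounts are capped at 50%.
    return price * (1.0 - min(rate, 0.5))

# The fat base of the pyramid: many small, fast, specific tests.
def test_discount_is_capped_at_50_percent():
    assert apply_discount(100.0, 0.8) == 50.0

# The narrow tip: a few broad tests, explicitly marked as end-to-end.
# (Register the marker in pytest.ini so pytest doesn't warn about it.)
@pytest.mark.e2e
def test_full_purchase_flow_in_browser():
    ...  # drive a real browser through search, cart and checkout
```

`pytest -m "not e2e"` then gives fast feedback on every change, while `pytest -m e2e` runs the small end-to-end layer on a schedule or before a release.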

Having just one level of tests, as shown by the stories above, doesn’t work (but if it did, I would rather it be automated unit tests). Just like a diet of only chocolate doesn’t work, nor does a diet that deprives you of anything sweet or enjoyable (but if I had to choose, I would rather a diet of only healthy food than one of just chocolate).

Now, if we could just convince Salesforce to be more like Airbus and not fly a complete plane (or 50,000 planes) to test everything every time they make a change, and stop David from continuing his anti-unit, pro-system-testing, anti-intellectual rampage, which will result in more damage to our industry than it’s worth.

My thoughts on tddGate

If you’ve somehow managed to miss the keynote, blog post and subsequent shitstorm about it, David Heinemeier Hansson (dhh), creator of Ruby on Rails, has recently come out and declared test-driven development (TDD) dead. I’ve dubbed it ‘tddGate‘.

I find it rather ironic that David advocates the importance of clarity of code in his keynote, yet his objections to TDD throughout his keynote and posts are anything but clear (to me at least).

For example:

  • I don’t fully comprehend his science/pseudoscience/diet analogy in his keynote: he claims TDD presents itself as science-based because it uses metrics and coverage, yet, like a diet, most people can’t make it work, so it’s pseudoscience; but he also believes information system development isn’t science at all because it’s actually more like writing French poetry. Very confusing.
  • He interchangeably uses TDD to mean Test Driven Development and Test Driven Design.
  • He seems to imply you can only do TDD if you’re writing unit tests and you can only write unit tests if they are isolated by using dependency injection (DI) and mocks. He also seems fairly negative on unit testing, DI and mocks, therefore negative on TDD, and wants it dead so he can write (slower) system tests without using TDD, mocks or DI.
  • David gives an example of why unit tests aren’t valuable because they didn’t catch a Basecamp bug to do with attachments (hint: the issue isn’t unit testing per se, but having only one style of tests).
  • Because David thinks TDD is about unit testing, he sees driving system design from units as bad (people don’t care about units, they care about the whole thing), and he doesn’t see the importance of testability.
  • Most importantly, he seems to not fully understand TDD (or at least doesn’t communicate his understanding very well):

 “TDD was what I was supposed to do. With TDD I was supposed to write all my tests first and then I would be allowed to write my code. It just didn’t work.” 25:29

The one subject that I wholeheartedly agree with David on is the importance of reading other people’s code. Writers read much more than they write, so should programmers.

So, here’s some of my current thoughts on TDD:

  • I have met few programmers who write unit tests, let alone who practice TDD.
  • Self-testing code (e.g. automated tests) is critically important to the health of a codebase, as it allows someone to confidently make changes and/or refactor without worrying that they may have inadvertently broken something.
  • One way to achieve self-testing code is via TDD, but it’s by no means the only way. You can easily achieve a self-testing codebase by writing tests after code (or even having someone else write the tests).
  • There are circumstances where it doesn’t make sense to write tests first (see some examples here).
  • It’s common to practice TDD by writing unit tests but it’s not the only way to practice TDD (for example: you could write an integration test first or an acceptance test first).
  • It’s common to write ‘isolated’ unit tests using DI and test doubles (so they’re fast and decoupled), but that’s not the only way to write unit tests (you can interact with your database and test real dependencies; those aren’t isolated unit tests, but they are unit tests nonetheless).
  • I personally find that practicing TDD and writing unit tests first results in a clearer, better-designed API, as you’re calling your own API and can design it how you like, but it isn’t the only way to achieve a clear API.
  • I also find practicing TDD very effective for bug fixes, as it’s easy to write a failing test and have confidence you’ve fixed the problem (and not created any others) when the test finally passes (see the sketch after this list).
  • I don’t trust a test I haven’t seen fail, and this is much easier to achieve with TDD. You can also achieve it after the fact by (temporarily) changing your code to not work.
  • Unlike David, I strongly believe in the value of testability.
  • I believe it’s important to have the right mix of different types of automated tests for your context. Most often this means more unit tests and fewer end-to-end tests, but there are some cases where this is skewed. A diet of just one, like eating only chocolate, or completely banning sweet foods, is unhealthy and unsustainable.
  • Do what works for you personally and in your context. If you love the flow you achieve doing TDD that’s great, if you can get self testing code another way, that’s equally good.
  • If you don’t enjoy it and it doesn’t work for you, don’t make yourself do something like TDD just because someone else says you should. But equally, don’t stop doing something like TDD that you like just because someone else declares it ‘dead’.
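Finally, a small sketch of the bug-fix and ‘seen it fail’ points above (the `slugify` function and the bug report are invented): first write a test that reproduces the defect and watch it fail against the old code, then make the fix and watch it pass.

```python
import re

# Step 1 (red): reproduce the bug report as a failing test.
# Report: "Report for May/2014" produced the slug "report-for-may/2014",
# leaking a slash into the URL.
def test_slugify_replaces_punctuation_not_just_spaces():
    assert slugify("Report for May/2014") == "report-for-may-2014"

# Step 2 (green): the old version only replaced spaces:
#     return title.lower().replace(" ", "-")
# Replacing every run of non-alphanumeric characters fixes it.
def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
```

Temporarily swapping the old one-liner back in makes the test fail again, which is how you know the test itself is trustworthy: the after-the-fact equivalent of seeing it fail first under TDD.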