AMA: gaining confidence without integration tests

Lucy asks…

How do you gain confidence with the testing pyramid mentioned in this post? Unit testing is owned by the developers, and you have only just enough e2e tests.


AMA: the eye above my testing pyramid

Guido Cangelosi asks…

Why use Illuminati symbolism in “your” pyramid? (and please, do not say “by chance”)

[Image: Automated Testing Pyramid]

My response…

The eye symbol isn’t meant to represent the Illuminati; I don’t consider testers or those who perform exploratory testing to be elitist or anything like that. It actually represents the Eye of Providence, which is an all-seeing eye.

I believe the role of a modern tester is more than just ‘testing’: it’s to advocate for quality, that is, to make sure we have a good product, and that we have good automated tests (so we continue to have a good product).

So testers need to be an ‘all-seeing eye’ over both our product and our tests. The eye sits up there, floating above the automated tests, keeping watch over them.

The diagram is missing some rays of light, so perhaps I’ll include those in my next version :)

100,000 e2e selenium tests? Sounds like a nightmare!

This story begins with a promo email I received from Sauce Labs…

“Ever wondered how an Enterprise company like Salesforce runs their QA tests? Learn about Salesforce’s inventory of 100,000 Selenium tests, how they run them at scale, and how to architect your test harness for success”

[Image: Sauce Labs promo email]

100,000 end-to-end selenium tests and success in the same sentence? WTF? Sounds like a nightmare to me!

I dug further and got burnt by the molten lava: the slides confirmed my nightmare was indeed real:

[Image: Salesforce Selenium slide]

“We test end to end on almost every action.”

Ouch! (and yes, that is an uncredited image from my blog used in the completely wrong context)

But it gets worse. Salesforce have 7,500 unique end-to-end WebDriver tests, which are run on 10 browsers (IE6, IE7, IE8, IE9, IE10, IE11, Chrome, Firefox, Safari & PhantomJS) across 50,000 client VMs that cost multiple millions of dollars, totaling 1 million browser tests executed per day (which works out to 20 Selenium tests per day, per machine, or over an hour to execute each test).
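Those headline numbers are worth a quick back-of-envelope check. Here’s a minimal sketch (the figures are the ones quoted above, not independently verified):

```python
# Back-of-envelope check of the Salesforce figures quoted above.
unique_tests = 7_500        # unique end-to-end WebDriver tests
browsers = 10               # IE6-IE11, Chrome, Firefox, Safari, PhantomJS
vms = 50_000                # client VMs
tests_per_day = 1_000_000   # browser tests executed per day

combinations = unique_tests * browsers               # 75,000 test/browser pairs
tests_per_vm_per_day = tests_per_day / vms           # = 20.0
minutes_per_test = (24 * 60) / tests_per_vm_per_day  # = 72.0 minutes

print(f"{combinations:,} test/browser combinations")
print(f"{tests_per_vm_per_day:.0f} tests per VM per day")
print(f"about {minutes_per_test:.0f} minutes per test, i.e. over an hour")
```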

[Image: Salesforce UI testing portfolio]

My head explodes! (and yes, another uncredited image from this blog used out of context and with my title removed).

But surely that’s only one place, right? Not everyone does this?

A few weeks later I watched David Heinemeier Hansson say this:

“We recently had a really bad bug in Basecamp where we actually lost some data for real customers and it was incredibly well tested at the unit level, and all the tests passed, and we still lost data. How the f*#% did this happen? It happened because we were so focused on driving our design from the unit test level we didn’t have any system tests for this particular thing.
…And after that, we sort of thought, wait a minute, all these unit tests are just focusing on these core objects in the system, these individual unit pieces, it doesn’t say anything about whether the whole system works.”

~ David Heinemeier Hansson – Ruby on Rails creator

and read that he had written this:

“…layered on top is currently a set of controller tests, but I’d much rather replace those with even higher level system tests through Capybara or similar. I think that’s the direction we’re heading. Less emphasis on unit tests, because we’re no longer doing test-first as a design practice, and more emphasis on, yes, slow, system tests (Which btw do not need to be so slow any more, thanks to advances in parallelization and cloud runner infrastructure).”

~ David Heinemeier Hansson – Ruby on Rails creator

I started to get very worried. David is the creator of Ruby on Rails and very well respected within the ruby community (despite being known to be very provocative and anti-intellectual: the ‘Fox News’ of the ruby world).

But here is dhh telling us to replace lower-level tests with higher-level ‘system’ (end-to-end) tests that use something like Capybara to drive a browser, because unit tests didn’t find a bug and because it’s now possible to parallelize these ‘slow’ tests? Seriously?

Speed has always been seen as the Achilles’ heel of end-to-end tests, because everyone knows that fast feedback is good. But parallelization solves this, right? We just need 50,000 VMs like Salesforce?

No.

Firstly, parallelizing end-to-end tests introduces its own problems, such as what to do with tests that you can’t run in parallel (for example, ones that change the global state of a system, such as a system message that appears to all users), and it definitely makes test data management trickier. You’ll be surprised the first time you run an existing suite of sequential e2e tests in parallel, as a lot will fail for reasons you can’t immediately explain.
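One common mitigation, sketched below with pytest and pytest-xdist (the ‘serial’ marker name is my own convention, not anything built in): tag the tests that mutate global state, run everything else in parallel, and run the tagged tests in a separate sequential pass.

```python
import pytest

# Tests that mutate shared/global state get a custom "serial" marker
# (register it under [pytest] markers in pytest.ini to avoid warnings).

@pytest.mark.serial
def test_broadcast_system_message_to_all_users():
    ...  # changes state visible to every other session; unsafe in parallel

def test_user_views_own_profile():
    ...  # hermetic: safe to run in parallel

# Run the hermetic bulk in parallel, then the serial stragglers:
#   pytest -m "not serial" -n auto
#   pytest -m serial
```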

Secondly, the test feedback to someone who’s made a change still isn’t fast enough to enable confidence in making changes: by the time your app has been deployed and the parallel end-to-end tests have run, the person who made the change has most likely moved on to something else.

But the real problem with end-to-end tests isn’t actually speed. The real problem is that when end-to-end tests fail, most of the time you have no idea what went wrong, so you spend a lot of time finding out why. Was it the server? Was it the deployment? Was it the data? Was it the actual test? Was it a browser update that broke Selenium? Was the test flaky (non-deterministic or non-hermetic)?

Rachel Laycock and Chirag Doshi from ThoughtWorks explain this really well in their recent post on broken UI tests:

“…unlike unit tests, the functional tests don’t tell you what is broken or where to locate the failure in the code base. They just tell you something is broken. That something could be the test, the browser, or a race condition. There is no way to tell because functional tests, by definition of being end-to-end, test everything.”

So what’s the answer? On one hand, you have David’s FUD about unit testing not catching a major bug in Basecamp. On the other hand, a large suite of end-to-end tests will most likely result in you spending all your time investigating test failures instead of delivering new features quickly.

If I had to choose just one, I would definitely choose a comprehensive suite of automated unit tests over a comprehensive suite of end-to-end/system tests any day of the week.

Why? Because it’s much easier to supplement comprehensive unit testing with human exploratory end-to-end system testing (and you should anyway!) than to manually verify that units function from the higher system level, and it’s much easier to know why a unit test is broken, as explained above. It’s also much easier to add automated end-to-end tests later than to retrofit unit tests, because your code probably won’t be testable, and making it testable after the fact can introduce bugs.
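To make the failure-locality point concrete, here’s a hedged sketch (calculate_total, the URL and the element id are all hypothetical). When the unit test fails, the assertion points at one function; when the Selenium test fails, the cause could be anywhere in the stack:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def calculate_total(prices, tax_rate):
    return round(sum(prices) * (1 + tax_rate), 2)

# Unit level: a failure here points directly at calculate_total().
def test_calculate_total():
    assert calculate_total([10.00, 5.50], tax_rate=0.10) == 17.05

# End-to-end level: a failure here could be the server, the deployment,
# the test data, the browser, the driver, or the test itself.
def test_checkout_shows_total():
    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example/checkout")  # hypothetical URL
        total = driver.find_element(By.ID, "order-total").text
        assert total == "$17.05"
    finally:
        driver.quit()
```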

To answer our question, let’s imagine for a minute that you were responsible for designing and building a new plane. You obviously need to test that your new plane works. You build a plane by creating parts (units), putting these together into components, and then putting all the components together to build the (hopefully) working plane (system).

If you only focused on unit tests, like David mentioned in his Basecamp example, you could be pretty confident that each piece of the plane had been tested well and works correctly, but you wouldn’t be confident that it would fly!

If you only focused on end-to-end tests, you’d need to fly the plane to check that the individual units and components actually work (which is expensive and slow), and even then, if/when it crashed, you’d need to examine the black box to hopefully understand which unit or component didn’t work, just as we currently do when end-to-end tests fail.

But obviously we don’t need to choose just one. And that’s exactly what Airbus does when designing and building the new Airbus A350:

As with any new plane, the early design phases were riddled with uncertainty. Would the materials be light enough and strong enough? Would the components perform as Airbus desired? Would parts fit together? Would it fly the way simulations predicted? To produce a working aircraft, Airbus had to systematically eliminate those risks using a process it calls a “testing pyramid.” The fat end of the pyramid represents the beginning, when everything is unknown. By testing materials, then components, then systems, then the aircraft as a whole, ever-greater levels of complexity can be tamed. “The idea is to answer the big questions early and the little questions later,” says Stefan Schaffrath, Airbus’s vice president for media relations.

The answer, which has been the answer all along, is to have a balanced set of automated tests across all levels, with a disciplined approach: a larger number of smaller, specific automated unit/component tests, and a smaller number of larger, general end-to-end automated tests to ensure all the units and components work together. (My diagram below, with attribution.)

[Image: Automated Testing Pyramid]

Having just one level of tests, as the stories above show, doesn’t work (but if it did, I would rather have automated unit tests). Just like a diet of only chocolate doesn’t work, nor does a diet that deprives you of anything sweet or enjoyable (but if I had to choose, I would rather have a diet of only healthy food than one of only chocolate).

Now, if we could just convince Salesforce to be more like Airbus and not fly a complete plane (or 50,000 planes) to test everything every time they make a change, and stop David from continuing his anti-unit, pro-system-testing, anti-intellectual rampage, which will do more damage to our industry than it’s worth.

Introducing the software testing ice-cream cone (anti-pattern)

As previously explained, I like using the software testing pyramid as a visual way to represent where you should focus your testing effort, and I often switch between using a cloud or an Eye of Providence to represent the manual session-based tests at the top of the pyramid, which you should use to supplement and test your automated tests.

I often see organizations fall into the trap of creating ‘inverted’ pyramids of software testing, and only yesterday did a colleague point out to me that if you invert my pyramid with the cloud, you end up with an ice-cream cone! So, introducing the software testing ice-cream cone (anti-pattern)!

Yet another software testing pyramid

Fellow ThoughtWorker James Crisp recently wrote an interesting article about his take on an automated test pyramid.

Some of the terminology he used was interesting, which I believe led to some questioning comments and a follow-up article by another fellow ThoughtWorker, Dean Cornish, who stated that the pyramid “oversimplifies a complex problem of how many tests you need to reach a point of feeling satisfied about your test coverage”.

I believe that one of the most unclear areas of James’s pyramid is the use of the term Acceptance tests, which James equates to roughly 10% of the automated test suite. One commenter stated these should instead be called functional tests, but as James points out, aren’t all tests functional in nature? I would also argue that all tests are about acceptance (to different people), so I would rephrase the term to express what is being tested, which in his case is the GUI.

The other fundamental issue I see with James’s testing pyramid is that it is missing exploratory/session-based testing. The only mention of exploratory testing is when James states ‘if defects come to light from exploratory testing, then discover how they slipped through the testing net’, but I feel this could be better represented on the pyramid. Exploratory, or session-based, testing gives you confidence in the automated tests being developed and run; without it, an automated testing strategy is fundamentally flawed. That’s why I include it in my automated testing pyramid as the Eye of Providence (I originally got the ‘eye’ idea from another ThoughtWorker, Darren Smith).

Show me the Pyramid

Without further ado, here’s my automated test pyramid. It shows what the automated tests exercise: the GUI, APIs, integration points, components and units. I’ve put dotted lines between components, integration points and APIs, as these are similar and you may not need tests at every one of these levels.

Another way of looking at this is to consider the intention of the tests. Manual exploratory tests and automated GUI tests are business facing, in that they strive to answer the question: “are we building the right system?”. Unit, integration and component tests are technology facing, in that they strive to answer the question: “are we building the system right?”. So another version of the automated testing pyramid could simply plot these two styles of tests on the pyramid, showing that you’ll need more technology-facing than business-facing automated tests, as the business-facing tests are more difficult to maintain.

Summary

By removing the term ‘acceptance’ and showing what the automated tests exercise, I believe the first automated test pyramid shows a solid approach to automated testing. Acceptance tests and functional tests can sit anywhere in the pyramid, but you should limit your GUI tests, often by increasing your unit test coverage.

The second pyramid is another way to view the intention of the tests, but I believe both resolve most of the issues Dean has with James’s pyramid. Additionally, they both include manual session-based testing, a key ingredient in an automated test strategy that should be shown on the pyramid so it is not forgotten.

I welcome your feedback.