Yet another software testing pyramid

A fellow ThoughtWorker, James Crisp, recently wrote an interesting article about his take on an automated test pyramid.

Some of the terminology he used was interesting, which is what I believe led to some questioning comments and a follow-up article by another fellow ThoughtWorker, Dean Cornish, who stated the pyramid “oversimplifies a complex problem of how many tests you need to reach a point of feeling satisfied about your test coverage”.

I believe that one of the most unclear areas of James’s pyramid is the use of the term Acceptance tests, which James equates to roughly 10% of the automated test suite. One commenter stated these should instead be called functional tests, but as James points out, aren’t all tests functional in nature? I would also argue that all tests are about acceptance (to different people), so I would rephrase the term to express what is being tested, which in his case is the GUI.

The other fundamental issue I see with James’s testing pyramid is that it is missing exploratory/session-based testing. The only mention of exploratory testing is when James states ‘if defects come to light from exploratory testing, then discover how they slipped through the testing net’, but I feel this could be better represented on the pyramid. Exploratory, or session-based, testing builds confidence in the automated tests that are being developed and run. Without it, an automated testing strategy is fundamentally flawed. That’s why I include it in my automated testing pyramid as the Eye of Providence (I originally got the ‘eye’ idea from another ThoughtWorker, Darren Smith).

Show me the Pyramid

Without further ado, here’s my automated test pyramid. It shows what the automated tests exercise: the GUI, APIs, integration points, components and units. I’ve put dotted lines between components, integration points and APIs, as these are similar and you may not need to test all of them.
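To make the layers concrete, here is a minimal, hypothetical sketch (the names and the toy discount feature are invented for illustration, not taken from James’s or my pyramid) showing how the same behaviour can be checked at the unit level and again one layer up, at a component/API-style boundary:

```python
# Hypothetical illustration of two pyramid layers for a toy
# price-calculation feature; all names here are invented.

def apply_discount(price, percent):
    """The 'unit' under test: pure business logic, no dependencies."""
    return round(price * (1 - percent / 100.0), 2)

def price_endpoint(params):
    """A stand-in for an API/component layer that wraps the unit."""
    return {"total": apply_discount(params["price"], params["percent"])}

# Unit tests: many of these, fast and precise (base of the pyramid).
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(200.0, 50) == 100.0

# API/component test: fewer, exercising the boundary above the unit.
assert price_endpoint({"price": 100.0, "percent": 10}) == {"total": 90.0}
```

A GUI test would sit one layer higher still, driving a browser to fill in a price form; the point of the pyramid is that you want far fewer of those than of the two kinds shown here.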

Another way of looking at this is to consider the intention of the tests. Manual exploratory tests and automated GUI tests are business facing, in that they strive to answer the question: “are we building the right system?”. Unit, integration and component tests are technology facing, in that they strive to answer the question: “are we building the system right?”. So, another version of the automated testing pyramid could simply plot these two styles of tests on the pyramid, showing that you’ll need more technology-facing than business-facing automated tests, as the business-facing tests are more difficult to maintain.


By removing the term acceptance, and showing what the automated tests exercise, I believe the first automated test pyramid shows a solid approach to automated testing. Acceptance tests and functional tests can sit anywhere in the pyramid, but you should limit your GUI tests, often by increasing your unit test coverage.
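One common way to limit GUI tests is to extract the logic a form exercises so it can be covered cheaply at the unit level. A hedged sketch (the validator, its regex and the edge cases are all invented for this example):

```python
import re

# Hypothetical email validator extracted from a form handler, so the
# rule can be covered by cheap unit tests instead of many GUI tests.
# This simple regex is illustrative only, not a complete RFC check.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value):
    """Return True if value loosely resembles an email address."""
    return bool(EMAIL_RE.match(value))

# Dozens of edge cases are cheap and fast at the unit level...
assert is_valid_email("user@example.com")
assert not is_valid_email("no-at-sign")
assert not is_valid_email("two@@example.com")
# ...leaving a single GUI test to confirm the form wires up the validator.
```

The pyramid shape falls out naturally: one slow browser-driven check of the wiring, many fast checks of the rule itself.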

The second pyramid is another way to view the intention of the tests, but I believe both resolve most of the issues Dean has with James’s pyramid. Additionally, they both include manual session-based testing, a key ingredient in an automated test strategy that should be shown on the pyramid so it is not forgotten.

I welcome your feedback.

Author: Alister Scott

Alister is an Excellence Wrangler for Automattic.

11 thoughts on “Yet another software testing pyramid”

  1. My biggest fear with this type of pyramid — however instructive — is that it misrepresents the ‘effort’ required for the different type of tests with the throwaway concept of ‘more tests’ the nearer the base you are.

    While I agree ‘more tests’ can mean a greater number of actual examples, it does not mean more effort is required in the development and implementation of automation. And it does not mean that greater attention should necessarily be directed towards the base of the pyramid.

    In my experience, an hourglass is a better test-effort-shape to represent effort expended on automated testing, regardless of the number of examples. GUI testing is hard to write and maintain, and at the other end, unit tests require effort to extract units from their collaborators and test in isolation. API and integration tests often need less set-up and/or specialised code to perform.

    Taking the pyramid as it stands can lead to thin GUI testing as effort is moved toward the base simply to increase the number of tests.

    So the pyramid is useful as a graphic to show the mix of testing a team should aim for, but it is not helpful in showing the time and effort required for each category. It can mislead. And while I have postulated an hourglass as a possible alternative, the effort-shape is probably a function of the type of system under construction.


  2. Hey Alister, thanks for the post. I agree that GUI tests are a better description than acceptance tests (actually started calling them this – see comments section on my post). In some pyramids people have represented the exploratory testing as a cloud above the pyramid, but I like your eye!


  3. Hi Alister,

    Nice, informative article. Please find my review comments below (I’m a QA Automation engineer):

    1. Who owns which part of the pyramid? This has to be finalised. In my view, QA should own everything up to the automated GUI tests, and the remainder should/can be owned by the dev team.
    2. In my experience, management is rarely ready to invest in the bottom part of the pyramid, or the devs don’t have time for automated unit testing.

    Does the bottom portion really need to be automated at all? Say I have a full-fledged regression suite and I can automate 70%–80% of those test cases; I am pretty confident I can answer “are we building the right system?”

    Frankly, I don’t see any benefit in doing API/component/integration test automation, as we can catch those bugs at the GUI automation level as well.

    The only benefit I can see is that we can certify whether or not the code is written in an optimised way.



    1. I believe the point of doing integration/component tests is to provide full coverage of the component’s interface.

      End-to-end tests through the GUI are difficult to design for full coverage and are best used as a collaboration device: to make sure the team has built the right thing in the first place.

      In certain situations, end-to-end tests are impossible (e.g. Ariane!), and you must rely on component/integration tests.

      Not doing any integration tests would be like testing a bridge by driving lorries across it. It might work for the first few, but then what…?


  4. Great article. The business-facing tests are becoming more critical with the switch to agile — the product has to work as designed for the end-user for each release, which could happen, in some extreme cases, every week. If managed correctly, the business-facing tests will expand over time, taking a greater “share” of the pyramid, but the constant compression of dev cycles will require teams (both Dev & QA) to increasingly rely on the automated GUI tests to certify a given build.

    In our experience inside the Enterprise, unit testing is well understood and almost entirely owned by the development team, after which they “throw it over the fence” to QA. The QA activities focus more on the functional aspect (are we building the right system), which leads to a fair amount of friction in the traditional ‘Waterfall’ process where, invariably, some pieces are left to interpretation.

    As we see more teams migrate towards some flavor of Agile, we see this being less of a problem (part of the reason to make the switch!), but ownership of the testing function becomes the point of discussion and, at times, contention. Having a GUI automation test bed in place, and increasing that coverage over time, will make the switch from Waterfall to Agile more palatable.

