I only speak at one conference a year and this year that conference will be the first ever Australian Testbash in Sydney on October 19, 2018:
At WordPress.com we constantly deliver changes to our millions of customers – in the past month alone we released our React web client 563 times; over 18 releases every day. We don’t conduct any manual regression testing, and we only employ 5 software testers in a company of ~680 people with ~230 developers. So, how do we ensure our customers get a consistently great user experience with all this rapid change?
Our automated end-to-end (e2e) tests give us confidence to release small frequent changes constantly to all our different applications and platforms, knowing that our customers can continue to use our products to achieve their goals.
Alister will share how these automated tests are written and work, some of the benefits and challenges we’ve seen in implementing these tests, and what we have planned for future iterations.
Takeaways: How to effectively use automated end-to-end testing to ensure a consistent user experience in high frequency delivery environments
I learned who you were by watching your Google automation talk last year in 2015. Your presentations are really nice. Are you planning on giving any other presentations this year or next year?
My short answer is no.
My long answer is also no because I don’t actually enjoy giving presentations at all. I wrote about my battles with anxiety last year and whilst I am 90% better than I was, last year I committed to present three talks in less than two months which resulted in me having panic attacks about giving these talks. This wasn’t fair on my wife or children, whom I need to support on a day-to-day basis.
Each talk requires a huge amount of preparation and since my personality leans towards perfectionism I wanted to make sure each talk was as good as it could be, so I wrote every word of each talk and (unsuccessfully) tried to memorise these. This resulted in me delivering the talks partly reading what I’d prepared, which I wasn’t happy about as I was comparing myself to others who delivered their talks without notes.
The reason people give talks is that speech is an amazingly effective communications tool – probably the most so – yet it’s a drastically inefficient communications tool – each minute of a talk requires at least an hour of preparation. I much prefer written communication as I find confidence in writing, and I hope with frequent, thoughtful updates to my blog I can reach a wide audience and still be effective in spreading new ideas.
Firstly thanks for the compliment, you’re very kind.
When I came up with the topic I was working on a system where we were practicing continuous delivery by frequently doing production releases. As we began releasing more frequently, the business came to expect it, and so the reliability of our automated tests became more important. We wouldn’t release on a failed build since we were working on a high volume eCommerce site where a small bug could cause an outage costing a very large amount of revenue. We didn’t have a team of testers to fall back onto for any manual regression testing, so we were 100% dependent on our automated tests.
Even though we were clever about building testability into our system, we still had too many full-stack automated tests which would create non-deterministic results.
I believe everyone looks at the same thing slightly differently as we each have a unique lens that we see our world through and everyone’s lens has varying degrees of difference:
“Each of us tends to think we see things as they are, that we are objective. But this is not the case. We see the world, not as it is, but as we are—or, as we are conditioned to see it. When we open our mouths to describe what we see, we in effect describe ourselves, our perceptions, our paradigms.”
~ Stephen R. Covey
As people who were developing and maintaining tests, we were looking at our non-deterministic tests as the test’s fault. What we didn’t do was look through another lens to see that it could actually be the fault of our system as we had built it instead.
This aha! moment struck me when we released a bad build to production that had passed all automated QA by someone re-running our automated tests a number of times (to get them to pass).
We were blinded by perceived ‘test flakiness’: we refused to believe our problems were something else, so I thought it would be a good topic to present. From the feedback I received both at and after the event, it seems I am very much not alone.
Today’s conference began with some rather funny commentary shared by Yvette Nameth’s mother from yesterday’s talks. I was mentioned as the ‘flaky’ guy:
My main takeaway from the entire conference is that it seems we get way too caught up on complex solutions for our testing. We need to keep asking ourselves: “what’s the simplest thing that could possibly work?” If we have complex systems why do we need complex tests? We need to take each large complex problem we work on and break it down till we get something small and manageable and solve that problem. Rinse and repeat.
Flaky tests are the bugbear of any automated test engineer; as Alister says “insanity is running the same tests over and over again and getting different results”. Flaky tests cause no end of despair, but perhaps there’s no such thing as a flaky or non-flaky test, perhaps we need to look at this problem through a different lens. We should spend more time building more deterministic, more testable systems than spending time building resilient and persistent tests. Alister will share some examples of when test flakiness hid real problems underneath the system, and how it’s possible to solve test flakiness by building better systems.
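One common way flakiness hides a system problem is a test that papers over slow or non-deterministic behaviour with a fixed sleep. A minimal sketch of the alternative, polling for the exact state the test expects so it either proceeds immediately or fails loudly (the `wait_until` helper name and timings here are my own illustration, not from the talk):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    A deterministic alternative to a fixed `time.sleep()`: the test moves on
    as soon as the system reaches the expected state, and fails consistently
    (rather than intermittently) if it never does.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: an 'async' operation that completes after a short delay.
start = time.monotonic()
operation_done = lambda: time.monotonic() - start > 0.3

# Flaky style: time.sleep(0.25); assert operation_done()  -- passes only sometimes.
# Deterministic style: wait for the state itself.
assert wait_until(operation_done, timeout=2.0)
```

The deeper point of the talk still stands: a helper like this makes the test resilient, but the better fix is often making the system itself expose a reliable signal to wait on.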
If you would like to attend you can use the following code: SPEAKER-10-AS to get an extra 10% off the early bird price until 18th September.
New Talk Topic
My new talk is titled ‘The 10 Do’s and 500* Don’ts of Automated Acceptance Testing’
Automated acceptance tests/executable specifications are a key part of sustainable software delivery, but most teams struggle to implement these in an efficient, productive way without hindering velocity. Alister will share a few ways to move towards successful automated acceptance testing, and many traps of automated acceptance testing, so you achieve business confidence of every commit in a continuous delivery environment. *Note: talk may or may not include 500 don’ts.
If you’re a Simpsons fan like me, you may recognize the title from here:
The first ever CukeUp! Australia is being held in Sydney on November 19 and 20, 2015.
I have been selected to speak and my talk is titled ‘Establishing a Self-Sustaining Culture of Quality at Domino’s Digital’.
Just 12 months ago Domino’s had a dedicated manual testing team who performed testing during a dedicated testing phase at the end of each project. Not only did this substantially slow down projects, releases were big and introduced lots of risk despite having been independently tested. Fast forward to today, Domino’s Digital consists of multiple cross-functional teams who are wholly responsible and accountable for quality into and beyond production through regular releases: no testing team, no testing phases, no testing manager. Alister will share the journey of moving to a self-sustaining culture of quality and detail the cosmic benefits the business has received in increasing both quality and velocity across all digital delivery initiatives.
Early-bird tickets are available now. Hoping to see those from down under there.
I was lucky enough to make my first trans-Tasman journey to Auckland last week to attend the 2015 ANZTB Conference. It was an enjoyable conference with some memorable talks (I personally like single-stream events). Here are some favourites:
I loved the essence of this talk which was basically (in my own words) ‘take security testing off the pedestal’. Laura shared five simple tools and techniques to make security more accessible for developers and testers alike. One key takeaway for me was to focus on getting the language right: ‘security vulnerabilities hide behind acronyms, jargon and assumptions’. For example, most people understand the difference between authentication (providing identity) and authorization (access rights), but both these terms are commonly shortened to ‘auth’ which most people use interchangeably (and confusingly). A great talk.
Innovation through Collaboration – Wil McLellan
This was probably my favourite talk of the day, as it was a well-told story about building a collaborative co-working space called ‘EPIC’ for IT start-ups in Christchurch following the 2011 earthquake. The theme was how collaboration encourages innovation, and how even companies in competition benefit through collaboration. My key takeaway was how, by designing a space, you can encourage collaboration: for example, in EPIC there’s only a single kitchen for the whole building, and each tenancy doesn’t have its own water. So, if someone wants a drink or something to eat they need to visit a communal area. Doing this enough times means you start interacting with others in the building you wouldn’t normally do so in your day to day work.
Sarah is the Head of Accessibility Services at PwC in Sydney and she shared some good background information about why accessibility is important and some of the key resources to analyse/evaluate and improve accessibility of systems. Whilst I knew most of the resources she mentioned, I thought her talk was very well put together.
Well done to the team that organized the conference.
Auckland was a beautiful city BTW, here’s a couple of pics I took:
I was lucky enough to attend the Google Test Automation Conference (GTAC) at Google Kirkland in Washington last week. As usual, it was a very well run conference with an interesting mix of talks and attendees.
Whilst there wasn’t an official theme this year, I personally saw two themes emerge throughout the two days: dealing with flaky tests and running automated tests on real mobile devices.
There weren’t many talks that didn’t mention flaky automated tests (known as ‘flakes’) at some point. Whilst there seemed to be some suggestions for dealing with flaky tests (like Facebook running new tests x times to see if they fail, classifying them as flaky and assigning them to the owner to fix), there didn’t seem to be a lot of solutions for avoiding the creation of flaky tests in the first place, which I would have liked to see.
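The Facebook approach mentioned above (running a new test repeatedly and classifying it by how consistent its results are) can be sketched roughly like this; the function name, run count and the simulated intermittent test are my own illustrative assumptions:

```python
import random

def classify_new_test(test_fn, runs=10):
    """Run a candidate test `runs` times and bucket it by consistency:
    'pass' if every run passes, 'fail' if every run fails,
    'flaky' if the results differ between runs."""
    results = {bool(test_fn()) for _ in range(runs)}
    if results == {True}:
        return "pass"
    if results == {False}:
        return "fail"
    return "flaky"

# A stable test and a simulated intermittent one:
assert classify_new_test(lambda: True) == "pass"
rng = random.Random(42)
assert classify_new_test(lambda: rng.random() < 0.5) == "flaky"
```

A ‘flaky’ result would then be routed to the test’s owner to fix before the test is allowed into the main suite.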
Real Mobile Devices
The obsession with running mobile automated tests on real devices continued from last year’s conference with talks about mobile devices as a service. I personally think we’d be better off spending the time and effort on making more realistic mobile emulators that we can scale, rather than continuing the real device test obsession.
My key takeaway was that even highly innovative companies like Google, Facebook and Netflix still struggle balancing software quality and velocity. These companies don’t have a strong presence in Australia, and often the IT management of smaller companies here like to say things like “Google does x” or “Facebook does y”. The problem with this is they only know these companies from the outside. Ankit Mehta’s slides at the beginning of his keynote captured this perfectly and hence were my favourite slides of the conference:
At the last Brisbane Software Testers meetup I volunteered to do a 5 minute lightning talk. Since I’ve read a lot of books lately I thought I would share what I had read and some of the key snippets, and set myself a challenge of talking about 5 books using 5 slides in 5 minutes.
Unfortunately some of the other volunteers for lightning talks withdrew so I had a longer window and ended up talking way longer (including some bonus slides about Think Like A Freak).
I am keen to try this again using 5 books I have since read to see if it’s actually possible to communicate this amount of information. My slides are below and are also available in PDF format (accessible).
“The problem isn’t information overload, Clay Shirky famously said, it’s filter failure. Lately, though, I’m more worried about filter success. Increasingly my filters are being defined for me by systems that watch my behavior and suggest More Like This. More things to read, people to follow, songs to hear. These filters do a great job of hiding things that are dissimilar and surprising. But that’s the very definition of information! Formally it’s the one thing that’s not like the others, the one that surprises you.”
Our sophisticated community based filters have created echo chambers around the software testing profession.
“An echo chamber is a situation in which information, ideas, or beliefs are amplified or reinforced by transmission and repetition inside an “enclosed” system, often drowning out different or competing views.” ~ Wikipedia
I’ve seen a few echo chambers evolve:
The context driven testing echo chamber where the thoughts of a couple of the leaders are amplified and reinforced by the followers (eg. checking isn’t testing)
The broader software testing echo chamber where testers define themselves as testers and are only interested in hearing things from other testers (eg. developers are evil and can’t test)
The agile echo chamber where anything agile is good and anything waterfall is bad (eg. if you’re not doing continuous delivery you’re not agile)
So how do we break free of these echo chambers we’ve built using our sophisticated filters? We break those filters!
Jon has some great suggestions in his article (eg. dump all your regular news sources and view the world through a different lens for a week) and I have some specific to software testing:
attend a user group or meetup that isn’t about software testing – maybe a programming user group or one for business analysts: I attend programming user groups here in Brisbane;
learn to program, or manage a project, or write CSS;
attend a conference that isn’t about context driven testing: I’m attending two conferences this year, neither are context driven testing conferences (ANZTB Sydney and JSConf Melbourne);
follow people on twitter who you don’t agree with;
read blogs from people who you don’t agree with or have different approaches;
don’t immediately agree (or retweet, or ‘like’) something a ‘leader’ says until you validate it actually makes sense and you agree with it;
don’t be afraid to change your mind about something and publicize that you’ve changed your mind.
It’s about that time of year where I am starting to think about which testing conference I would like to attend next year. I use my professional development budget to attend one conference a year which can be almost anywhere in the world. I will propose a talk at my chosen conference; having it accepted would be a bonus. Here’s a short list of conferences I am considering for next year:
CAST 2014: August 11-13 NYC, USA – 1 day tutorials/2 days conference – one stream – bonus points for being in NYC – submissions close January 10, 2014
CITCON Oceania 2014: February 21-22 Auckland, NZ – 2 days unconference – multiple groups – been to one before which was good but I personally don’t like the unconference format with no set speakers
Let’s Test Oz 2014: September 15-17 Blue Mountains, Australia – 3 days conference – multiple streams – concerned it will be dominated by certain opinionated keynote speakers and not practical enough for my liking, location is an issue for me traveling from Queensland – submissions close January 15, 2014
Magma Conf 2014: June 4-6 Manzanillo, Colima, Mexico – 3 days conference – not exactly a testing conference but a web development conference with talks about testing – submissions close December 30, 2013
EuroSTAR 2014: November 24-27 Dublin, Ireland – 4 days conference/workshops – would be cool to visit Ireland – submissions close February 14, 2014
QCon Conferences 2014: London, NYC, San Francisco – Various Dates – more focussed on software development/agile but testing content
Selenium Conference 2014: Date TBC, Location TBC – usually 2 days conference + 1 day workshop – I attended in 2012 in SF and it was quite good but obviously just about WebDriver and not testing. Not sure if it will be in Europe in 2014 as it was in 2012