I recently published an article on the WordPress.com Developer’s Blog about how we run automated canary tests on pull requests to give us confidence to release frequent changes without breaking things. Feel free to check it out.
What is the difference between Explicit wait and Fluent wait?
I hadn’t heard of fluent waiting before, only explicit and implicit waiting.
From my post about Waiting in C# WebDriver:
Implicit, or implied, waiting involves setting a configuration timeout on the driver object so that it will automatically wait up to this amount of time before throwing a NoSuchElementException.
The benefit of implicit waiting is that you don’t need to write code in multiple places to wait as it does it automatically for you.
The downsides to implicit waiting include unnecessary waiting when doing negative existence assertions and having tests that are slow to fail when a true failure occurs (opposite of ‘fail fast’).
Explicit waiting involves putting explicit waiting code in your tests in areas where you know that it will take some time for an element to appear/disappear or change.
The most basic form of explicit waiting is putting a sleep statement in your WebDriver code. This should be avoided at all costs as it will always sleep and easily blow out test execution times.
WebDriver provides a WebDriverWait class which allows you to wait for an element in your code.
As for fluent waits, according to this page a fluent wait is a type of explicit wait that additionally lets you configure how frequently the condition is polled and which exceptions to ignore while polling. I don’t believe WebDriverJs supports fluent waits out of the box.
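To make the distinction concrete, here’s a minimal sketch of the fluent-wait idea as a generic polling helper in plain JavaScript. This isn’t WebDriver’s own implementation — the function name and options are hypothetical — but it shows the two things a fluent wait lets you tune that a basic explicit wait doesn’t: the polling interval and which exceptions get swallowed while polling.

```javascript
// A minimal fluent-style wait: polls a condition at a configurable
// interval until it returns a truthy value or the timeout elapses.
// Exceptions thrown by the condition are ignored, mirroring the
// ignoring() behaviour of Java's FluentWait.
function fluentWait( condition, { timeout = 5000, pollEvery = 250 } = {} ) {
	return new Promise( ( resolve, reject ) => {
		const deadline = Date.now() + timeout;
		const poll = () => {
			let result;
			try {
				result = condition();
			} catch ( e ) {
				result = undefined; // treat a throwing condition as "not yet"
			}
			if ( result ) {
				return resolve( result );
			}
			if ( Date.now() >= deadline ) {
				return reject( new Error( 'Timed out waiting for condition' ) );
			}
			setTimeout( poll, pollEvery );
		};
		poll();
	} );
}
```

In a real WebDriverJs test you’d reach for `driver.wait()` with a condition instead, but the mechanics underneath are essentially this loop.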
Howdy! First, thanks to Alister for asking me to guest-post on his blog. I’m always excited to talk about Docker and its potential to solve all the world’s problems 😉. He asked me to take on the following question from his AMA:
Alister, What are your thoughts on how containerization should fit into a great development and testing workflow? Have you got behind using Docker in your day to day? Thanks!
One of the oldest problems in software development and testing is that a developer writes code on their desktop, where everything works flawlessly, but when it’s shipped to the test or production environments it mysteriously breaks. Maybe their desktop was running a different version of a specific library, or they had unique file permissions enabled. When used correctly, Docker eliminates the “works on my machine” concern. By packaging the runtime environment configuration along with the source code you ensure that the application executes the same in every instance. And just as important, changes to that configuration are logged and can be easily reverted.
Another concern is how to test specific behavior that only gets executed when your application is running in the production environment. By putting all of your application and test servers in individual containers, you can easily connect them on their own private network and just tell the application that it’s running in production. Obviously this is application unique, and building a copy of the production environment presents its own challenges, but at the core it’s definitely doable within a Docker infrastructure. The important thing is that a network of containers is isolated, so you can do things like set machine hostnames to exactly match their production counterparts without worrying about conflicts.
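As a sketch of that isolated-network idea, a docker-compose file can put the application and the test runner on a private network and even alias the app container to its production hostname. All the names here are hypothetical placeholders:

```yaml
# Hypothetical sketch: app and test runner on a private network, with the
# app container answering to a production-style hostname. Because the
# network is internal, the alias can't conflict with anything outside it.
version: "3"
services:
  app:
    build: .
    networks:
      testnet:
        aliases:
          - www.example-production-host.com
  tests:
    build: ./e2e-tests
    depends_on:
      - app
    networks:
      - testnet
networks:
  testnet:
    internal: true
```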
It all just boils down to consistency: if you can ensure that your developers write code against the same configuration as your test environment, which is the same configuration as production, everybody wins.
The second part of the question is a little trickier – Have you got behind using Docker in your day to day?
In some ways the answer is yes. The main application that we test is the Calypso front-end to WordPress.com, which itself is built and runs inside Docker. Our core end-to-end tests also run in a custom Docker container on CircleCI 2.0, so we can define exactly what version of NodeJS and Chrome we’re testing with. However, some of our other test sets (such as certain WooCommerce and Jetpack tests) still run using the default CircleCI container. And as far as I know nobody on our team actually uses that container for developing tests locally; we typically just run directly on our laptops. The CI server is the first place that actually executes via Docker.
The other piece that’s missing for a full Dockerization of our test setup is that our Canary tests run against the custom https://calypso.live setup (https://github.com/Automattic/calypso-live-branches) rather than building and running Calypso side by side in a container. It’s something I’d like to pursue updating at some point, but in the interim the existing setup works great…and most importantly it’s already built and working, allowing us to focus on other things.
So the long story short here is that containerization is a great technology, and has a ton of potential for solving problems in the dev/test world. We’re just scratching the surface of that potential at Automattic, but even the limited use we’re giving it right now is beneficial and I plan on continuing to dig deeper.
I didn’t use to be a fan of CSS selectors for automated web tests, but I’ve changed my mind.
The reason I wasn’t a fan of CSS selectors is that historically they weren’t really encouraged by Watir: the Watir API was designed to find elements by type and attribute, so it would look something like:
browser.div(:class => 'highlighted')
where the same CSS selector would look like:

div.highlighted

Since WebDriver doesn’t use the same element type/attribute API and instead just uses findElement with a By selector, CSS selectors make the most sense: they’re powerful and self-contained.
The best thing about using CSS selectors, in my opinion, is that Chrome Dev Tools allows you to search the DOM using a CSS selector (and XPath selectors, but please don’t use XPath) via Command/Control & F:
So you can ‘test’ your CSS in a live browser window before deciding to use it in your WebDriver test.
The downside of using CSS selectors is that they’re a bit less self-explanatory than explicitly using element types and attributes. But CSS selectors are pretty powerful, especially pseudo-classes like nth-of-type. The only thing you can’t really do in CSS is select by text value, which you probably shouldn’t be doing anyway: text values are more likely to change (since copy is often changed by your business) and can be localised, in which case your tests won’t run across different cultures.
The most powerful usage of CSS selectors is where you add your own data attributes to elements in your application and use these to select elements: straightforward, efficient and less brittle than other approaches. For example:
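The original example isn’t shown here, but a hypothetical version of the pattern might look like this — the attribute and element names are invented for illustration:

```javascript
// Hypothetical example: the application adds a data attribute to an
// element, e.g. <button data-e2e-button="save">, and the test builds a
// CSS selector from it instead of relying on classes or page structure.
function dataAttributeSelector( attribute, value ) {
	return `[data-${ attribute }="${ value }"]`;
}

const saveButton = dataAttributeSelector( 'e2e-button', 'save' );
// saveButton === '[data-e2e-button="save"]'
```

In a WebDriver test that selector string would then be handed to By.css(). Because the attribute exists purely for testing, it’s unlikely to be changed by styling or copy updates.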
How do you identify elements in your WebDriver automated tests?
I read a LinkedIn blog post from 2015 by Keqiu Hu from LinkedIn about flaky UI tests. He explains how they fixed their flaky UI tests for the LinkedIn app. Among other things they implemented what they called the “Trunk Guardian service” which runs automated UI tests on the last known good build twice and if the test passes on the first run but fails on the second it is marked as ‘flaky’ and disabled and the owner is notified to fix it or get rid of it. I wondered what your thoughts were on such a “Trunk Guardian service” – if the culture / process was in place to solve the other issues that create flaky tests, could such a thing be worth the effort to implement? Article: Test Stability – How We Make UI Tests Stable
We actually don’t run any tests in Internet Explorer any more since these weren’t finding any browser specific bugs (we do exploratory testing in Internet Explorer instead).
this.driver.executeScript( 'return arguments[0].click();', webElement );
I hope this solution helps!
What is the difference between iterative and incremental models?
Fortunately I have written an entire post on this exact topic here.
My conclusion was:
We can’t build anything without iterating to some degree: no code is written perfectly the second that it is typed or committed. Even if it looks like a company is incrementally building their software: they’re iteratively building it inside.
We can’t release anything without incrementing to some degree: no matter how small a release is, it’s still an incremental change over the last release. Some increments are bigger because they’ve already been internally iterated upon more, some are smaller as they’re less developed and will evolve over time.
So, we develop software iteratively and release incrementally in various sizes over time.
Data migration testing from one application to another: which way is best and easiest to test? The new application will be Salesforce.
This is quite a generic question but I’ll try to answer it the best I can. I usually look at data migrations as three separate steps:
Extract data from the old system
Transform the data to fit the new system
Load the data into the new system
I would test that each step has worked correctly by verifying the data starting in the deepest parts of the system (database tables), moving up into APIs and finally into any user interfaces. I know some CRMs such as Salesforce don’t allow access to database tables so sometimes you can only use APIs or user interfaces to ‘spot check’ data.
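The steps above can be sketched in code. Everything here is hypothetical — the schemas, field names and transform are placeholders — but it shows the two cheapest checks on a transform step: that no records were dropped or duplicated, and that key fields survived the mapping into the new (Salesforce-style) schema.

```javascript
// Hypothetical data extracted from the old system.
const extracted = [
	{ cust_id: 1, cust_name: 'Acme' },
	{ cust_id: 2, cust_name: 'Globex' },
];

// Hypothetical transform from the old schema to the new one.
function transform( rows ) {
	return rows.map( ( r ) => ( { Id: r.cust_id, Name: r.cust_name } ) );
}

const loaded = transform( extracted );

// Count check: nothing dropped or duplicated in transit.
console.assert( loaded.length === extracted.length );

// Field-level spot check on each record.
extracted.forEach( ( row, i ) => {
	console.assert( loaded[ i ].Id === row.cust_id );
	console.assert( loaded[ i ].Name === row.cust_name );
} );
```

The same shape of check works at whichever layer you can reach — database tables where you have access, otherwise APIs or the user interface.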
I hope this helps you Nathan.
I’ve never personally found the return on investment of getting automated tests running across Internet Explorer and Safari to be worthwhile as in my experience this took more effort than the bugs it found. So I personally stick to running our full e2e test suite in our most used browser (Chrome) and supplementing this with exploratory testing on all other browsers.
Having said that, the reason you won’t be able to use Docker containers for these purposes is that they’re Linux-based, and Internet Explorer requires Microsoft Windows while Safari requires Apple macOS to run. To use these browsers with your existing automated tests you can sign up to an on-demand browser service like SauceLabs and use the remote WebDriver protocol to execute your tests.
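As a rough sketch, pointing tests at such a service mostly comes down to the capabilities you request from the remote session. The capability keys below follow common Selenium conventions, but the exact keys and values a given service expects vary — treat this as an illustration, not a recipe:

```javascript
// Hypothetical desired capabilities for a remote Internet Explorer
// session on an on-demand browser service. Exact keys and accepted
// values depend on the service and Selenium version in use.
const capabilities = {
	browserName: 'internet explorer',
	platform: 'Windows 10',
};

// With WebDriverJs this object would typically be handed to a Builder
// pointed at the service's remote hub URL, along the lines of:
// new Builder().usingServer( remoteHubUrl ).withCapabilities( capabilities )
```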
I wondered if you could tell me what sets exceptional QA testers apart? Not just personality or work ethic traits, but specific skills and programming knowledge that will be very valuable to a team?
I think exceptional QA testers, as explained recently, aren’t people who are exceptional at just one thing, eg. testing, but good at lots of things.
So an exceptional QA tester, in my opinion, will typically have (at least good) skills in the following things:
- Skills in human exploratory testing: an exceptional QA tester has the ability to effectively find the most important bugs fast. Whilst this skill can be developed, I have found it’s mostly a mindset.
- Skills in developing automated tests: an exceptional QA tester will have the programming skills needed to develop automated tests, and I would recommend these typically match the programming language(s) that programmers in your organization use: for example, skills in automated testing in .NET if your company primarily uses Microsoft .NET. That said, someone with strong programming skills in one language (eg. ruby) should be able to transfer these skills to another language (eg. python).
- Knowledge/Experience in your business domain: an exceptional QA tester will fully understand your business domain and keep this context in mind whilst testing a product and raising issues. An exceptional tester is always testing your system – just as I am testing WordPress.com publishing this post.
- An empathetic mindset: we design and develop software for real people and real life. An exceptional QA tester will test with this in mind.
iOS11 is out and it supports native in-device screen recording! I’m a huge fan of attaching screen recordings to bug reports (they capture flow!), whether that be animated gifs or videos, and until now doing this on iOS required plugging your iDevice into your Mac via USB and using QuickTime to record the screen. No longer: it’s now easy to do via Control Centre!
Here’s a screen recording (no audio), recorded on my iPhone no less, that shows you how 😊
Protip: force touch the screen recording button to enable/disable microphone for narration
Protip #2: use the in-built video editor to trim the start and end
Have you set up (inexpensive) infrastructure to store data collected in your automated tests? We are currently using Selenium WebDriver in Java to automate our tests and IntelliJ as our IDE. We create data from scratch for each and every test case :(
I’m a little confused by the question and whether it’s about test data (data that is needed by the automated tests) or test results data (insights into the results of our automated tests), so I’ll answer both 😀
Infrastructure to manage test data
Our tests run on specific test accounts and sites on production databases. Since our tests are end-to-end in nature, we try to give them as few dependencies as possible on existing data. Often an end-to-end scenario will involve creating, viewing, editing and deleting something; if we don’t do all of this via our UI we can use hooks that either use services or database jobs to clean up the data. I explained this in more detail previously.
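The clean-up-hook idea can be sketched in a few lines. The names here are hypothetical, and `deleteViaAPI` stands in for whatever service call or database job does the real deletion:

```javascript
// Each test registers the data it creates; a teardown hook then deletes
// it all afterwards, whether the test passed or failed.
const createdItems = [];

function trackForCleanup( item ) {
	createdItems.push( item );
	return item;
}

async function cleanUp( deleteViaAPI ) {
	// Delete in reverse order of creation until nothing is left tracked.
	while ( createdItems.length > 0 ) {
		await deleteViaAPI( createdItems.pop() );
	}
}
```

In a Mocha suite this kind of `cleanUp` would typically be called from an afterEach() hook so each test starts from a known state.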
Infrastructure to manage test results data
We use CircleCI for automated end-to-end tests. We have a number of projects that run different types of end-to-end tests from the same code repository for different purposes (canary tests, visual-diff tests, full regression tests for example).
We generate x-unit test results (from Mocha/Magellan) which CircleCI uses to provide insights into our test results such as this:
You can also drill down into slowest tests and most failed tests etc.
Since all our tests are open source you can view these build insights yourself!
We’re pretty happy with the insights we get from CircleCI at the moment, so we don’t currently see a need to develop anything ourselves.
I’ve been working with angular a while now but I have to admit, the testing side throws me. Every time I start to tackle it, I find myself distracted from the test I want to write by all of the things I need to mock. Sometimes it feels like I need to build an entire mock framework to test one feature. Maybe I’m doing it wrong. Any tips appreciated, whether practical or even just the right headspace for approaching it :)
I haven’t worked with angular applications directly but I’ve worked on React applications and I’m guessing the approach to unit testing these will be similar.
I would start as simple as possible with the smallest test that would possibly work. If you find that you need an entire mock framework to test a feature it sounds like your components may need to be broken down further into smaller components as you have too many dependencies that need to be mocked. If you have smaller components that only require a single dependency then these components should be easier to test as the dependencies will be easier to mock.
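To illustrate the idea with an invented example: a small unit that takes its one dependency as an argument is trivial to test, because the only thing you need to mock is that single dependency.

```javascript
// A small unit with a single injected dependency. The names here are
// hypothetical; the point is that fetchUser is the only collaborator.
function makeGreeter( fetchUser ) {
	return async function greet( id ) {
		const user = await fetchUser( id );
		return `Hello, ${ user.name }!`;
	};
}

// In a test, the real data-fetching dependency is replaced by a stub:
const greet = makeGreeter( async () => ( { name: 'Ada' } ) );
// greet( 1 ) resolves to 'Hello, Ada!'
```

A component with five collaborators needs five mocks; five components with one collaborator each need one mock per test, which is usually the difference between testing feeling easy and feeling like framework-building.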
If retrofitting unit tests into your app is still too difficult you could try to automate some key end-to-end scenarios using Protractor, but I’d discourage you from going overboard since these can quickly get out of hand. You may still benefit from having a few of these tests, even once you get unit testing of small components working well, to ensure the small components work well together.
I hope this helps you Ben 😊
Automattic uses interesting and fun names for different roles (QA being excellence wrangler). Are there Business Analyst roles in Automattic? If so, what is it called?
At Automattic we differentiate between a Job Title and a Role. My (current) job title is indeed Excellence Wrangler and my role is Code Wrangling. Anyone is free to change their job title to anything they like at any time so we have some fun ones, whereas the roles are pretty static.
It’s been a while since I wrote something on this blog as you could say my life has been a bit complicated.
I was recently having trouble with a complex method in our WordPress.com e2e test page objects, so I used my skills as a developer and wrote a change to our user interface which adds a data attribute to the HTML element.
This meant our page object method immediately went from this:
2. Go out on a regular speaking circuit tour which is going to require multiple days of travel multiple times a year. That’s too disruptive to our own work schedule and to your fellow teammates.
For this exact reason, without knowing it, I created my own personal rule of only committing to doing one conference presentation per year.
These are my slides from my presentation on distributed testing at the ANZTB Test 2017 Conference in Wellington, New Zealand, last Friday 5th May 2017.
“Most of us are anxious pretty much all the time – but frequently imagine that other people aren’t. It’s time to admit the truth. Anxiety is just a basic fact about being human.”
~ Alain de Botton
We are all human, we are all worried and anxious pretty much all the time, people just don’t tell you that they are. We wear masks and we hide it well.
But why do we test like we’re not anxious or worried? Why don’t we test for real life?