I read a 2015 blog post by Keqiu Hu of LinkedIn about flaky UI tests, which explains how they fixed the flaky UI tests for the LinkedIn app. Among other things, they implemented what they call a “Trunk Guardian service”, which runs automated UI tests on the last known good build twice: if a test passes on the first run but fails on the second, it is marked as ‘flaky’ and disabled, and its owner is notified to fix it or get rid of it. I wondered what your thoughts were on such a “Trunk Guardian service” – if the culture/process was in place to solve the other issues that create flaky tests, could such a thing be worth the effort to implement? Article: Test Stability – How We Make UI Tests Stable
Continue reading “AMA: Trunk Guardian Service?”
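For reference, the detection rule the question describes boils down to something like this (a hedged sketch in WebDriverJs-style JavaScript, not LinkedIn’s actual implementation; runTest is a hypothetical function that executes one UI test against the last known good build and resolves to true on a pass):

// A sketch of the Trunk Guardian detection rule as described above —
// not LinkedIn's actual implementation.
async function classifyTest(runTest) {
  const firstRun = await runTest();
  const secondRun = await runTest();
  if (firstRun && !secondRun) {
    return 'flaky'; // disable the test and notify its owner
  }
  return firstRun && secondRun ? 'stable' : 'failing';
}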
We actually don’t run any tests in Internet Explorer any more since these weren’t finding any browser-specific bugs (we do exploratory testing in Internet Explorer instead).
// click via JavaScript: the element passed in is available inside the script as arguments[0]
this.driver.executeScript( 'arguments[0].click();', webElement );
I hope this solution helps!
I’ve never personally found the return on investment of running automated tests across Internet Explorer and Safari to be worthwhile: in my experience the effort involved outweighed the bugs it found. So I personally stick to running our full e2e test suite in our most used browser (Chrome) and supplementing this with exploratory testing in all other browsers.
That said, the reason you won’t be able to use Docker containers for these purposes is that containers run Linux, whereas Internet Explorer requires Microsoft Windows and Safari requires Apple macOS to run. To use these browsers with your existing automated tests you can sign up to an on-demand browser service like SauceLabs and use the remote WebDriver protocol to execute your tests.
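As a rough sketch using WebDriverJs (assuming the selenium-webdriver Node package and SauceLabs’ hosted hub; the hub URL and capability names vary by provider and version):

const { Builder } = require('selenium-webdriver');

// Credentials come from your SauceLabs account, read here from environment variables.
const driver = new Builder()
  .usingServer(`https://${process.env.SAUCE_USERNAME}:${process.env.SAUCE_ACCESS_KEY}@ondemand.saucelabs.com/wd/hub`)
  .withCapabilities({ browserName: 'internet explorer', platform: 'Windows 10' })
  .build();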
Have you set up (inexpensive) infrastructure to store data collected in your automated tests? We are currently using Selenium WebDriver in Java to automate our tests, with IntelliJ as our IDE. We create data from scratch for each and every test case :(
I’m a little confused by the question and whether it’s about test data (data that is needed by the automated tests) or test results data (insights into the results of our automated tests), so I’ll answer both 😀
Infrastructure to manage test data
Our tests run against specific test accounts and sites on production databases. Since our tests are end-to-end in nature, we try to give them as few dependencies as possible on existing data. Often an end-to-end scenario will involve creating, viewing, editing and deleting something. If we don’t do all of this via our UI, we can use hooks that call services or database jobs to clean up the data. I explained this in more detail previously.
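As an illustration, a cleanup hook might look something like this (a sketch assuming Mocha; the service client and its deleteSite method are hypothetical):

let createdSiteId; // captured by the test when it creates data

after(async function () {
  // If the UI flow didn't delete the data it created,
  // remove it via a service call so reruns start clean.
  if (createdSiteId) {
    await testDataService.deleteSite(createdSiteId);
  }
});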
Infrastructure to manage test results data
We use CircleCI for our automated end-to-end tests. We have a number of projects that run different types of end-to-end tests from the same code repository for different purposes (for example canary tests, visual-diff tests and full regression tests).
We generate xUnit-format test results (from Mocha/Magellan) which CircleCI uses to provide insights into our test results (a sketch of generating these from Mocha follows below).
You can also drill down into the slowest tests, most failed tests, etc.
Since all our tests are open source, you can view these build insights yourself!
We’re pretty happy with the insights we get from CircleCI at the moment, so we don’t currently see a need to develop anything ourselves.
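If you’d like to produce similar xUnit results from Mocha yourself, here’s a minimal programmatic sketch (the test file path is hypothetical, and reporter options can vary across Mocha versions):

const Mocha = require('mocha');

// Run Mocha with its built-in xunit reporter, writing results
// to a file that CI services like CircleCI can ingest.
const mocha = new Mocha({
  reporter: 'xunit',
  reporterOptions: { output: 'reports/mocha/test-results.xml' },
});
mocha.addFile('./test/e2e/homepage.js');
mocha.run((failures) => { process.exitCode = failures ? 1 : 0; });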
Chromedriver/Chrome is pretty great at executing WebDriverJs scripts without taking away your focus (so you can execute them in the background whilst doing other things). The one exception I found was selecting items in a select list, which didn’t behave as expected (one possible workaround is sketched after the link below).
Continue reading “WebDriverJs Select Lists in Chrome”
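One workaround for select lists (a sketch, and not necessarily the fix from the full post) is to click the desired option element directly rather than interacting with the dropdown, which avoids the focus-dependent behaviour:

const { By } = require('selenium-webdriver');

// Click the <option> whose visible text matches, bypassing the dropdown UI.
async function selectByVisibleText(driver, selectId, text) {
  const option = await driver.findElement(
    By.xpath(`//select[@id='${selectId}']/option[. = '${text}']`)
  );
  await option.click();
}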
Feature toggles aren’t just for production code: they’re also a powerful technique for changing the behaviour of your automated end-to-end tests without changing code (a small sketch follows the link below).
Continue reading “Feature Toggles for Automated e2e Tests”
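As a simple illustration of the idea (a sketch; the toggle name, page objects and environment-variable mechanism are assumptions, not necessarily how the full post implements it):

// Read a toggle from the environment so test behaviour can change
// per run without a code change.
const useNewSignupFlow = process.env.NEW_SIGNUP_FLOW === 'true';

it('signs up a new user', async function () {
  const signupPage = useNewSignupFlow ? newSignupPage : legacySignupPage; // hypothetical page objects
  await signupPage.signUp(this.driver, 'test-user@example.com');
});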
If you saw my talk at GTAC last year, ‘your tests aren’t flaky’, then you’re probably aware of my view that flaky tests are actually indicative of broader application/system problems that we should address, rather than just making our tests less flaky.
But what if you work with a system whose reliability you can’t feasibly improve? Say you’ve got a domains page that should show a list of available domains, but because it relies on an external third-party service it sometimes just shows nothing (one pragmatic option is sketched after the link below)?
Continue reading “Prioritising Test Reliability over Perfection”
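One pragmatic option (a sketch, not necessarily the approach the full post lands on) is to retry the unreliable step a bounded number of times before failing the test:

const { By } = require('selenium-webdriver');

// Reload the page until the third-party-backed list renders,
// up to a bounded number of attempts. The URL and selector are hypothetical.
async function loadDomainsWithRetry(driver, url, attempts = 3) {
  for (let i = 0; i < attempts; i += 1) {
    await driver.get(url);
    const results = await driver.findElements(By.css('.domain-result'));
    if (results.length > 0) {
      return results;
    }
  }
  throw new Error(`No domains listed after ${attempts} attempts`);
}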