AMA: Separate Repository for e2e Tests?

Liam asks…

“I did enjoy reading the article about e2e tests on WordPress. I noted that the e2e tests are in a separate repo.

My question is: what is the workflow to make sure new changes don’t break the e2e tests on a pull request?

For example, if a developer works on some changes, they need to change the e2e tests first and make sure everything passes; however, the environment on the pull request might not be stable, and developers can overwrite each other’s changes.”

My response…

Thanks for your question Liam.

There are reasons for, and benefits to, having the WordPress.com e2e tests in a separate repository:

  1. The e2e tests cover the entire WordPress.com experience, so they test things that span different repositories (for example, our Calypso user interface and our services/API), and keeping them in the user interface repository wouldn’t really represent the breadth of their scope;
  2. Making changes to the e2e tests is easier in a separate repository, since we don’t have to deploy e2e PRs that don’t contain functional changes (we deploy every merge to our master branch immediately, dozens of times per day).

The obvious downsides are:

  1. How do we make sure e2e tests know about incoming AB tests?
  2. How do we couple new changes to updates in the e2e test repository?

For incoming AB tests, we make sure our e2e tests know about the change by creating a matching PR in the e2e test repository that overrides the AB tests during our test runs.
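As a rough, hypothetical sketch of what such an override might look like (this isn’t our actual code – the test name and variant below are made up):

// e2e AB test overrides – force known variants so test runs are deterministic
module.exports = {
	signupButtonColor: 'control', // hypothetical AB test name and forced variant
};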

If someone updates the AB tests in Calypso they’re politely reminded to update the e2e tests:

[Screenshot: example prompt]

To make sure the e2e tests stay up to date, we automatically run two of the most critical e2e tests (out of about 40 total, in three browsers) when a PR is ready to be reviewed. These can fail and indicate that a change is necessary to the e2e tests (or that something is broken!).

There’s also a label we can add to any PR that runs the entire set of e2e tests against that PR running live and reports the results back to it:

[Screenshot: e2e test results against a Calypso PR]

If changes are required to the e2e tests, someone can create an e2e PR with the exact same branch name, which will be run against the feature changes before they are merged. This means PRs can be coupled and tested together, but merged separately.

As for the second part of your question, which I understand to be about conflicting changes: one of our key work philosophies is “merge early and merge often”, so we make sure PRs are short-lived and merged quickly to minimize the chance of conflicts. Conflicts still happen occasionally; we just deal with them as they come up.

Whilst there have been some downsides to the separate repositories, all in all the benefits continue to outweigh them. We constantly reassess this, and at any point we can easily merge the repositories if need be.

npm ci

I recently discovered npm ci, which you can use instead of npm install when you’re running a Node.js project on a continuous integration (CI) system and want to install your npm dependencies. It does this in a more lightweight, more CI-friendly way.

If you use npm test to run your tests, this can be shortened to npm t (much like npm i is short for npm install), and therefore you can run npm cit to install dependencies and run tests in CI.
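As a rough illustration (assuming your package.json defines a test script), the two forms line up like this:

# the long way: clean-install dependencies from package-lock.json, then run the tests
npm ci && npm test

# the short way
npm cit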

Running Mocha e2e Tests in Parallel

I recently highlighted the importance of e2e test design. Once you have well designed e2e tests you can start running them in parallel.

There are a couple of approaches to scaling your tests out to be run in parallel:

  1. Running the tests in multiple machine processes;
  2. Running the tests across multiple (virtual) machines;

These aren’t mutually exclusive: you can run tests in parallel processes across multiple virtual machines. We do this at Automattic – each test run happens across two virtual machines (Docker containers on CircleCI), each of which runs six processes for either desktop or mobile responsive browser sizes, depending on the container.

I have found that running tests in multiple processes gives the best bang for buck, since you don’t need additional machines (most build systems charge based on container utilisation) and you’ll also benefit from parallel runs on a local machine.

We write our e2e tests in WebDriverJs and use Mocha for our test framework. We currently use a tool called Magellan to run our e2e tests in separate processes (workers); however, the tool is losing Mocha support, so we need to look at alternatives.

I have found that mocha-parallel-tests seems like the best replacement – it’s a drop-in replacement runner for Mocha tests which splits test specification files across the processes available on the machine you’re executing your tests on. You can also specify a maximum number of processes with the command line argument --max-parallel.
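A minimal invocation might look like this (the specs directory and process limit are placeholders for your own values):

mocha-parallel-tests specs/ --max-parallel 4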

There is another parallel test runner for mocha: mocha.parallel – but this requires updating all your specs to use a different API to allow the parallelisation to work. I like mocha-parallel-tests as it just works.

I’ve updated my webdriver-js-demo project to use mocha-parallel-tests – feel free to check it out here.

Running e2e Tests in Parallel

One of the best ways to speed up your end-to-end (e2e) tests is to start running them in parallel.

The main issue I see that prevents teams from fully using parallelism for their e2e tests is a lack of test design. Without adequately designed e2e tests – ones designed to be run in parallel – parallelism can introduce non-deterministic and inconsistent test results, leading to frustration and low confidence in the e2e tests.

This is often the case when teams go about directly converting manual test cases into automated e2e tests – instead of approaching e2e test automation with a specific end-to-end design focus.

Say you had a manual test case for inviting someone to view your WordPress blog:

  1. Enter the email address of the person you’d like to follow your site
  2. Check the list shows a pending invite
  3. Check your email inbox shows a pending invite
  4. Open the email, follow the link and sign up for a new account

When you’re manually testing this in sequence it’s easy to follow – but as soon as you start running this in parallel, across different builds, and alongside other tests, things will most likely start failing.

Why? The test isn’t specific enough. You may have multiple pending invites – so how do you know which one is which? You can only invite someone once – so how do you generate new invite emails? You may receive multiple emails at any one time – which one is which? And more.

With appropriate e2e test design you can write the e2e test to be consistent when run regardless of parallelism:

  1. Email addresses are uniquely generated for each test run using a GUID and either a test email API like Mailosaur or Gmail plus addressing (see the sketch after this list); and
  2. The pending email list has a data attribute on each invite clearly identifying which email the invite is for and this is used to verify pending email status; and
  3. The inbox is filtered by the expected GUID, and only those emails are used. Etc.
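To make the first point concrete, here’s a minimal sketch (not our actual helper – the inbox and function names are made up) of generating a unique, traceable email address per test run using Node’s crypto module and Gmail plus addressing:

const crypto = require( 'crypto' );

// Generates an address like e2etestinbox+9f1c…@gmail.com – the hex GUID lets us
// later filter the shared inbox down to just this test run's emails.
function newInviteEmailAddress( inbox = 'e2etestinbox' ) {
	const guid = crypto.randomBytes( 16 ).toString( 'hex' );
	return `${ inbox }+${ guid }@gmail.com`;
}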

Once you have good e2e test design in place you’re able to look at how to speed up e2e test execution using parallelism. I’ll cover how to do this in my next blog post.

Bailing with Mocha e2e Tests

At Automattic we use Mocha to write our end-to-end (e2e) automated tests in JavaScript/Node.js. One issue with Mocha is that it’s not really suited to writing e2e tests where one test step relies on a previous test step – for example, our sign-up process is a series of pages/steps, each of which relies on the previous step passing. Mocha is primarily a unit testing tool, and since it’s bad practice for one unit test to depend on another, Mocha doesn’t support this.

A simplified example of this is shown in my webdriver-js-demo project:

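// Setup not shown in this excerpt – assume something like the following at the top of the spec:
//   import webdriver from 'selenium-webdriver';
//   import assert from 'assert';
//   import RalphSaysPage from '../lib/ralph-says-page'; // path is a guess
//   const mochaTimeoutMS = 30000; // illustrative value
//   let driver, page;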
describe( 'Ralph Says', function() {
	this.timeout( mochaTimeoutMS );

	before( async function() {
		const builder = new webdriver.Builder().withCapabilities( webdriver.Capabilities.chrome() );
		driver = await builder.build();
	} );

	it( 'Visit the page', async function() {
		page = await RalphSaysPage.Visit( driver );
	} );

	it( 'shows a quote container', async function() {
		assert( await page.quoteContainerPresent(), 'Quote container not displayed' );
	} );

	it( 'shows a non-empty quote', async function() {
		assert.notEqual( await page.quoteTextDisplayed(), '', 'Quote is empty' );
	} );

	afterEach( async function() { await driver.manage().deleteAllCookies(); } );

	after( async function() { await driver.quit(); } );
} );

Continue reading “Bailing with Mocha e2e Tests”

Using async/await with WebDriverJs

We’ve been using WebDriverJs for a number of years, along with the control flow promise manager it offers, which makes writing WebDriverJs commands in a synchronous, blocking way a bit easier, particularly when using promises.

The problem with the promise manager is that its magic is hard to understand: sometimes it just works, and other times it’s very confusing and not very predictable. It was also harder for the Selenium project to develop and support, so it’s being deprecated later this year.

Fortunately, recent versions of Node.js support asynchronous functions and the await keyword, which makes WebDriverJs tests much easier to write and understand.

I’ve recently updated my WebDriverJs demo project to use async/await, so I’ll use that project for examples to explain what’s involved.

WebDriverJs would allow you to write consecutive statements like this without worrying about waiting for each statement to finish – note the use of test.it instead of the usual mocha it function:

test.it( 'can wait for an element to appear', function() {
	const page = new WebDriverJsDemoPage( driver, true );
	page.waitForChildElementToAppear();
	page.childElementPresent().then( ( present ) => {
		assert( present, 'The child element is not present' );
	} );
} );

When you were waiting on the return value from a promise, you could use a .then function to access the value, as shown above.

This is quite a simple example, and it could get complicated pretty quickly.

Since the promise manager is being removed, we need to update our tests so they continue to execute in the correct order. We can make the test function asynchronous by adding the async prefix, remove the test. prefix on the it block, and add await statements every time we expect a statement to finish before continuing:

it( 'can wait for an element to appear', async function() {
	const page = new WebDriverJsDemoPage( driver, true );
	await page.waitForChildElementToAppear();
	assert( await page.childElementPresent(), 'The child element is not present' );
} );

I personally find this much easier to read and understand – there’s less ‘magic’ – but the one bit that stands out is visiting the page and creating the new page object. The code in the constructor for this page, and other pages, is asynchronous as well; however, we can’t have an async constructor!

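// Imports not shown in this excerpt – assume something like:
//   import config from 'config';
//   import { until } from 'selenium-webdriver';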
export default class BasePage {
	constructor( driver, expectedElementSelector, visit = false, url = null ) {
		this.explicitWaitMS = config.get( 'explicitWaitMS' );
		this.driver = driver;
		this.expectedElementSelector = expectedElementSelector;
		this.url = url;

		if ( visit ) this.driver.get( this.url );

		this.driver.wait( until.elementLocated( this.expectedElementSelector ), this.explicitWaitMS );
	}
}

The way we get around this is to define a static async function that acts as a constructor and returns our new page object for us.

So, our BasePage now looks like:

export default class BasePage {
	constructor( driver, expectedElementSelector, url = null ) {
		this.explicitWaitMS = config.get( 'explicitWaitMS' );
		this.driver = driver;
		this.expectedElementSelector = expectedElementSelector;
		this.url = url;
	}

	static async Expect( driver ) {
		const page = new this( driver );
		await page.driver.wait( until.elementLocated( page.expectedElementSelector ), page.explicitWaitMS );
		return page;
	}

	static async Visit( driver, url ) {
		const page = new this( driver, url );
		if ( ! page.url ) {
			throw new Error( `URL is required to visit the ${ this.name }` ); // this.name is the page class name
		}
		await page.driver.get( page.url );
		await page.driver.wait( until.elementLocated( page.expectedElementSelector ), page.explicitWaitMS );
		return page;
	}
}

In our Expect and Visit functions we call new this( driver ), which creates an instance of the child class – which suits our purposes. So our spec now looks like:

it( 'can wait for an element to appear', async function() {
	const page = await WebDriverJsDemoPage.Visit( driver );
	await page.waitForChildElementToAppear();
	assert( await page.childElementPresent(), 'The child element is not present' );
} );

which means we can await visiting and creating our page objects, and we don’t have any asynchronous code in our page constructors. Nice.

Once we’re ready to stop using the promise manager, we can set SELENIUM_PROMISE_MANAGER to 0 and it won’t be used any more.
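For example, when running the tests from the command line, that might look like this (assuming your tests run via npm test):

SELENIUM_PROMISE_MANAGER=0 npm test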

Summary

The promise manager is being removed from WebDriverJs, but using await in async functions is a much nicer solution anyway, so now is the time to make the move – what are you awaiting for? 😊

Check out the full demo code at https://github.com/alisterscott/webdriver-js-demo

Executing JS in IE11 using WebDriverJs

We write our e2e tests in JavaScript running on Node.js which allows us to use newer JavaScript/ECMAScript features like template literals.

We have a subset of our e2e tests – mainly signing up as a new customer – which we run a few times a day against Internet Explorer 11: our lowest supported IE version.

I recently added a function that sets a cookie to set the currency for a customer:

setCurrencyForPayments( currency ) {
  const setCookieCode = function( currencyValue ) {
    window.document.cookie = `landingpage_currency=${ currencyValue };domain=.wordpress.com`;
  };
  return this.driver.executeScript( setCookieCode, currency );
}

This code works perfectly when executing against Chrome or Firefox, but when it came to executing against IE11 I would see the following (rather unhelpful) error:

Uncaught JavascriptError: JavaScript error (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 69 milliseconds

I couldn’t work out what was causing this so I decided to take a break. On my break I realised that WebDriverJs is trying to execute a new JavaScript feature (template literals) against an older browser that doesn’t support it! Eureka!

So all I had to do was update our code to:

setCurrencyForPayments( currency ) {
  const setCookieCode = function( currencyValue ) {
    window.document.cookie = 'landingpage_currency=' + currencyValue + ';domain=.wordpress.com';
  };
  return this.driver.executeScript( setCookieCode, currency );
}

and all our tests were happy against IE11 again 😊

Having not found much about this error or its cause, I’m hoping this blog post can help someone else out if they encounter this issue too.

Creating a skills-matrix for t-shaped testers

I believe the expression “jack of all trades, master of none” is a misnomer, as I’ve mentioned previously. Being good at two or more complementary skills is better than being excellent at just one, in my opinion.

But what about being excellent at one skill, and still being good at two or more? Why can’t we be both?

Jason Yip describes a T-shaped person and the benefits that having t-shaped people on teams brings:

A T-shaped person is capable in many things and expert in, at least, one.
As opposed to an expert in one thing (I-shaped) or a “jack of all trades, master of none” generalist, a “t-shaped person” is an expert in at least one thing but also somewhat capable in many other things. An alternate phrase for “t-shaped” is “generalizing specialist”.

[Image by Jason Yip]

Ideally we’d like to have a team of t-shaped testers in Flow Patrol at Automattic. But how do we get to this end goal?

I recently embarked on an exercise to measure and benchmark our skills and do just this with our team. Here are the steps we took.

Step One – Devise Desired Team Skills

The first thing we did was come up with a list of skills that we have in the team and would like to have in the team. These can be ‘hard’ skills like specific programming languages and ‘soft’ skills like triaging bugs. In a standard co-located team this would be as easy as conducting a brainstorming session and using affinity grouping to discover these skills. In a distributed environment, I wrote a blog post to my team’s channel and had individual members comment with a list of skills they thought appropriate, and then I did the grouping and came up with a draft list of skills and groups.

Step Two – Self-assess against a team skills matrix

Once I had a final list of skills and groups (see below for the full list), I put together a matrix (in a Google Spreadsheet) that listed team members on the x-axis and the skills on the y-axis, and came up with a skill level rating. Our internal systems use a three-level scale (Newbie, Comfortable, Expert) which we didn’t think was broad enough, so we decided upon five levels:

1. Limited
2. Basic
3. Good
4. Strong
5. Expert


[Screenshot: Team Skills Matrix]

I hadn’t seen Jason Yip’s visual representation at that point in time, otherwise I might have used something like it, as it has five similar levels:

[Image by Jason Yip]

Step Three – Publish results and cross-skill

Once we had the self-assessments done, we could publish the data within our organisation and use the benchmark to cross-skill people in the team. In a co-located environment this could involve pair programming; in a distributed one it could involve mentoring and reviewing other team members’ work.

Have you done a skills matrix for your team? How did you do it? What did you discover?


Full List of Skills and Skill Groups for Flow Patrol at Automattic

Automattic Product Knowledge
WordPress Core
WordPress.com Simple Sites
WordPress.com Atomic Sites
Jetpack
Woocommerce
Simplenote
Mobile Apps
Human Software Testing
Flow Mapping
Bug Triage & Prioritization
Exploratory Testing (pre-release)
Dogfooding
Cross-browser Cross-device Testing
Facilitating Beta/Community Testing
Facilitating User Testing
Usability Testing
Accessibility Testing
Automated Testing
Automated End-to-end Browser Testing
Automated API/Integration Testing
Automated Unit Testing
Automated Visual Regression Testing
Android Automated Testing
iOS Automated Testing
Programming Languages
JavaScript
PHP
Shell Scripting
Objective C
Swift
Android/Kotlin
Testing Tools/Frameworks
Mocha
WebDriverJS
Git/Github
CircleCI
TravisCI
Team City (CI)
Mailosaur
Applitools
VIP Go
Docker
Other
i18n Testing
Performance Testing
Security Testing
User advocacy – empathy and compassion
Mentoring/onboarding
Project Management
Product Management
Product Development 
Calypso
Jetpack
WP.com API PHP
Woocommerce
iOS App
Android App


AMA: JavaScript in Test Automation

Max asks…

First of all, thanks for sharing all your insights on this blog. I regularly try to come back to your blog and it has helped me grow as a QA Engineer quite a lot.
I wanted to ask your opinion on a seemingly new generation of test tools for the web that are written in JavaScript to help deal with all the asynchronicity in modern front-end frameworks. Our company is in the process of redesigning our website and it seems that it just gets harder and harder to deal with all the JavaScript in test automation. I recently started looking into some other tools like cypress.io, but as of now they still seem quite immature.
Could you help me and point me in the right direction?

My response…

I think there are two parts to the JavaScript asynchronicity issue, which are not necessarily related.

Modern JavaScript front-end web frameworks (like React and Angular) are designed to be asynchronous and fast, and this can cause issues when trying to write automated tests against them. I don’t believe writing automated tests in an asynchronous way actually makes the tests easier to write, or more maintainable or robust; it’s just a result of writing the tests in the same way the web frameworks work.

You can write synchronous tests (like using Watir/Ruby) against asynchronous web interfaces, you just need to use the waiting/polling mechanisms (or write/extend your own) – the same as you need to do in asynchronous test automation tools.

We chose WebDriverJs for automated end-to-end testing of our React application as it was the official WebDriver for JavaScript project and seemed like a good choice at the time. I somewhat regret that decision now, as using a synchronous third-party implementation like webdriver.io seems like an easier way to write and maintain tests.

I have tried to use cypress.io, but the way it controls sites (through proxies) has (current) limitations, like not working with iframes or across domains, which are deal-breakers for our end-to-end testing needs at present.

If you don’t need to write your end-to-end tests in JavaScript, I’d say avoid it and stick to a non-asynchronous programming language.

I’m glad I’ve been able to help you grow over the years.

AMA: hiding/changing IP addresses using Watir

Mudit Bhardwaj asks…

Hey! In order to test the security of a website, I’m trying to create hundreds of accounts. However, after a certain limit, there is always an error which prevents me from going further.
How can I hide/change my ip address during each iteration with watir?

My response…

I can understand why there would be this limitation in place as this activity seems suspicious from a systems perspective.

Watir can’t change the IP address. You can use an anonymous browser profile and/or delete cookies for different account creation runs but this probably won’t help.

I’d speak to your development/devops/systems team about whitelisting the IP address(es) you are using for this purpose.

Puppeteer for Automated e2e Chrome Testing in Node

I recently noticed the new Google Chrome project Puppeteer:

Puppeteer is a Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. It can also be configured to use full (non-headless) Chrome or Chromium.

As someone who only runs WebDriver tests in Google Chrome anyway, I find this a promising project that bypasses WebDriver to give full programmatic control of Google Chrome, including for automated end-to-end (e2e) tests.

The thing I really love about this is that there’s no ChromeDriver dependency: installing the library installs Chromium by default, which can be controlled headlessly with zero config and no other dependencies.

You can even develop scripts using this playground.

I set up a (very basic) demo project that uses Mocha + Puppeteer and it runs on CircleCI with zero config. Awesome.
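To give a feel for the API, here’s a minimal sketch (the URL and selector are just placeholders) of launching headless Chromium with Puppeteer and checking a page:

const puppeteer = require( 'puppeteer' );

( async () => {
	// Launches the bundled Chromium headlessly – no ChromeDriver needed
	const browser = await puppeteer.launch();
	const page = await browser.newPage();
	await page.goto( 'https://example.com/' );

	// Wait for a (placeholder) element to appear, then read its text
	await page.waitForSelector( 'h1' );
	const heading = await page.$eval( 'h1', ( el ) => el.textContent );
	console.log( `Page heading: ${ heading }` );

	await browser.close();
} )();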

AMA: Clicking a non-visible element in Watir

Mike asks…

Is there any way to make Watir click a link/button that is not visible? I wanted to switch from Capybara/CapybaraWebKit (which allows clicking non-visible elements) but I am stuck since Watir always times out on the click attempt.

My response…

The easiest way to do this is just to execute a click in JavaScript on the object itself.

In Watir, this looks something like:

browser.execute_script( "return arguments[0].click();", browser.link( :id => 'blah' ) )

Hope this helps!

AMA: automated unit vs component tests

Jason asks…

Hi Alister, love your blog and the content. I have matured my knowledge in test automation and, without even meaning to, created a very similar test automation pyramid to the one you derived. From it, though, I have a difficult time when trying to educate the development team on the nuances between their unit-level automated tests and component automated tests. How would you go about differentiating between the two? Thanks for your time! – JH

My response…

My understanding is that unit and component testing are similar but differ in their focus. For example, say I was building a table; it would consist of many parts, or units:

1 x tabletop
4 x leg brackets
4 x table legs
8 x bolts

Unit testing would be testing each individual part (or unit) to make sure it is good quality but ignoring anything it connects to or requires.

Component testing would be broader: testing whether the table legs work with the leg brackets as leg components, and whether the bolts work with the tabletop.

Finally, testing that the table fits together as a whole I would call system testing, and testing how it looks in a room or what it’s like to use I would call end-to-end or user acceptance testing.

When I was developing a Minesweeper game I wrote unit tests for the smallest units (eg. cells) and then component tests for groupings of cells (fields) and system tests for the game itself (interacting with fields).
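As a rough sketch of the distinction (the Cell and Field classes below are minimal, hypothetical stand-ins, not my actual Minesweeper code), the two kinds of Mocha test might look like:

const assert = require( 'assert' );

// Minimal, hypothetical implementations just for illustration
class Cell {
	constructor( hasMine = false ) {
		this.hasMine = hasMine;
		this.revealed = false;
	}
	reveal() {
		this.revealed = true;
	}
}

class Field {
	constructor( cells ) {
		this.cells = cells; // 2D array of Cell units
	}
	adjacentMineCount( x, y ) {
		let count = 0;
		for ( let dx = -1; dx <= 1; dx++ ) {
			for ( let dy = -1; dy <= 1; dy++ ) {
				if ( dx === 0 && dy === 0 ) continue;
				const row = this.cells[ y + dy ];
				if ( row && row[ x + dx ] && row[ x + dx ].hasMine ) count++;
			}
		}
		return count;
	}
}

// Unit test: a single cell (unit) in isolation
describe( 'Cell', function() {
	it( 'is not revealed until reveal() is called', function() {
		const cell = new Cell();
		assert( ! cell.revealed );
		cell.reveal();
		assert( cell.revealed );
	} );
} );

// Component test: cells working together as a field (component)
describe( 'Field', function() {
	it( 'counts the mines adjacent to a cell', function() {
		const field = new Field( [
			[ new Cell( true ), new Cell() ],
			[ new Cell(), new Cell() ],
		] );
		assert.strictEqual( field.adjacentMineCount( 1, 1 ), 1 );
	} );
} );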

The reason to do component testing is that it’s more realistic than unit testing, so it’s likely to find problems where units interact. The downsides are that it takes more time to execute and it can be harder to isolate problems when they occur.

I hope this helps.

→ How Canaries Help Us Merge Good Pull Requests

I recently published an article on the WordPress.com Developer’s Blog about how we run automated canary tests on pull requests to give us confidence to release frequent changes without breaking things. Feel free to check it out.

AMA: Difference between explicit and fluent wait

Anonymous asks…

What is the difference between Explicit wait and Fluent wait?

My response…

I hadn’t heard of fluent waiting before, only explicit and implicit waiting.

From my post about Waiting in C# WebDriver:

Implicit Waiting

Implicit, or implied waiting involves setting a configuration timeout on the driver object where it will automatically wait up to this amount of time before throwing a NoSuchElementException.

The benefit of implicit waiting is that you don’t need to write code in multiple places to wait as it does it automatically for you.

The downsides to implicit waiting include unnecessary waiting when doing negative existence assertions and having tests that are slow to fail when a true failure occurs (opposite of ‘fail fast’).

Explicit Waiting

Explicit waiting involves putting explicit waiting code in your tests in areas where you know that it will take some time for an element to appear/disappear or change.

The most basic form of explicit waiting is putting a sleep statement in your WebDriver code. This should be avoided at all costs as it will always sleep and easily blow out test execution times.

WebDriver provides a WebDriverWait class which allows you to wait for an element in your code.
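In WebDriverJs (rather than the C# of the original post), an explicit wait looks something like this – the selector and timeout are placeholders, and this would sit inside an async test function:

const { By, until } = require( 'selenium-webdriver' );

// Poll until the element is located, or fail after the 20 second timeout
await driver.wait( until.elementLocated( By.css( '.confirmation-message' ) ), 20000 );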

As for fluent waits, according to this page it’s a type of explicit wait with more limited conditions on it. I don’t believe WebDriverJs supports fluent waits.

AMA: Dealing with Docker

Howdy! First, thanks to Alister for asking me to guest-post on his blog. I’m always excited to talk about Docker and its potential to solve all the world’s problems 😉.  He asked me to take on the following question from his AMA:

Alister, What are your thoughts on how containerization should fit into a great development and testing workflow? Have you got behind using Docker in your day to day? Thanks!

One of the oldest problems in software development and testing is that a developer writes code on their desktop, where everything works flawlessly, but when it’s shipped to the test or production environments it mysteriously breaks. Maybe their desktop was running a different version of a specific library, or they had unique file permissions enabled. When used correctly, Docker eliminates the “works on my machine” concern. By packaging the runtime environment configuration along with the source code you ensure that the application executes the same in every instance. And just as important, changes to that configuration are logged and can be easily reverted.

Another concern is how to test specific behavior that only gets executed when your application is running in the production environment. By putting all of your application and test servers in individual containers, you can easily connect them on their own private network and just tell the application that it’s running in production. Obviously this is application unique, and building a copy of the production environment presents its own challenges, but at the core it’s definitely doable within a Docker infrastructure. The important thing is that a network of containers is isolated, so you can do things like set machine hostnames to exactly match their production counterparts without worrying about conflicts.
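As a tiny illustration of that isolation (the network, hostname and image names below are made up):

# create an isolated network for this test run
docker network create e2e-net

# give the app container the same hostname as its production counterpart –
# it only resolves inside e2e-net, so there's no conflict with the real thing
docker run --network e2e-net --hostname app.internal.example.com my-app-image
docker run --network e2e-net my-e2e-tests-image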

It all just boils down to consistency… if you can ensure that your developer is writing code against the same configuration as your test environment, which is the same configuration as production – everybody wins.

The second part of the question is a little trickier – Have you got behind using Docker in your day to day?

In some ways the answer is yes. The main application that we test is the Calypso front-end to WordPress.com, which itself is built and runs inside Docker. Our core end-to-end tests also run in a custom Docker container on CircleCI 2.0, so we can define exactly which versions of NodeJS and Chrome we’re using to test with. However, some of our other test sets (such as certain WooCommerce and Jetpack tests) still run using the default CircleCI container. And as far as I know nobody on our team actually uses that container for developing tests locally; we typically just run directly on our laptops. The CI server is the first place that actually executes via Docker.

The other piece that’s missing for a full Dockerization of our test setup is that our Canary tests run against the custom https://calypso.live setup (https://github.com/Automattic/calypso-live-branches) rather than directly building/running Calypso side by side in a container. It’s something I’d like to pursue updating at some point, but in the interim the existing setup works great… and most importantly it’s already built and working, allowing us to focus on other things.

So the long story short here is that containerization is a great technology, and has a ton of potential for solving problems in the dev/test world. We’re just scratching the surface of that potential at Automattic, but even the limited use we’re giving it right now is beneficial and I plan on continuing to dig deeper.

AMA: Moving automated tests from Java to JavaScript

Anonymous asks…

I am currently using a BDD framework with Cucumber, Selenium and Java for automating a web application. I used a page factory to store the objects and use them in Java methods. I want to replace the Java code with JavaScript, using something like Mocha or webdriverio. Could you share your thoughts on this? Can I still use a page factory to maintain objects and use them in JS files?

My response…

What’s the reasoning for moving to JavaScript from Java? Despite having similar names, the two have very little else in common (Car is to Carpet as Java is to JavaScript).

I wouldn’t move for moving’s sake, since I see no benefit in writing BDD-style web tests in JavaScript. If anything, automated e2e tests are much harder to write in JavaScript/Node because everything is asynchronous, so you have to deal with promises etc., which is much harder than just using Java (or Ruby).

Aside: I still dream of writing e2e tests in Ruby: it’s just so pleasant. But our new user interface is written extensively in JavaScript (React) so it makes sense from a sustainability point of view to use JS over Ruby.


Why you should use CSS selectors for your WebDriver tests

I didn’t used to be a fan of CSS selectors for automated web tests, but I changed my mind.

The reason I didn’t use to be a fan of CSS selectors is that historically they weren’t really encouraged by Watir, since the Watir API was designed to find elements by type and attribute, so it would look something like:

browser.div(:class => 'highlighted')

where the same CSS selector would look like:

div.highlighted

Since WebDriver doesn’t use the same element type/attribute API and just uses findElement with a By selector, CSS selectors make the most sense since they’re powerful and self-contained.

The best thing about using CSS selectors, in my opinion, is that Chrome Dev Tools allows you to search the DOM using a CSS selector (or an XPath selector, but please don’t use XPath) using Command/Control + F:

[Screenshot: using CSS selectors to find elements in Chrome Dev Tools]

So you can ‘test’ your CSS in a live browser window before deciding to use it in your WebDriver test.

The downside of using CSS selectors is that they’re a bit less self-explanatory than explicitly using by.className or by.id.

But CSS selectors are pretty powerful – especially pseudo-selectors like nth-of-type – and I’ve found the only thing you can’t really do in CSS is select by text value, which you probably shouldn’t be doing anyway: text values are more likely to change (since they’re copy, often changed by your business) and can be localised, in which case your tests won’t run across different cultures.

The most powerful usage of CSS selectors is where you add your own data attributes to elements in your application and use these to select elements: straightforward, efficient and less brittle than other approaches. For example:

a[data-e2e-value="free"]
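Used from a WebDriverJs test (inside an async test function – the click is just an example action), that selector might look like:

const { By } = require( 'selenium-webdriver' );

// Find the element by its dedicated e2e data attribute and click it
const freePlanLink = await driver.findElement( By.css( 'a[data-e2e-value="free"]' ) );
await freePlanLink.click();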

How do you identify elements in your WebDriver automated tests?

AMA: Trunk Guardian Service?

Sue asks…

I read a LinkedIn blog post from 2015 by Keqiu Hu from LinkedIn about flaky UI tests. He explains how they fixed their flaky UI tests for the LinkedIn app. Among other things they implemented what they called the “Trunk Guardian service” which runs automated UI tests on the last known good build twice and if the test passes on the first run but fails on the second it is marked as ‘flaky’ and disabled and the owner is notified to fix it or get rid of it. I wondered what your thoughts were on such a “Trunk Guardian service” – if the culture / process was in place to solve the other issues that create flaky tests, could such a thing be worth the effort to implement? Article: Test Stability – How We Make UI Tests Stable

My response…

Continue reading “AMA: Trunk Guardian Service?”

AMA: IE11 Button Clicking in Selenium

Anthony asks…

I have coded to click buttons on IE11/Win7, but the latest version of Selenium IE doesn’t click the buttons correctly most of the time. Most of the time, it clicks one button below. I thought it might be loading time so I added some waiting, but it still clicks one button below, or sometimes two buttons below. I googled this and found several postings saying Selenium IE doesn’t click buttons well. Now I have moved to FF but I am still wondering why IE is not accurate. I know a lot of Selenium test developers in the field, but they are either having the same issue or they know a workaround. What do you think of this issue on IE11? Are you aware of this issue? FYI, the buttons are not regular HTML tags. The menu system with clickable tags is created by JavaScript. Thank you!

My response…

We actually don’t run any tests in Internet Explorer any more since these weren’t finding any browser specific bugs (we do exploratory testing in Internet Explorer instead).

But I have heard of problems generally with the IEDriver tool. If you’re working on a JavaScript-generated app, I think the best thing for you to do, instead of using a native click in Selenium, is to execute a JavaScript click event. The exact syntax will depend on which language you’re using Selenium in, but it should look something like this:

this.driver.executeScript( 'return arguments[0].click();', webElement );

I hope this solution helps!