Running Mocha e2e Tests in Parallel

I recently highlighted the importance of e2e test design. Once you have well-designed e2e tests, you can start running them in parallel.

There are a couple of approaches to scaling your tests out to be run in parallel:

  1. Running the tests in multiple machine processes
  2. Running the tests across multiple (virtual) machines

These aren’t mutually exclusive: you can run tests in parallel processes across multiple virtual machines. We do this at Automattic – each test run happens across two virtual machines (Docker containers on CircleCI), each of which runs six processes for either desktop or mobile responsive browser sizes, depending on the container.

I have found that running tests in multiple processes gives the best bang for your buck, since you don’t need additional machines (most build systems charge based on container utilisation) and you’ll also benefit from parallel runs on a local machine.

We write our e2e tests in WebDriverJs and use Mocha as our test framework. We currently use a tool called Magellan to run our e2e tests in separate processes (workers); however, that tool is losing Mocha support, so we need to look at alternatives.

I have found that mocha-parallel-tests seems like the best replacement. It’s a drop-in replacement runner for Mocha tests which splits test specification files across the processes available on the machine you’re executing your tests on. You can also specify a maximum number of processes with the command line argument --max-parallel.
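For example, running every spec in a folder with at most four parallel processes would look something like this (a sketch – the specs path and process count are assumptions, and the remaining flags mirror Mocha’s own CLI):

mocha-parallel-tests specs/ --max-parallel 4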

There is another parallel test runner for Mocha: mocha.parallel – but this requires updating all your specs to use a different API for the parallelisation to work. I like mocha-parallel-tests as it just works.

I’ve updated my webdriver-js-demo project to use mocha-parallel-tests – feel free to check it out.

Bailing with Mocha e2e Tests

At Automattic we use Mocha to write our end-to-end (e2e) automated tests in JavaScript/Node.js. One issue with Mocha is that it’s not really suited to writing e2e tests where one test step relies on a previous test step – for example, our sign-up process is a series of pages/steps, each of which relies on the previous step passing. Mocha is primarily a unit testing tool, and it’s bad practice for one unit test to depend on another, which is why Mocha doesn’t support this.

A more simplified example of this is shown in my webdriver-js-demo project:

import assert from 'assert';
import webdriver from 'selenium-webdriver';
import config from 'config';
import RalphSaysPage from '../lib/ralph-says-page.js';

let driver, page;

const mochaTimeoutMS = config.get( 'mochaTimeoutMS' );

describe( 'Ralph Says', function() {
	this.timeout( mochaTimeoutMS );

	before( async function() {
		const builder = new webdriver.Builder().withCapabilities( webdriver.Capabilities.chrome() );
		driver = await builder.build();
	} );

	// every subsequent step relies on this one having passed
	it( 'Visit the page', async function() {
		page = await RalphSaysPage.Visit( driver );
	} );

	it( 'shows a quote container', async function() {
		assert( await page.quoteContainerPresent(), 'Quote container not displayed' );
	} );

	it( 'shows a non-empty quote', async function() {
		assert.notEqual( await page.quoteTextDisplayed(), '', 'Quote is empty' );
	} );

	afterEach( async function() { await driver.manage().deleteAllCookies(); } );

	after( async function() { await driver.quit(); } );
} );
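
Mocha does ship a bail option that stops the whole run at the first failure – either via the --bail command line flag, or per suite. A minimal sketch of the suite-level form (the suite and step names here are made up):

describe( 'Sign Up', function() {
	this.bail( true ); // abort the run as soon as any step in this suite fails

	it( 'enters user details', async function() { /* step 1 */ } );

	it( 'chooses a plan', async function() { /* step 2 – never runs if step 1 failed */ } );
} );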


Handling JavaScript alerts when leaving a page with WebDriver

You’ve most probably seen the sometimes-useful-but-often-annoying browser alerts when navigating away from a page: the JavaScript onbeforeunload alert.

How do we deal with these using WebDriver?
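
One way to handle them (a sketch using WebDriverJs – the URLs here are placeholders) is to switch to the alert and accept it if one appears:

// navigate away from a page that registers an onbeforeunload handler
await driver.get( 'https://example.com/page-with-unsaved-changes' );
await driver.get( 'https://example.com/next-page' );

try {
	const alert = await driver.switchTo().alert(); // throws if no alert is open
	await alert.accept(); // or alert.dismiss() to stay on the page
} catch ( e ) {
	// no alert appeared – nothing to handle
}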


AMA: Mocha this.timeout Duplication

Alex C asks…

I notice that the “this.timeout” code is duplicated across the before and the test suite. This seems mentally uncomfortable to me, and the logical question is: “Why duplicate code that is in a before method?” Right now, the only explanation I can think of is “because it’s just a workaround to how the JavaScript binding does things.”

import assert from 'assert';
import webdriver from 'selenium-webdriver';
import test from 'selenium-webdriver/testing';
import config from 'config';
import RalphSaysPage from '../lib/ralph-says-page.js';

let driver;

const mochaTimeoutMS = config.get( 'mochaTimeoutMS' );

test.before( function() {
	this.timeout( mochaTimeoutMS );
	driver = new webdriver.Builder().withCapabilities( webdriver.Capabilities.chrome() ).build();
} );

test.describe( 'Ralph Says', function() {
	this.timeout( mochaTimeoutMS );

	test.it( 'shows a quote container', function() {
		var page = new RalphSaysPage( driver, true );
		page.quoteContainerPresent().then( function( present ) {
			assert.equal( present, true, 'Quote container not displayed' );
		} );
	} );

	test.it( 'shows a non-empty quote', function() {
		var page = new RalphSaysPage( driver, true );
		page.quoteTextDisplayed().then( function( text ) {
			assert.notEqual( text, '', 'Quote is empty' );
		} );
	} );
} );

test.afterEach( () => driver.manage().deleteAllCookies() );

test.after( () => driver.quit() );

My response…

Mocha has a low default timeout (2000ms) as it wasn’t really written to be used for e2e testing. In the way I’ve written the above code, the four test. blocks don’t share any instance state, so setting this.timeout in one will not affect the others, which is why I wrote it that way.

With a few months’ experience behind me, I can see it’s easy to rewrite the above tests to avoid this duplication (and use fewer lines of code):

import assert from 'assert';
import webdriver from 'selenium-webdriver';
import test from 'selenium-webdriver/testing';
import config from 'config';
import RalphSaysPage from '../lib/ralph-says-page.js';

let driver;

const mochaTimeoutMS = config.get( 'mochaTimeoutMS' );

test.describe( 'Ralph Says', function() {
	this.timeout( mochaTimeoutMS );

	test.before( function() {
		driver = new webdriver.Builder().withCapabilities( webdriver.Capabilities.chrome() ).build();
	} );

	test.it( 'shows a quote container', function() {
		var page = new RalphSaysPage( driver, true );
		page.quoteContainerPresent().then( function( present ) {
			assert.equal( present, true, 'Quote container not displayed' );
		} );
	} );

	test.it( 'shows a non-empty quote', function() {
		var page = new RalphSaysPage( driver, true );
		page.quoteTextDisplayed().then( function( text ) {
			assert.notEqual( text, '', 'Quote is empty' );
		} );
	} );

	test.afterEach( () => driver.manage().deleteAllCookies() );

	test.after( () => driver.quit() );
} );

But you would still have to duplicate this across different test files, even though the value itself comes from config. I’ve since realised Mocha allows you to globally override this timeout using test/mocha.opts.
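
For example, a test/mocha.opts containing just the line below (the value here is illustrative) applies the timeout to every spec file, with no this.timeout calls needed:

--timeout 30000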

AMA: reusing the same browser for different automated tests

Ankitha asks…

Let’s say we have two test files inside a specs folder:
1) test1.js – has 3 tests
2) test2.js – has 3 tests

When I do npm test, a separate browser is opened for running test1.js and a second browser is opened up for running test2.js, whereas I want only one browser to be opened: run the test1.js tests, close the browser, then open the browser again and run test2.js. I would like to see the tests running sequentially. How can this be achieved?

My response…

You may have seen that we recently open-sourced our automated e2e tests for WordPress.com.

One of the huge number of benefits this brings is that you can see how we do things :)

If you have a look at driverManager.startBrowser, which is what we call from a before hook in each test, you’ll see how we reuse the browser across different tests.

There’s one additional thing you need to do, which is to close the browser down at the very end. We use a separate file, after.js, to do this, which we pass to Mocha when running any test.
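
The idea looks roughly like this – a minimal sketch, not the actual code from the e2e repository (quitBrowser, the file layout, and the Chrome capabilities are assumptions):

// driver-manager.js – cache a single driver and hand the same instance to every spec
import webdriver from 'selenium-webdriver';

let driver;

export function startBrowser() {
	if ( ! driver ) {
		driver = new webdriver.Builder().withCapabilities( webdriver.Capabilities.chrome() ).build();
	}
	return driver;
}

export function quitBrowser() {
	if ( driver ) {
		const quitting = driver.quit();
		driver = undefined;
		return quitting;
	}
}

// after.js – passed to Mocha as the final file so the browser closes at the very end
import * as driverManager from './driver-manager.js';

after( () => driverManager.quitBrowser() );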

AMA: JavaScript & Mobile App Automation Tools

Justin Watts asks…

Have you looked at Chimp.js? Any thoughts? I was very closely following your posts on picking a new framework for Automattic, and I was quite surprised when you chose an async tool and stepped away from Cucumber. As a Ruby dev looking to get comfortable with testing web apps in JavaScript, I am torn on what library to dig into.

My response…

I had a quick look at Chimp.js when I was evaluating tools. It looks quite impressive, but also quite opinionated and integrated; I find these types of libraries are good to start with, but quickly become frustrating when you want to start doing advanced things (like the custom Slack reporter we just wrote that pings Slack with screenshots when tests fail). I am also curious about how Chimp.js is synchronous when the underlying WebDriver.IO tool uses promises.

I think the asynchronous thing is less of a big deal than most people make out. Once you get used to using promises whenever you query a value from the browser, you just do that. I have set up my own ES6 page objects, and these take away most of this complication anyway.
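
For example, a page object can hide the promise plumbing behind descriptive methods. A minimal sketch (the class name, selector, and constructor shape here are illustrative, not the actual Automattic page objects):

import webdriver from 'selenium-webdriver';

const by = webdriver.By;

export default class ExamplePage {
	constructor( driver ) {
		this.driver = driver;
	}

	// resolves to true when the quote container is present in the DOM
	quoteContainerPresent() {
		return this.driver.findElements( by.css( '.quote-container' ) ).then( ( elements ) => elements.length > 0 );
	}
}

A test then simply chains (or awaits) the method, as in the examples above, instead of juggling raw WebDriver calls.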

The one synchronous tool I did seriously contemplate was webdriver-sync, but since it wraps the underlying WebDriver Java driver it was quite heavyweight: you need Java and the node-java bridge, and I couldn’t get this working on CircleCI, which is what Automattic uses as its CI platform.

As for stepping away from Cucumber: Automattic’s unit tests are written in Mocha, so that was a logical choice, as there is a lot of familiarity with it within Automattic, which will hopefully mean more developers are interested in the e2e tests we are writing using Mocha/WebDriverJs.

There are some challenges with writing end-to-end tests with Mocha (mainly that Mocha tests are all independent, so they will continue to run if a previous step in the scenario fails), so I haven’t completely ruled out investigating a move to Cucumber at some point for the e2e tests.

Justin Watts also asks…

Appium seems to be getting more unstable as time goes on. EarlGrey caught my eye. Do you have any thoughts / outlooks on the mobile app testing landscape?

My response…

I must say I haven’t had a lot of recent experience with mobile UI automation tools. A couple of years back we started using Appium, but we abandoned this effort as the tests were so fragile we could never get green builds. Instead we focused on writing unit tests for the mobile-specific code, testing webviews from WebDriver, and doing human end-to-end testing on real devices. This worked well, and unless I was working on complex native apps I would probably end up doing something similar again should the need arise.