AMA: Cross-browser Testing

Marisa Roman asks…

I have been testing web apps for over ten years, and making cross-browser testing “suck less” has been and still is a top goal of mine. I recognize that visual presentation/layout must be reviewed by human eyes, but given the growing number of OS/device/browser combinations we need to support/test, I feel like I’m missing an opportunity to streamline things every time I spin up a dozen VMs to check a new page.

Here’s what I do currently, using an online tool that provides access to various OS/device/browser combinations:
1. I spin up a VM for an OS/device/browser combo I’m checking and check the page
2. Repeat step 1 for each combo I need to check

I have done a little bit of research on the tool’s APIs and I think I could at least automate the process of spinning up each combination I need.

I have also tried tools that purport to play back your recorded Selenium IDE steps in whichever configurations you choose, but they didn’t work very well even when I took the time to update the recorded steps to use reliable locators.

Also, while we do have automated smoke and regression suites using Selenium, I have not been exposed to or thought of an automated approach to checking page layout that doesn’t immediately seem like it would be awful to maintain (other than perhaps just recording screencasts while interacting with each page and having a human review them).

So: How do you approach cross-browser testing for new feature development and for regression purposes?

Thanks so much for your AMA and I hope you pick my question!

My response…

I’ll split the response into two parts: what I recommend for cross-browser regression testing, and what I recommend for cross-browser new feature testing.

Cross-browser Testing for Regression Purposes

I am still of the opinion that there’s little-to-no return on investment (ROI) in running automated functional regression tests across different browsers. My typical approach is to work out which browser your customers use most (most likely Chrome) and automate your e2e regression tests against that one: even though tools like Selenium WebDriver have multi-browser support, maintaining a suite of e2e tests that works consistently across multiple browsers is an onerous task. The one variant that I do like to test automatically is different screen resolutions, as fully responsive web applications can functionally behave differently at different screen widths in the same browser. At Automattic, for example, we run our e2e tests against three screen sizes in Chrome (mobile, tablet, and desktop).
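
To illustrate that setup, here’s a minimal sketch in Python with Selenium WebDriver; the viewport sizes, URL, and check are placeholder assumptions, not our actual suite:

```python
from selenium import webdriver

# Placeholder viewport sizes for the mobile, tablet, and desktop checks.
VIEWPORTS = {"mobile": (375, 667), "tablet": (768, 1024), "desktop": (1280, 800)}

def check_homepage(driver):
    # Hypothetical smoke check; a real suite would walk key user flows.
    driver.get("https://example.com")
    assert driver.title, "page should have a title"

for name, (width, height) in VIEWPORTS.items():
    driver = webdriver.Chrome()  # one browser only: your most-used one
    try:
        driver.set_window_size(width, height)
        check_homepage(driver)
        print(f"{name} ({width}x{height}): OK")
    finally:
        driver.quit()
```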

We also run automated visual comparison tests to ensure we don’t introduce unexpected variances in our interface design/appearance. These run at the same three sizes in a single browser (which happens to be Firefox for technical reasons). They have some tolerance for dynamic content, so if the layout of the page looks okay but the content is slightly different, they still pass. There is still additional overhead in maintaining these on top of our functional tests, though.
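
To give a flavour of what a visual comparison does, here’s a hedged sketch using Pillow; the threshold and file paths are assumptions, and this isn’t the tool we use, just the core idea:

```python
from PIL import Image, ImageChops

def images_match(baseline_path: str, current_path: str, tolerance: float = 0.01) -> bool:
    """Pass if no more than `tolerance` of pixels differ from the baseline."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False  # dimensions differ, so the layout itself has changed
    diff = ImageChops.difference(baseline, current)
    # Count pixels that differ in any colour channel.
    changed = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
    total = baseline.size[0] * baseline.size[1]
    return changed / total <= tolerance
```

The tolerance is what lets slightly different content through while a genuinely broken layout still fails.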

Whilst automated e2e tests are great for covering key scenarios for regression purposes, I have also found it very useful to supplement them with continuous exploratory testing of existing functionality in real-world use (dogfooding) in different browsers, on different operating systems, and on different devices. This picks up real human issues that our automated e2e and visual comparison tests don’t find.

We are huge believers in continuous dogfooding at Automattic, to the extent that we recently built a Slack ‘testbot’ that suggests both a real user flow and a browser/OS to test it on whenever you feel like testing something. For example:

alisterscott: I am looking for something to test
testbot: @alisterscott: Try creating a new post making sure you add some media in IE10
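
The pairing logic behind a bot like that can be trivially simple; here’s a toy sketch (the flows and browser/OS targets are illustrative, and this is not our actual testbot code):

```python
import random

# Illustrative user flows and browser/OS targets, not our real lists.
FLOWS = [
    "creating a new post making sure you add some media",
    "changing your site's theme",
    "inviting a new user to your site",
]
TARGETS = ["IE10", "Edge on Windows 10", "Safari on OSX", "Chrome on Android"]

def suggestion() -> str:
    # Pair a random real-world flow with a random browser/OS to test it on.
    return f"Try {random.choice(FLOWS)} in {random.choice(TARGETS)}"

print(suggestion())
```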

Cross-browser Testing for New Features

I don’t believe you can test every new feature on every browser (unless maybe you have a really big team). So you can either take a risk-based approach (test the most-used browsers first), or you can just mix it up and test different features in different browsers.

Sometimes there are exceptions though: I recently tested an upgraded version of our WYSIWYG editor, and I wanted to be sure that it worked on various browsers, even upcoming ones, which is what the new editor was adding support for.

As for how you get access to these browsers to test: I develop and test mostly on OSX, so I test in Firefox, Chrome, Chrome Canary, Safari and Safari Technology Preview there.

Our admin interface, Calypso, only supports IE10 and Edge, so if I want to test in either of those, I use one of the free, legally available Microsoft VMs running in VirtualBox on OSX. These VMs work really well.

I know of people who prefer a cross-browser testing service like Sauce Labs, CrossBrowserTesting, BrowserStack, Browserling or many others, like the one you’ve mentioned in your question.
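
If you do go down the service route, most of them expose a standard Selenium grid endpoint, so the “spin up each combination” step you describe can be scripted rather than clicked through. Here’s a hedged sketch using the Selenium 3-style Remote WebDriver API; the hub URL and capability names vary by vendor, so treat these as placeholders:

```python
from selenium import webdriver

# Hypothetical hub URL; each service documents its own endpoint and
# its own capability names for browser, version, and OS.
HUB = "http://hub.example.com:4444/wd/hub"
COMBOS = [
    {"browserName": "chrome", "platform": "WINDOWS"},
    {"browserName": "firefox", "platform": "MAC"},
]

for caps in COMBOS:
    driver = webdriver.Remote(command_executor=HUB, desired_capabilities=caps)
    try:
        driver.get("https://example.com")  # placeholder page to check
        print(caps["browserName"], driver.title)
    finally:
        driver.quit()
```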

If you’re just after some quick and free screenshots of a public page, you can also use this Microsoft utility.


To summarise: cross-browser testing still sucks, but it’s still something we need to do, especially when we have diverse groups of users with different devices and browsers. There is a trend towards browser vendors fully embracing open web standards, so hopefully browser-specific bugs and quirks will soon become a thing of the past. For example, Microsoft Edge is a much nicer browser to develop for and test against than previous versions of Internet Explorer. One can only hope and pray.

Author: Alister Scott

Alister is an Excellence Wrangler for Automattic.
