Welcome to WatirMelon.Blog!

From today you may notice this blog has a new domain name: WatirMelon.Blog!

I am very excited to be one of the first blogs in the world with a .blog address, before the new .blog domain officially goes live on November 21. If you’re interested in a .blog address for your blog (on WordPress or otherwise), you can register your interest now in any .blog domain name via the get.blog site.

I will continue to own the old domain WatirMelon.com so any links using that domain will continue to function by redirecting to the new watirmelon.blog domain 😎

The Importance of Doing Customer Support

One of the many, many things I love about working at Automattic is being part of a company that sees huge value in everyone spending time doing customer support, and acts on it:

“When you join full-time, you’ll do customer support for WordPress.com for your first three weeks and spend a week in support annually, for evermore, regardless of your position. We believe an early and ongoing connection with the people who use our products is irreplaceable.”

Automattic’s Work With Us Page

Every single bit of that statement is true: I did three full weeks when I joined last September, and I’ve already done an additional week this year.

Continue reading “The Importance of Doing Customer Support”

AMA: automation testing channel

stevenguyen87 asks…

Could you share some automation testing channels that could help us keep up with testing trends and also improve our technical and problem-solving skills?

My response…

There’s an awesome blog/channel, right here on WordPress.com, that meets your needs perfectly: it’s called Five Blogs. Make sure you check it out, and follow it for great, frequent updates.

AMA: Cross-browser Testing

Marisa Roman asks…

I have been testing web apps for over ten years, and making cross-browser testing “suck less” has been and still is a top goal of mine. I recognize that visual presentation/layout must be reviewed by human eyes, but given the growing number of OS/device/browser combinations we need to support/test, I feel like I’m missing an opportunity to streamline things every time I spin up a dozen VMs to check a new page.

Here’s what I do currently, using an online tool that provides access to various OS/device/browser combinations:
1. I spin up a VM for an OS/device/browser combo I’m checking and check the page
2. Repeat step 1 for each combo I need to check

I have done a little bit of research on the tool’s APIs and I think I could at least automate the process of spinning up each combination I need.

I have also tried tools that purport to be able to play back your recorded Selenium IDE steps in whichever configurations you choose, but they didn’t work very well, even when I took the time to update the recorded steps to use reliable locators.

Also, while we do have automated smoke and regression suites using Selenium, I have not been exposed to or thought of an automated approach to checking page layout that doesn’t immediately seem like it would be awful to maintain (other than perhaps just recording screencasts while interacting with each page and having a human review them).

So: How do you approach cross-browser testing for new feature development and for regression purposes?

Thanks so much for your AMA and I hope you pick my question!

My response…

I’ll split the response into two parts: what I recommend for cross-browser regression testing, and what I recommend for cross-browser new feature testing.

Cross-browser Testing for Regression Purposes

I am still of the opinion that there’s little-to-no return on investment (ROI) in running automated functional regression tests across different browsers. My typical approach is to understand which browser your customers use most (most likely Chrome) and automate your e2e regression tests against that. Even though tools like Selenium-WebDriver have multi-browser support, maintaining a suite of e2e tests that works consistently across multiple browsers is an onerous task. The one variant that I do like to test automatically is different screen resolutions, as fully responsive web applications can functionally behave differently at different screen widths in the same browser. At WordPress.com, for example, we run our e2e tests against three screen sizes in Chrome (mobile, tablet, and desktop).
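As a rough illustration of running one suite at several widths, here is a minimal sketch. The breakpoint values and the Selenium usage shown in the comments are illustrative assumptions, not the actual WordPress.com configuration:

```javascript
// Illustrative breakpoints only -- not the actual WordPress.com values.
const screenSizes = {
  mobile:  { width: 375,  height: 667 },
  tablet:  { width: 1024, height: 768 },
  desktop: { width: 1440, height: 900 },
};

// Build the Chrome '--window-size' argument for a named breakpoint,
// so the same suite can be launched once per screen size.
function windowSizeArg(name) {
  const size = screenSizes[name];
  if (!size) throw new Error(`Unknown screen size: ${name}`);
  return `--window-size=${size.width},${size.height}`;
}

// Sketch of how a runner might use this with selenium-webdriver:
//
//   const { Builder } = require('selenium-webdriver');
//   const chrome = require('selenium-webdriver/chrome');
//   for (const name of Object.keys(screenSizes)) {
//     const options = new chrome.Options().addArguments(windowSizeArg(name));
//     const driver = await new Builder().forBrowser('chrome')
//       .setChromeOptions(options).build();
//     // ...run the e2e suite against `driver`, then await driver.quit();
//   }

console.log(windowSizeArg('mobile')); // → --window-size=375,667
```

The point is that the size matrix lives in one place, and the suite itself stays browser-count-one: only the viewport varies.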

We also run automated visual comparison tests to ensure we don’t introduce unexpected variances in our interface design/appearance. These run at the same three sizes in a single browser (which happens to be Firefox for technical reasons). They have some dynamic-content capability, so if the layout of the page looks okay but the content is slightly different, they still pass. There is still an additional overhead in maintaining these on top of our functional tests, though.
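The core idea behind any visual comparison tool is a pixel diff with a tolerance, so small dynamic-content differences don’t fail the run. Here is a toy, self-contained sketch of that idea (real tools compare actual screenshots, not flat arrays, and the thresholds here are made up):

```javascript
// Compare two greyscale "screenshots" (flat arrays of 0-255 pixel values).
// Returns the fraction of pixels whose difference exceeds `pixelTolerance`.
function diffRatio(baseline, candidate, pixelTolerance = 10) {
  if (baseline.length !== candidate.length) return 1; // dimensions changed
  let changed = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (Math.abs(baseline[i] - candidate[i]) > pixelTolerance) changed++;
  }
  return changed / baseline.length;
}

// Pass if fewer than `maxChangedRatio` of pixels changed -- this is the
// "layout looks okay but content differs slightly" allowance.
function visuallyEqual(baseline, candidate, maxChangedRatio = 0.05) {
  return diffRatio(baseline, candidate) <= maxChangedRatio;
}

const baseline  = [10, 10, 200, 200];
const candidate = [12, 10, 200, 205]; // tiny differences only
console.log(visuallyEqual(baseline, candidate)); // → true
```

Tuning those two tolerances is exactly where the maintenance overhead mentioned above comes from.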

Whilst automated e2e tests are great for covering key scenarios for regression purposes, I have also found it very useful to supplement them with continuous exploratory testing of existing functionality in real-world use (dogfooding) in different browsers, on different operating systems, and on different devices. This picks up real human issues that our automated e2e and visual comparison tests don’t find.

We are huge believers in continuous dogfooding at WordPress.com, to the extent that we recently built a Slack ‘testbot’ that, when you feel like testing something, suggests both a real user flow and a browser/OS to test it on. For example:

alisterscott: I am looking for something to test
testbot: @alisterscott: Try creating a new post making sure you add some media in IE10
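At its heart, a testbot like that is just a random pairing of a user flow with an environment. A toy sketch of that core (the flow and browser lists here are invented, and the real bot lives inside Slack):

```javascript
// Hypothetical lists -- the real testbot's flows and environments differ.
const flows = [
  'creating a new post making sure you add some media',
  'changing your site theme',
  'inviting a user to your site',
];
const environments = ['IE10', 'Edge', 'Safari on iOS', 'Chrome on Android'];

// Pick one element of an array uniformly at random.
function pick(items) {
  return items[Math.floor(Math.random() * items.length)];
}

// Build a suggestion like the exchange shown above.
function suggestion() {
  return `Try ${pick(flows)} in ${pick(environments)}`;
}

console.log(suggestion());
```

The value isn’t in the code, it’s in the nudge: nobody has to decide what to dogfood next.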

Cross-browser Testing for New Feature Testing

I don’t believe you can test every new feature on every browser (unless, maybe, you have a really big team). So you can either take a risk-based approach (test the most used browsers first), or you can mix it up and test different features in different browsers.

Sometimes there are exceptions: I recently tested an upgraded version of our WYSIWYG editor, and I wanted to be sure that it worked in various browsers, even upcoming ones, which is what the new editor was adding support for.

As for how you get access to these browsers to test, I develop and test mostly on OSX, so I test in Firefox, Chrome, Chrome Canary, Safari and Safari Technology Preview on OSX.

Our WordPress.com admin interface Calypso only supports IE10 and Edge, so if I want to test in either of those, I use a free, legally available Microsoft VM running in VirtualBox on OSX. These VMs work really well.

I know of people who prefer a cross-browser testing service like Sauce Labs, CrossBrowserTesting, BrowserStack, browserling or many others, like you’ve mentioned in your question.

If you’re just after some quick and free screenshots of a public page, you can also use this Microsoft utility.


To summarise: cross-browser testing still sucks, but it’s still a thing we need to do, especially when we have diverse groups of users with different devices and browsers. There is a trend towards browser vendors fully embracing open browser/web standards, so hopefully browser-specific bugs, or quirks, will soon become a thing of the past. For example, Microsoft Edge is a much nicer browser to develop for and test against than previous versions of Internet Explorer. One can only hope and pray.

AMA: running e2e tests continually

Clark Hamilton asks…

What do you think about running e2e GUI tests continually and displaying the results on a real-time dashboard? We could build such a system but maybe there is an open-source solution we don’t know about. We’re using Mocha with webdriverio and found your articles useful. Appreciate your thoughts!

My response…

We currently do something like this for WordPress.com. We have our e2e tests (now open source) which are run in three different circumstances:

  1. on every deploy to WordPress.com (tens of times per day);
  2. every 6 hours from UTC 00:00 (to pick up issues, such as connectivity problems, that happen independently of deployments); and
  3. every time a change to the e2e tests is merged into the master branch.

We do this using CircleCI, which connects to GitHub to perform #3 automatically. For #1 and #2 we use the CircleCI API to trigger builds on the master branch: during our deploy process, and every 6 hours via a cron job.
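Triggering a branch build via the API boils down to a single POST. A minimal sketch, assuming CircleCI’s v1.1 endpoint shape; the org, repo, and token names here are placeholders, not our real ones:

```javascript
// Build the CircleCI v1.1 "trigger a build on a branch" URL.
// Org/repo/token values are placeholders -- substitute your own.
function triggerUrl(vcs, org, repo, branch, token) {
  return `https://circleci.com/api/v1.1/project/${vcs}/${org}/${repo}` +
         `/tree/${encodeURIComponent(branch)}?circle-token=${token}`;
}

// A deploy hook or cron job would then POST to this URL, e.g. from Node:
//
//   const https = require('https');
//   const url = triggerUrl('github', 'example-org', 'e2e-tests',
//                          'master', process.env.CIRCLE_TOKEN);
//   https.request(url, { method: 'POST' },
//                 res => console.log(res.statusCode)).end();

console.log(triggerUrl('github', 'example-org', 'e2e-tests', 'master', 'TOKEN'));
```

The same URL works from a cron-driven shell script with `curl -X POST`, which is all the 6-hourly schedule really needs.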

As for a dashboard, we use the CircleCI results page, which shows the history of each run, and we also have some Slack channels set up that get real-time notifications of results and of any tests that fail (including screenshots). The way we do this is in our source code.

Since we’re a 100% distributed company with no offices, we don’t have any build lights/monitors etc. for an office space; we each use our own channels to monitor these test results.

WordPress.com e2e Automated Tests Now Open Source

I am very pleased to announce that all of our e2e tests for the WordPress.com platform are open source as of this morning. This is following in the footsteps of the WordPress.com Calypso front-end which is also open source.

I am continually reminded of how fortunate I am to work at Automattic, a company that takes pride in its commitment to Default to Open:

“The quality of our code and our product depend on the amount of feedback we get and on the amount of people who use them. If we’re developing behind closed doors, we are putting artificial limits to both.

We have done our best work in the open, let’s continue working this way.”

~ Matt Mullenweg, CEO Automattic

Our ongoing development of these tests is in the open. So please feel free to take a look through our e2e tests and their CI results, fork them, provide us with feedback, or use some of our ideas. And pull requests are always welcome 😀