Make sure your end-to-end tests align with your company’s strategy

I recently embarked on writing some new automated end-to-end tests for an existing product that has been around for some time but has never had automated e2e tests.

Continue reading “Make sure your end-to-end tests align with your company’s strategy”

Should you close old bugs?

Do you actively close bugs because they reach a certain age?

One of the (many) things I love about Automattic is the attention that is given to bug triage. Bug triage is the habit of continually grooming our bug lists to ensure they remain relevant, up to date and reflective of the current state of our products. A benefit of this is that an up-to-date and prioritized bug list translates directly into a backlog of maintenance work items for a product development team.

Continue reading “Should you close old bugs?”

(Not) Lying about Writing Code

I recently saw this quote in an article by Nikita Hasis on Medium.

“If Your Test Leaders Aren’t Telling You To Write Code, They Are Lying!
Even if it’s by omission.

There’s this argument, almost daily, about whether software testers should learn programming. I’ll jump right in. It is unimaginable that someone would tell you NOT to learn something. That’s the first, and probably shittiest lie that inexperienced testers get fed. It’s further unimaginable, and downright irresponsible to tell people not to learn something that is very clearly where a large, well-paying, and above all interesting part of the industry is heading. Wanna work on innovative, data-driven projects with smart and driven people? You probably need to pull up terminal and at least get your toes wet, y’all.

The worst part of the lie is that it imposes that coding is a difficult grind and will only cause more problems than it solves. I even saw Alister Scott’s blog post referenced as an argument against coding, ironic as it is.”

~ Nikita Hasis (Medium)

Since Medium is a walled garden that doesn’t allow you to leave a comment without creating an account, I’ll leave my response here instead (where anyone is free to comment however they like).

Continue reading “(Not) Lying about Writing Code”

Should you raise bugs you don’t know how to reproduce?

At Automattic we’re always dogfooding; we’re always testing as we develop and therefore we’re always finding bugs.

When I come across one of these bugs I try to capture as much as I can about what I was doing at the time it happened, along with some evidence of it happening (a screenshot, a visual recording, error messages, the browser console). But even then, on occasion, I can’t work out how to reproduce a bug.

This doesn’t mean the bug isn’t reproducible; it just means that, because there are so many factors at play in the highly distributed, multi-level systems we develop and test, I can’t recreate the exact circumstances, and therefore I can’t reproduce the bug.

The exact set of circumstances that may cause a bug is never entirely reproducible, just as you can never relive the exact same moment of your life.

So, when this bug occurred, perhaps the server was under a particular level of load at that exact moment in time. Or one of the load-balancers was restarting, or maybe even my home broadband speed was affected by my children streaming The Octonauts in the next room.

So it can seem almost like a miracle when you do know how to reproduce a bug.

But what do you do with all of those bugs that you haven’t been able to reproduce?

Continue reading “Should you raise bugs you don’t know how to reproduce?”

AMA: automation testing channel

stevenguyen87 asks…

Could you share some automation testing channels that could help us keep up with testing trends and also improve our technical and problem-solving skills?

My response…

There’s an awesome blog/channel, right here on WordPress.com, that meets your needs perfectly: it’s called Five Blogs. Make sure you check it out, and follow it for great, frequent updates.

AMA: Cross-browser Testing

Marisa Roman asks…

I have been testing web apps for over ten years, and making cross-browser testing “suck less” has been and still is a top goal of mine. I recognize that visual presentation/layout must be reviewed by human eyes, but given the growing number of OS/device/browser combinations we need to support/test, I feel like I’m missing an opportunity to streamline things every time I spin up a dozen VMs to check a new page.

Here’s what I do currently, using an online tool that provides access to various OS/device/browser combinations:
1. I spin up a VM for an OS/device/browser combo I’m checking and check the page
2. Repeat step 1 for each combo I need to check

I have done a little bit of research on the tool’s APIs and I think I could at least automate the process of spinning up each combination I need.

I have also tried tools that purport to be able to play back your recorded Selenium IDE steps in whichever configurations you choose, but it didn’t work very well even if I took the time to update the recorded steps to use reliable locators.

Also, while we do have automated smoke and regression suites using Selenium, I have not been exposed to or thought of an automated approach to checking page layout that doesn’t immediately seem like it would be awful to maintain (other than perhaps just recording screencasts while interacting with each page and having a human review them).

So: How do you approach cross-browser testing for new feature development and for regression purposes?

Thanks so much for your AMA and I hope you pick my question!

My response…

I’ll split the response into two parts: what I recommend for cross-browser regression testing, and what I recommend for cross-browser new feature testing.

Cross-browser Testing for Regression Purposes

I am still of the opinion that there’s little-to-no return on investment (ROI) in running automated functional regression tests across different browsers. My typical approach is to work out which browser your customers use most (most likely Chrome) and automate your e2e regression tests against that. Even though tools like Selenium-WebDriver have multi-browser support, maintaining a suite of e2e tests that works consistently across multiple browsers is an onerous task. The one variant that I do like to test automatically is screen resolution, as fully responsive web applications can functionally behave differently at different screen widths in the same browser. At WordPress.com, for example, we run our e2e tests against three screen sizes in Chrome (mobile, tablet, and desktop).
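To make that concrete, here’s a minimal sketch, assuming a TypeScript test using the selenium-webdriver package (not our actual e2e framework), of running the same functional check at three viewport widths in Chrome. The specific widths, the BROWSER_SIZE variable and the example URL are illustrative assumptions.

```typescript
import { Builder, By, until, WebDriver } from 'selenium-webdriver';

// Viewport sizes are illustrative; pick breakpoints that match your product.
const sizes: Record<string, { width: number; height: number }> = {
  mobile: { width: 400, height: 1000 },
  tablet: { width: 1024, height: 1000 },
  desktop: { width: 1440, height: 1000 },
};

async function runAtSize(sizeName: string): Promise<void> {
  const driver: WebDriver = await new Builder().forBrowser('chrome').build();
  try {
    // Resize the window so the responsive breakpoints for this "device" apply.
    await driver.manage().window().setRect(sizes[sizeName]);
    await driver.get('https://example.com/'); // stand-in URL
    // The same functional assertion runs unchanged at every size.
    await driver.wait(until.elementLocated(By.css('h1')), 10000);
  } finally {
    await driver.quit();
  }
}

// e.g. BROWSER_SIZE=mobile node run-at-size.js
runAtSize(process.env.BROWSER_SIZE ?? 'desktop').catch(console.error);
```

The point is that the functional steps and assertions stay identical; only the window size changes between runs.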

We also run automated visual comparison tests to ensure we don’t introduce unexpected variances in our interface design/appearance. These run at the same three sizes in a single browser (which happens to be Firefox for technical reasons). They have some tolerance for dynamic content, so if the layout of the page looks okay but the content is slightly different, they still pass. There is still an additional overhead in maintaining these on top of our functional tests, though.
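For illustration, a screenshot comparison check along these lines could be sketched as follows, using the open-source pixelmatch and pngjs libraries rather than our actual tooling. The file paths and the 2% tolerance are assumptions, and a real setup would also need a way to mask or tolerate dynamic regions.

```typescript
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';
import * as fs from 'fs';

// Compare a freshly captured screenshot against a stored baseline image.
function looksTheSame(baselinePath: string, currentPath: string): boolean {
  const baseline = PNG.sync.read(fs.readFileSync(baselinePath));
  const current = PNG.sync.read(fs.readFileSync(currentPath));
  const { width, height } = baseline;
  const diff = new PNG({ width, height });

  // Count pixels that differ beyond a small per-pixel colour threshold.
  const mismatched = pixelmatch(
    baseline.data, current.data, diff.data, width, height, { threshold: 0.1 }
  );

  // Pass if fewer than 2% of pixels changed; tune this for dynamic content.
  return mismatched / (width * height) < 0.02;
}

console.log(looksTheSame('baseline/home-desktop.png', 'current/home-desktop.png'));
```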

Whilst automated e2e tests are great for covering key scenarios for regression purposes, I have also found it very useful to supplement them with continuous exploratory testing of existing functionality in real-world use (dogfooding) across different browsers, operating systems and devices. This picks up real human issues that our automated e2e and visual comparison tests don’t find.

We are huge believers in continuous dogfooding at WordPress.com, to the extent that we recently built a Slack ‘testbot’ that, whenever you feel like testing something, suggests both a real user flow and a browser/OS to test it on. For example:

alisterscott: I am looking for something to test
testbot: @alisterscott: Try creating a new post making sure you add some media in IE10
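The bot itself isn’t published here, but the core idea is simply pairing a random flow with a random browser/OS. A hypothetical sketch of that pairing logic might look like this; the flows, browsers and the Slack posting step are all illustrative assumptions, not the real bot.

```typescript
// Candidate user flows and browser/OS targets; in practice these would come
// from a maintained list of real flows and supported platforms.
const flows: string[] = [
  'creating a new post making sure you add some media',
  'changing your site theme',
  'inviting a new user to your site',
];

const browsers: string[] = ['Chrome on OSX', 'Firefox on Windows', 'IE10', 'Safari on iOS'];

// Pick a random item from a list.
function pick<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

function suggestion(): string {
  return `Try ${pick(flows)} in ${pick(browsers)}`;
}

// In the real bot this string would be posted back to the Slack channel.
console.log(suggestion());
```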

Cross-browser Testing for New Feature Testing

I don’t believe you can test every new feature on every browser (unless you have a really big team, maybe). So you can either take a risk-based approach (test the most used browsers first), or you can just mix it up and test different features in different browsers.

Sometimes there are exceptions: I recently tested an upgraded version of our WYSIWYG editor and wanted to be sure that it worked in various browsers, even upcoming ones, which is what the new editor was adding support for.

As for how you get access to these browsers to test: I develop and test mostly on OSX, so I cover Firefox, Chrome, Chrome Canary, Safari and Safari Technology Preview there.

Our WordPress.com admin interface, Calypso, only supports IE10 and Edge, so if I want to test in either of those, I use one of the freely and legally available Microsoft VMs running in VirtualBox on OSX. These VMs work really well.

I know of people who prefer a cross-browser testing service like Sauce Labs, CrossBrowserTesting, BrowserStack, browserling or many others, like you’ve mentioned in your question.

If you’re just after some quick and free screenshots of a public page, you can also use this Microsoft utility.

Summary

To summarise: cross-browser testing still sucks, but it’s still something we need to do, especially when we have diverse groups of users with different devices and browsers. There is a trend towards browser vendors fully embracing open browser/web standards, so hopefully browser-specific bugs, or quirks, will soon become a thing of the past. For example, Microsoft Edge is a much nicer browser to develop for and test against than previous versions of Internet Explorer. One can only hope and pray.