Checking web element styles using WebDriverJs

I try to avoid incorporating layout- or style-based checks or locators into my automated end-to-end tests, since these typically change more often, leading to a higher test maintenance burden.

But I did have a circumstance recently where I wanted to check that a change I dynamically made to a page was reflected in the resultant web element’s style.
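
WebDriverJs exposes an element’s computed style via getCssValue, so a check can look something like this (a minimal sketch: the locator and expected colour here are hypothetical):

driver.findElement( By.id( 'site-header' ) ).then( function( element ) {
  element.getCssValue( 'background-color' ).then( function( backgroundColor ) {
    // computed colours come back in rgba() form
    assert.equal( backgroundColor, 'rgba(255, 255, 255, 1)', 'Header background color is not white' );
  } );
} );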


Checking an image is actually visible in WebDriverJs

I recently discovered a gap in one of my e2e automated tests where I was checking the existence of an uploaded image in the DOM, but not that the image was actually displayed.

driver.isElementPresent( By.css( `img[alt='upload.jpg']` ) ).then( function( present ) {
  assert.equal( present, true, 'Image not displayed' );
} );

If the DOM has a reference to the image but it isn’t actually rendered, this test will still pass. This isn’t ideal.

I remembered my post about how to check that an image is actually rendered using WebDriver in C#, so I used the same JavaScript snippet, which WebDriverJs sends to the driver:

driver.findElement( By.css( `img[alt='upload.jpg']` ) ).then( function( element ) {
  driver.executeScript( 'return (typeof arguments[0].naturalWidth != "undefined" && arguments[0].naturalWidth > 0)', element ).then( function( present ) {
    assert.equal( present, true, 'Image not displayed' );
  } );
} );

This works a treat. I’ve also moved it into a helper function so I can use it anywhere without repeating myself.
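
The helper is along these lines (a sketch only: the function name is mine, and your assertion library may differ):

function assertImageVisible( driver, selector, message ) {
  // an image with a natural width greater than zero has actually been rendered
  return driver.findElement( selector ).then( function( element ) {
    return driver.executeScript( 'return (typeof arguments[0].naturalWidth != "undefined" && arguments[0].naturalWidth > 0)', element ).then( function( visible ) {
      assert.equal( visible, true, message );
    } );
  } );
}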

Testing end-to-end with Mocha

As part of my excellent Excellence Wrangler role at Automattic, one of my key tasks has been establishing some end-to-end tests for WordPress.com using Mocha with WebDriverJs. Our testing pyramid doesn’t look much like a pyramid:

[Image: WordPress.com test pyramid]

We’ve got lots of React unit tests at the bottom: these speed up development.

We’re intentionally missing a middle: the REST API we consume has its own unit tests, so we don’t need integration tests for it, and we don’t have detailed full-stack acceptance tests of our UI because these are too slow and brittle.

We have a handful of e2e flow tests at the top to protect the user experience. We run these on every deployment, and frequently in production. They can be brittle on such a fast-moving code base, but we limit their number (depth) so they still give us good confidence that everything works well together while limiting our overhead.

So what do these end-to-end tests look like?

I hadn’t used Mocha before, and since I was used to writing end-to-end tests in Gherkin format in tools like Cucumber and SpecFlow, I initially began writing end-to-end tests that looked like this:

test.describe('WordPress.com Sign Up', function() {
  test.beforeEach(function() {
    driver.manage().deleteAllCookies();
  });

  test.it('Can Create A Free Blog', function() {
    var signupFlow = new SignUpFlow( driver, 'desktop' );
    signupFlow.createFreeBlog( 'en' );
  });

  test.it('Can Create A New Site With a Paid Domain Upgrade', function() {
    var signupFlow = new SignUpFlow( driver, 'desktop' );
    signupFlow.CreateSiteWithDomainPaidByCreditCard( 'en' );
  });
});

I was pushing the code down into flow classes, a pattern I have used before, but the issue with this was that the output I was getting from Mocha wasn’t very rich:

WordPress.com Sign Up
      ✓ Can Create A Free Blog
      ✓ Can Create A New Site With a Paid Domain Upgrade

I then realized, by looking at an end-to-end test written by another developer, that you can nest describe and it statements to give much more expressive end-to-end tests.

test.describe( `Sign Up (${screenSize})`, function() {

  test.describe( 'Free Site:', function() {
    test.before( 'Delete Cookies and Local Storage', function() {
      driverManager.clearCookiesAndDeleteLocalStorage( driver );
    } );

    test.describe( 'Sign up for a free site', function() {

      test.describe( 'Step One: Themes', function() {
        test.before( 'Can see the choose a theme page as the starting page', function() {
          this.chooseAThemePage = new ChooseAThemePage( driver, { visit: true } );
          return this.chooseAThemePage.displayed().then( ( displayed ) => {
            assert.equal( displayed, true, 'The choose a theme start page is not displayed' );
          } );
        } );

        test.it( 'Can select the first theme', function() {
          return this.chooseAThemePage.selectFirstTheme();
        } );
      } );

      test.describe( 'Step Two: Domains', function() {
        test.before( 'Can then see the domains page ', function() {
          this.findADomainComponent = new FindADomainComponent( driver );
          return this.findADomainComponent.displayed().then( ( displayed ) => {
            assert.equal( displayed, true, 'The choose a domain page is not displayed' );
          } );
        } );

        test.it( 'Can search for a blog name', function() {
          return this.findADomainComponent.searchForBlogNameAndWaitForResults( blogName );
        } );

        test.it( 'Can see a free WordPress.com blog address in results ', function() {
          return this.findADomainComponent.freeBlogAddress().then( ( actualAddress ) => {
            assert.equal( actualAddress, expectedBlogAddress, 'The expected free address is not shown' );
          } );
        } );

        test.it( 'Can select the free address', function() {
          return this.findADomainComponent.selectFreeAddress();
        } );
      } );
      // ... (remaining steps and closing braces omitted)

This gives us rich feedback:

Sign Up (desktop)
    Free Site:
      Sign up for a free site
        Step One: Themes
          ✓ Can see the choose a theme page as the starting page
          ✓ Can select the first theme
        Step Two: Domains
          ✓ Can then see the domains page
          ✓ Can search for a blog name
          ✓ Can see a free WordPress.com blog address in results
          ✓ Can select the free address

The mistake I had made, which I didn’t realize at first, was not creating enough nesting: instead of having Step One, Step Two etc. next to one another, they should be nested within each other. This is because if they’re next to one another, Mocha will still run Step Two, Step Three etc. even if Step One has failed, which is not what we want in an end-to-end scenario where each step depends on the previous one.

So, it now looks something like this:

test.describe( 'Free Site:', function() {
    test.before( 'Delete Cookies and Local Storage', function() {
      driverManager.clearCookiesAndDeleteLocalStorage( driver );
    } );

    test.describe( 'Sign up for a free site', function() {
      test.describe( 'Step One: Themes', function() {
        test.before( 'Can see the choose a theme page as the starting page', function() {
          this.chooseAThemePage = new ChooseAThemePage( driver, { visit: true } );
          return this.chooseAThemePage.displayed().then( ( displayed ) => {
            assert.equal( displayed, true, 'The choose a theme start page is not displayed' );
          } );
        } );

        test.it( 'Can select the first theme', function() {
          return this.chooseAThemePage.selectFirstTheme();
        } );

        test.describe( 'Step Two: Domains', function() {
          test.before( 'Can then see the domains page ', function() {
            this.findADomainComponent = new FindADomainComponent( driver );
            return this.findADomainComponent.displayed().then( ( displayed ) => {
              assert.equal( displayed, true, 'The choose a domain page is not displayed' );
            } );
          } );

          test.it( 'Can search for a blog name', function() {
            return this.findADomainComponent.searchForBlogNameAndWaitForResults( blogName );
          } );

          test.it( 'Can see a free WordPress.com blog address in results ', function() {
            return this.findADomainComponent.freeBlogAddress().then( ( actualAddress ) => {
            assert.equal( actualAddress, expectedBlogAddress, 'The expected free address is not shown' );
            } );
          } );

          test.it( 'Can select the free address', function() {
            return this.findADomainComponent.selectFreeAddress();
          } );

          test.describe( 'Step Three: Plans', function() {
            // ... (remaining steps and closing braces omitted)

which means the output is slightly different but still very useful:

Sign Up (mobile)
  Free Site:
    Sign up for a free site
      Step One: Themes
        ✓ Can select the first theme
        Step Two: Domains
          ✓ Can search for a blog name
          ✓ Can see a free WordPress.com blog address in results
          ✓ Can select the free address
          Step Three: Plans
            ✓ Can select the free plan

These tests are much better written this way. The only issue I am left facing with Mocha is that when a before hook fails (such as logging in), the generic afterEach hook we have to take screenshots is not triggered (it only runs when an it block has actually run).
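
For context, the screenshot hook is along these lines (a sketch only: the output path and naming scheme are assumptions):

var fs = require( 'fs' );

test.afterEach( function() {
  // this.currentTest is only populated when an it block actually ran
  var testName = this.currentTest.title;
  return driver.takeScreenshot().then( function( data ) {
    // screenshots come back from WebDriverJs as base64-encoded PNG data
    fs.writeFileSync( `./screenshots/${ testName }.png`, data, 'base64' );
  } );
} );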

Waiting for AJAX calls in WebDriver C#

I was trying to work out how to wait for AJAX calls to complete in C# WebDriver before continuing a test.

Whilst I believe your UI should visually indicate that AJAX activity is occurring (such as with a spinner), in which case you should be able to wait until that indicator changes, if you don’t have a visual indicator and you use jQuery for your AJAX calls, you can use a JavaScript call to jQuery.active to determine whether there are any active AJAX requests, and wait until this value is zero.

I wrapped this into a WebDriver extension method on Driver, so you can call it like this:

Driver.FindElement(By.Id("name")).Set("Alister");
Driver.WaitForAjax();
Driver.FindElement(By.Id("next")).Click();

The actual extension method looks like this:

public static void WaitForAjax(this IWebDriver driver, int timeoutSecs = 10, bool throwException = false)
{
  // Poll jQuery.active once per second until no AJAX requests remain active
  for (var i = 0; i < timeoutSecs; i++)
  {
    var ajaxIsComplete = (bool)(driver as IJavaScriptExecutor).ExecuteScript("return jQuery.active == 0");
    if (ajaxIsComplete) return;
    Thread.Sleep(1000);
  }
  // Optionally fail loudly rather than silently continuing after the timeout
  if (throwException)
  {
    throw new Exception("WebDriver timed out waiting for AJAX call to complete");
  }
}

I hope you find this helpful if you’re ever in the same situation.

Five automated acceptance test anti-patterns

Whilst working with lots of people writing automated acceptance tests using tools like SpecFlow and WebDriver, I’ve seen some ‘anti-patterns’ emerge that can make these tests non-deterministic (flaky), very fragile to change, and less efficient to run.

Here are five ‘anti-patterns’ I’ve seen, and what you can do instead.

Anti-pattern One: Not using page objects

Page objects are just a design pattern to ensure automated UI tests use reusable, modular code. Not using them, e.g. writing WebDriver code directly in step definitions, means any change to your UI will require updates in lots of different places instead of in the one ‘page’ class.

Bad

[When(@"I buy some '(.*)' tea")]
public void WhenIBuySomeTea(string typeOfTea)
{
  Driver.FindElement(By.Id("tea-" + typeOfTea)).Click();
  Driver.FindElement(By.Id("buy")).Click();
}

Better

[When(@"I buy some '(.*)' tea")]
public void WhenIBuySomeTea(string typeOfTea)
{
  MenuPage.BuyTea(typeOfTea);
}

Anti-pattern Two: Complicated set-up scenarios within the tests themselves

Whilst there’s a place for automated end-to-end scenarios (I call these user journeys), I prefer most acceptance tests to jump straight to the point.

Bad

Scenario: Accept Visa and Mastercard for Australia
 Given I am on the home page for Australia
 And I choose the tea menu
 And I select some 'green tea'
 And I add the tea to my basket
 And I choose to checkout
 Then I should see 'visa' is accepted
 And I should see 'mastercard' is accepted

Better

This usually requires adding some special functionality to your app, but the ability for testing to ‘jump’ to certain pages with data automatically set up makes automated tests much easier to read and maintain.

Scenario: Accept Visa and Mastercard for Australia
 Given I am on the checkout page for Australia
 Then I should see 'visa' is accepted
 And I should see 'mastercard' is accepted

Anti-pattern Three: Using complicated XPath or CSS selectors

Using element identification selectors that contain long chains through the DOM leads to fragile tests, as any change to that chain in the DOM will break them.

Bad

private static readonly By TeaTypeSelector =
            By.CssSelector(
                "#input-tea-type > div > div.TeaSearchRow > div.TeaSearchCell.no > div:nth-child(2) > label");

Better

Identify elements by ‘id’ (unique) or ‘class’. If there are multiple elements in a group, create a parent container and iterate through them.

private static readonly By TeaTypeSelector = By.Id("teaType");

Anti-pattern Four: Directly executing JavaScript

Since WebDriver can directly execute any arbitrary JavaScript, it can be tempting to bypass DOM manipulation and just run the JavaScript.

Bad

public void RemoveTea(string teaType)
{
  (driver as IJavaScriptExecutor).ExecuteScript(string.Format("viewModel.tea.types.removeTeaType(\"{0}\");", teaType));
}

Better

It is much better to let WebDriver control the browser elements, which should fire the correct JavaScript events and call the JavaScript itself; that way you avoid having to keep your ‘test’ JavaScript in sync with your ‘real’ JavaScript.

public void RemoveTea(string teaType)
{
  driver.FindElement(By.Id("remove-"+teaType)).Click();
}

Anti-pattern Five: Embedding implementation detail in your features/scenarios

Acceptance test scenarios are meant to convey intention over implementation. If you start seeing things like URLs in your test scenarios, you’re focusing on implementation.

Bad

Scenario: Social media links displayed on checkout page
 Given I am on the checkout page for Australia
 Then I should see a link to 'http://twitter.com/beautifultea'
 And I should see a link to 'https://facebook.com/beautifultea'

Better

Hide implementation detail in the steps (or pages, or config) and make your scenarios about the test intention.

Scenario: Social media links displayed on checkout page
 Given I am on the checkout page for Australia
 Then I should see a link to the Beautiful Tea Twitter account
 And I should see a link to the Beautiful Tea Facebook page

I hope you’ve enjoyed these anti-patterns. Leave a comment below if you have any of your own.

100,000 e2e selenium tests? Sounds like a nightmare!

This story begins with a promo email I received from Sauce Labs…

“Ever wondered how an Enterprise company like Salesforce runs their QA tests? Learn about Salesforce’s inventory of 100,000 Selenium tests, how they run them at scale, and how to architect your test harness for success”

[Image: Sauce Labs promotional email]

100,000 end-to-end selenium tests and success in the same sentence? WTF? Sounds like a nightmare to me!

I dug further and got burnt by the molten lava: the slides confirmed my nightmare was indeed real:

[Image: Salesforce Selenium slide]

“We test end to end on almost every action.”

Ouch! (and yes, that is an uncredited image from my blog used in the completely wrong context)

But it gets worse. Salesforce have 7500 unique end-to-end WebDriver tests which are run on 10 browsers (IE6, IE7, IE8, IE9, IE10, IE11, Chrome, Firefox, Safari & PhantomJS) on 50,000 client VMs that cost multiple millions of dollars, totaling 1 million browser tests executed per day (which works out to 20 selenium tests per machine per day, or over 1 hour to execute each test).

[Image: Salesforce UI testing portfolio]

My head explodes! (and yes, another uncredited image from this blog used out of context and with my title removed).

But surely that’s only one place right? Not everyone does this?

A few weeks later I watched David Heinemeier Hansson say this:

“We recently had a really bad bug in Basecamp where we actually lost some data for real customers and it was incredibly well tested at the unit level, and all the tests passed, and we still lost data. How the f*#% did this happen? It happened because we were so focused on driving our design from the unit test level we didn’t have any system tests for this particular thing.
…And after that, we sort of thought, wait a minute, all these unit tests are just focusing on these core objects in the system, these individual unit pieces, it doesn’t say anything about whether the whole system works.”

~ David Heinemeier Hansson – Ruby on Rails creator

and read that he had written this:

“…layered on top is currently a set of controller tests, but I’d much rather replace those with even higher level system tests through Capybara or similar. I think that’s the direction we’re heading. Less emphasis on unit tests, because we’re no longer doing test-first as a design practice, and more emphasis on, yes, slow, system tests (Which btw do not need to be so slow any more, thanks to advances in parallelization and cloud runner infrastructure).”

~ David Heinemeier Hansson – Ruby on Rails creator

I started to get very worried. David is the creator of Ruby on Rails and very well respected within the ruby community (despite being known to be very provocative and anti-intellectual: the ‘Fox News’ of the ruby world).

But here is dhh telling us to replace lower level tests with higher level ‘system’ (end to end) tests that use something like Capybara to drive a browser because unit tests didn’t find a bug and because it’s now possible to parallelize these ‘slow’ tests? Seriously?

Speed has always been seen as the Achilles’ heel of end-to-end tests, because everyone knows that fast feedback is good. But parallelization solves this, right? We just need 50,000 VMs like Salesforce?

No.

Firstly, parallelization of end to end tests actually introduces its own problems, such as what to do with tests that you can’t run in parallel (for example, ones that change global state of a system such as a system message that appears to all users), and it definitely makes test data management trickier. You’ll be surprised the first time you run an existing suite of sequential e2e tests in parallel, as a lot will fail for unknown reasons.

Secondly, the test feedback to someone who’s made a change still isn’t fast enough to enable confidence in making a change (by the time your app has been deployed and the parallel end-to-end tests have run, the person who made the change has most likely moved on to something else).

But the real problem with end to end tests isn’t actually speed. The real problem with end to end tests is that when end to end tests fail, most of the time you have no idea what went wrong so you spend a lot of time trying to find out why. Was it the server? Was it the deployment? Was it the data? Was it the actual test? Maybe a browser update that broke Selenium? Was the test flaky (non-deterministic or non-hermetic)?

Rachel Laycock and Chirag Doshi from ThoughtWorks explain this really well in their recent post on broken UI tests:

“…unlike unit tests, the functional tests don’t tell you what is broken or where to locate the failure in the code base. They just tell you something is broken. That something could be the test, the browser, or a race condition. There is no way to tell because functional tests, by definition of being end-to-end, test everything.”

So what’s the answer? On one hand you have David’s FUD about unit testing not catching a major bug in Basecamp. On the other hand, you face the issue that a large suite of end-to-end tests will most likely result in you spending all your time investigating test failures instead of delivering new features quickly.

If I had to choose just one, I would definitely choose a comprehensive suite of automated unit tests over a comprehensive suite of end-to-end/system tests any day of the week.

Why? Because it’s much easier to supplement comprehensive unit testing with human exploratory end-to-end system testing (and you should anyway!) than to manually verify that units function from the higher system level, and, as explained above, it’s much easier to know why a unit test is broken. It’s also much easier to add automated end-to-end tests later than to retrofit unit tests (because your code probably won’t be testable, and making it testable after the fact can introduce bugs).

To answer our question, let’s imagine for a minute that you were responsible for designing and building a new plane. You obviously need to test that your new plane works. You build a plane by creating parts (units), putting these together into components, and then putting all the components together to build the (hopefully) working plane (system).

If you only focused on unit tests, like David mentioned in his Basecamp example, you could be pretty confident that each piece of the plane had been tested well and works correctly, but you wouldn’t be confident it would fly!

If you only focused on end-to-end tests, you’d need to fly the plane to check that the individual units and components actually work (which is expensive and slow), and even then, if/when it crashed, you’d need to examine the black box to hopefully understand which unit or component didn’t work, much as we currently do when end-to-end tests fail.

But, obviously we don’t need to choose just one. And that’s exactly what Airbus does when it’s designing and building the new Airbus A350:

As with any new plane, the early design phases were riddled with uncertainty. Would the materials be light enough and strong enough? Would the components perform as Airbus desired? Would parts fit together? Would it fly the way simulations predicted? To produce a working aircraft, Airbus had to systematically eliminate those risks using a process it calls a “testing pyramid.” The fat end of the pyramid represents the beginning, when everything is unknown. By testing materials, then components, then systems, then the aircraft as a whole, ever-greater levels of complexity can be tamed. “The idea is to answer the big questions early and the little questions later,” says Stefan Schaffrath, Airbus’s vice president for media relations.

The answer, which has been the answer all along, is to have a balanced set of automated tests across all levels, with a disciplined approach to having a larger number of smaller specific automated unit/component tests and a smaller number of larger general end-to-end automated tests to ensure all the units and components work together. (My diagram below with attribution)

[Image: Automated testing pyramid]

Having just one level of tests, as shown by the stories above, doesn’t work (but if it did, I would rather have automated unit tests). Just like a diet of only chocolate doesn’t work, nor does a diet that deprives you of anything sweet or enjoyable (but if I had to choose, I would rather have a diet of only healthy food than one of just chocolate).

Now if we could just convince Salesforce to be more like Airbus and stop flying a complete plane (or 50,000 planes) to test everything every time they make a change, and stop David from continuing his anti-unit, pro-system-testing, anti-intellectual rampage, which will do our industry more damage than it’s worth.

Do you REALLY need to run your WebDriver tests in IE?

I recently read that Microsoft are now on board to officially support Selenium WebDriver from Internet Explorer (IE) 11 onwards.

Whilst I welcome the news, I try to completely avoid running WebDriver tests in Internet Explorer, for the following reasons:

  • Internet Explorer is a very non-testable browser. Whilst everyone agrees testability of your app is paramount, testability of its run-time container, the browser, is equally important. Settings such as security zones, proxies and auto-complete in IE must be manually configured on each machine, instead of being programmatically specified via profiles as in Firefox and Chrome; and
  • Because IE has historically been so hard to test, WebDriver’s support for IE is much less mature, and much less stable and efficient, than its support for Firefox and Chrome.

The only way automated UI tests can succeed (and the chances of success aren’t high to begin with) is if they are fast and consistent. WebDriver against IE is neither (I see this as more of a problem with IE than with WebDriver). So if you want to use WebDriver, don’t test against IE; test against Firefox or Chrome.
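
Switching browsers is trivial in most bindings; in WebDriverJs, for example, the target browser is just a parameter to the Builder (a minimal sketch):

var webdriver = require( 'selenium-webdriver' );

// target 'chrome' or 'firefox'; no per-machine IE configuration required
var driver = new webdriver.Builder().forBrowser( 'chrome' ).build();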

But, in my role as a consultant, I continually hear managers say that we must run our WebDriver automated tests in Internet Explorer. There are usually one or two reasons given:

  1. Our web app is for internal staff only and our only supported browser is IE (which is usually IE8); and/or
  2. Our web app (or the one we pay for) has been specifically coded to work only in IE and therefore it’s not possible to test in another browser.

You need to explain that your WebDriver automated tests aren’t the only tests you’ll run against your app. In a corporate environment (such as one that only supports IE8), chances are you’ll have a period of business acceptance testing or user acceptance testing. This will be conducted by users in the browser they actually use, which straight away mitigates the risk of only running your automated tests against a non-IE browser.

From my experience testing many applications against older versions of IE, the thing that doesn’t work well (and causes web apps to break) is not the HTML but the JavaScript support. If your app contains a decent amount of JavaScript, you could write some JavaScript tests in a tool like js-test-driver and run these automatically against older versions of IE. That way you can be assured your JavaScript is working without having to deal with IE/WebDriver issues (and slow-running tests).
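
js-test-driver tests are plain JavaScript, so they exercise your code in whatever browsers you capture, old IE versions included. A minimal sketch (the Greeter object here is hypothetical):

GreeterTest = TestCase( 'GreeterTest' );

// js-test-driver runs every test* method in each captured browser
GreeterTest.prototype.testGreet = function() {
  var greeter = new Greeter();
  assertEquals( 'Hello World!', greeter.greet( 'World' ) );
};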

As for applications specifically coded to work in IE: web standards exist for a reason, and in my opinion it’s crazy to develop a web app that is tied to a single vendor’s browser implementation. Microsoft purposely made IE11 report itself to web servers as not being IE so that this exact situation can be avoided in the future.

Chances are if your app is hard-coded to only work in IE then it won’t work in IE11 anyway. If it works in IE11, then it’ll work in Chrome and Firefox as they all follow web standards, and you can run your WebDriver tests reliably now.

I believe you’re better off not having any automated UI tests at all if there’s a mandate in place that you must run them against IE. If you can’t automatically test your app in Firefox or Chrome, you’re better off spending your time manually testing your app in IE than trying to maintain a test suite that will never be efficient or reliable.