Avoiding LGTM PR Cultures

Introduction

Making a code change with a distributed version control system (DVCS) like Git is usually done by packaging the change on a branch as a “pull request” (PR), indicating the author would like the project to “pull” the change in.

This was, and is, a key part of open source projects, as it allows outside contributors to contribute in a controlled way. Many internal software development teams also work in this fashion, as the approach has many benefits over committing directly to a shared branch or trunk.

I’ve seen the pull request approach have a positive impact on software quality, since pull requests facilitate discussion through peer reviews and allow automated tests to run against every commit and change proposed for merging into the default branch.


What is an LGTM PR culture?

I’ve also seen some negative behaviours emerge when moving to pull request based development, which I’ll call an LGTM PR culture.

LGTM is a common acronym found in peer reviews of pull requests, meaning “Looks Good To Me”. I’ve seen teams let unsuitable changes through with LGTM comments, without doing solid peer reviews or testing.

How do you know if you have an LGTM PR culture?

One way to “test” your peer review process is to create a PR containing a subtle bug, or something not quite right, that you know about. When it gets reviewed, do you get an LGTM? I did this recently, and whilst the PR didn’t even do what it was meant to do, I received an LGTM 😕


How can you move away from an LGTM PR culture?

It’s tempting to just tell everyone to do better peer reviews but it’s not that easy!

I’ve found there are some steps an author of a pull request can take to facilitate better pull request reviews and lead towards a better culture.

1. Make pull requests as small as possible

The smaller the pull request, the more likely you’ll get specific and broad feedback on it – and you can then iterate on that feedback. A 500 line change is daunting to review and will lead to more LGTMs. For larger refactorings, where lots of lines will change, start with a small, focussed change and get plenty of review and discussion. Once the refactoring is established with that smaller example, you can apply the feedback to a broader-impact PR, which won’t need as much review since the new pattern was established previously.

2. Review your own pull request

Review your own work. This works best if you do something else and then come back to it with a fresh mind. If there’s anything you’re unsure about, or that doesn’t look quite right, leave a comment on your own PR to encourage other reviewers to look closely at those areas too.

3. Include clear instructions for reviewing and testing your pull request

A list of test steps is good, as is asking for the type of feedback you’d like – you can explicitly ask reviewers something like “please leave a comment after your review listing what you tested and what areas of the code you reviewed.” This discourages shortcuts and LGTMs.

4. Test your peer review process – see above.

Conclusion

Developing software using pull requests can mean much higher quality code and less technical debt, due to the feedback from the peer reviews that accompany pull requests. As an author, you can take steps to ensure your pull requests are easy to review and so encourage a culture of effective peer reviews.

npm ci

I recently discovered npm ci, which you can use instead of npm install when you want to install a Node project’s npm dependencies on a continuous integration (CI) system. It installs them in a more lightweight, more CI-friendly way.

If you use npm test to run your tests, this can be shortened to npm t (much like npm i is short for npm install), so you can run npm cit to install dependencies and run tests in CI.

Running Mocha e2e Tests in Parallel

I recently highlighted the importance of e2e test design. Once you have well designed e2e tests you can start running them in parallel.

There are a couple of approaches to scaling your tests out to be run in parallel:

  1. Running the tests in multiple machine processes;
  2. Running the tests across multiple (virtual) machines;

These aren’t mutually exclusive: you can run tests in parallel processes across multiple virtual machines. We do this at Automattic – each test run happens across two virtual machines (Docker containers on CircleCI), each of which runs six processes for either desktop or mobile responsive browser sizes, depending on the container.

I have found running tests in multiple processes gives the best bang for buck, since you don’t need additional machines (most build systems charge based on container utilisation) and you’ll also benefit from parallel runs on a local machine.

We write our e2e tests in WebDriverJs and use Mocha for our test framework. We currently use a tool called Magellan to run our e2e tests in separate processes (workers), however the tool is losing Mocha support and therefore we need to look at alternatives.

I have found that mocha-parallel-tests seems like the best replacement – it’s a drop-in replacement runner for Mocha tests which splits test specification files across the processes available on the machine you’re executing your tests on. You can also specify a maximum number of processes with the command line argument --max-parallel.

There is another parallel test runner for mocha: mocha.parallel – but this requires updating all your specs to use a different API to allow the parallelisation to work. I like mocha-parallel-tests as it just works.

I’ve updated my webdriver-js-demo project to use mocha-parallel-tests – feel free to check it out here.
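As an illustration, wiring it in could be as simple as swapping the runner in your package.json test script – the spec path and process count below are made-up examples, not from my project:

```json
{
  "scripts": {
    "test": "mocha-parallel-tests specs/ --max-parallel 4"
  }
}
```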

Running e2e Tests in Parallel

One of the best ways to speed up your end-to-end (e2e) tests is to start running them in parallel.

The main issue I see preventing teams from fully using parallelism for their e2e tests is a lack of test design. Without adequately designed e2e tests – designed to be run in parallel – parallelism can introduce non-deterministic and inconsistent test results, leading to frustration and low confidence in the e2e tests.

This is often the case when teams go about directly converting manual test cases into automated e2e tests – instead of approaching e2e test automation with a specific end-to-end design focus.

Say you had a manual test case for inviting someone to view your WordPress blog:

  1. Enter the email address of the person you’d like to follow your site
  2. Check the list shows a pending invite
  3. Check your email inbox shows a pending invite
  4. Open the email, follow the link and sign up for a new account

When you’re manually testing this in sequence it’s easy to follow – but as soon as you start running it in parallel, across different builds and alongside other tests, things will most likely start failing.

Why? The test isn’t specific enough: you may have multiple pending invites, so how do you know which one is which? You can only invite someone once, so how do you generate new invite emails? You may receive multiple emails at any one time – which one is which? And more.

With appropriate e2e test design you can write the e2e test to be consistent when run regardless of parallelism:

  1. Email addresses are uniquely generated for each test run using a GUID and either a test email API like Mailosaur or Gmail plus addressing; and
  2. The pending email list has a data attribute on each invite clearly identifying which email the invite is for and this is used to verify pending email status; and
  3. The inbox is filtered by the expected GUID, and only those emails are used. Etc.

Once you have good e2e test design in place you’re able to look at how to speed up e2e test execution using parallelism. I’ll cover how to do this in my next blog post.

Bailing with Mocha e2e Tests

At Automattic we use Mocha to write our end-to-end (e2e) automated tests in JavaScript/Node.js. One issue with Mocha is that it’s not really suited to writing e2e tests where one test step relies on a previous test step – for example, our sign up process is a series of pages/steps, each relying on the previous step passing. Mocha is primarily a unit testing tool, and it’s bad practice for one unit test to depend on another, which is why Mocha doesn’t support this.

A simplified example of this is shown in my webdriver-js-demo project:

// The spec file also imports webdriver, assert and RalphSaysPage,
// and defines mochaTimeoutMS (omitted here)
let driver, page;

describe( 'Ralph Says', function() {
	this.timeout( mochaTimeoutMS );

	before( async function() {
		const builder = new webdriver.Builder().withCapabilities( webdriver.Capabilities.chrome() );
		driver = await builder.build();
	} );

	it( 'Visit the page', async function() {
		page = await RalphSaysPage.Visit( driver );
	} );

	it( 'shows a quote container', async function() {
		assert( await page.quoteContainerPresent(), 'Quote container not displayed' );
	} );

	it( 'shows a non-empty quote', async function() {
		assert.notEqual( await page.quoteTextDisplayed(), '', 'Quote is empty' );
	} );

	afterEach( async function() { await driver.manage().deleteAllCookies(); } );

	after( async function() { await driver.quit(); } );
} );
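One common way to bail out of a Mocha spec like this (a sketch only – the helper name is mine, not from the demo project) is a beforeEach hook that skips the remaining steps once an earlier step has failed:

```javascript
// Returns true if any earlier step in the spec has failed. The argument
// is the array of sibling tests that Mocha exposes inside a hook as
// this.currentTest.parent.tests.
function anyStepFailed( tests ) {
	return tests.some( ( test ) => test.state === 'failed' );
}

// Hypothetical wiring inside a Mocha suite:
// beforeEach( function() {
// 	if ( anyStepFailed( this.currentTest.parent.tests ) ) this.skip();
// } );
```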


Using async/await with WebDriverJs

We’ve been using WebDriverJs for a number of years, along with the control flow promise manager it provides to make writing WebDriverJs commands in a synchronous, blocking way a bit easier, particularly when using promises.

The problem with the promise manager is that its magic is hard to understand: sometimes it just works, and other times it is very confusing and not very predictable. It was also harder for the Selenium project to develop and support, so it’s being deprecated later this year.

Fortunately, recent versions of Node.js support asynchronous functions and the await keyword, which makes writing WebDriverJs tests so much easier and more understandable.

I’ve recently updated my WebDriverJs demo project to use async/await so I’ll use that project as examples to explain what is involved.

WebDriverJs would allow you to write consecutive statements like this without worrying about waiting for each statement to finish – note the use of test.it instead of the usual mocha it function:

test.it( 'can wait for an element to appear', function() {
	const page = new WebDriverJsDemoPage( driver, true );
	page.waitForChildElementToAppear();
	page.childElementPresent().then( ( present ) => {
		assert( present, 'The child element is not present' );
	} );
} );

When you were waiting on the return value from a promise you could use a .then function to wait for the value as shown above.

This is quite a simple example, but it could get complicated pretty quickly.

Since the promise manager is being removed, we need to update our tests so they continue to execute in the correct order. We can make the test function asynchronous by adding the async prefix, remove the test. prefix on the it block, and add await statements every time we expect a statement to finish before continuing:

it( 'can wait for an element to appear', async function() {
	const page = new WebDriverJsDemoPage( driver, true );
	await page.waitForChildElementToAppear();
	assert( await page.childElementPresent(), 'The child element is not present' );
} );

I personally find this much easier to read and understand – less ‘magic’ – but the one bit that stands out is visiting the page and creating the new page object. The code in the constructor for this page, and other pages, is asynchronous as well; however, we can’t have an async constructor!

export default class BasePage {
	constructor( driver, expectedElementSelector, visit = false, url = null ) {
		this.explicitWaitMS = config.get( 'explicitWaitMS' );
		this.driver = driver;
		this.expectedElementSelector = expectedElementSelector;
		this.url = url;

		if ( visit ) this.driver.get( this.url );

		this.driver.wait( until.elementLocated( this.expectedElementSelector ), this.explicitWaitMS );
	}
}

We can get around this by defining a static async function that acts as a constructor and returns our new page object for us.

So, our BasePage now looks like:

export default class BasePage {
	constructor( driver, expectedElementSelector, url = null ) {
		this.explicitWaitMS = config.get( 'explicitWaitMS' );
		this.driver = driver;
		this.expectedElementSelector = expectedElementSelector;
		this.url = url;
	}

	static async Expect( driver ) {
		const page = new this( driver );
		await page.driver.wait( until.elementLocated( page.expectedElementSelector ), page.explicitWaitMS );
		return page;
	}

	static async Visit( driver, url ) {
		const page = new this( driver, url );
		if ( ! page.url ) {
			throw new Error( `URL is required to visit the ${ page.name }` );
		}
		await page.driver.get( page.url );
		await page.driver.wait( until.elementLocated( page.expectedElementSelector ), page.explicitWaitMS );
		return page;
	}
}

In our Expect and Visit functions we call new this( driver ) which creates an instance of the child class which suits our purposes. So, this means our spec now looks like:

it( 'can wait for an element to appear', async function() {
	const page = await WebDriverJsDemoPage.Visit( driver );
	await page.waitForChildElementToAppear();
	assert( await page.childElementPresent(), 'The child element is not present' );
} );

which means we can await visiting and creating our page objects, and we don’t have any asynchronous code in our page constructors. Nice.

Once we’re ready to stop using the promise manager we can set the SELENIUM_PROMISE_MANAGER environment variable to 0 and it won’t be used any more.
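For example, a minimal sketch (this assumes selenium-webdriver 3.x, where the flag is read when the library is first loaded):

```javascript
// Disable the WebDriverJs promise manager so every command must be
// explicitly awaited; set this before selenium-webdriver is first required.
process.env.SELENIUM_PROMISE_MANAGER = '0';

// const webdriver = require( 'selenium-webdriver' ); // loaded after the flag is set
```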

Summary

The promise manager is being removed from WebDriverJs, but using await in async functions is a much nicer solution anyway, so now is the time to make the move – what are you awaiting for? 😊

Check out the full demo code at https://github.com/alisterscott/webdriver-js-demo

→ The Rise of the Software Verifier


I found this article rather interesting. I’m still not sure if some of it is satire, so forgive me if I misinterpreted it.

“DevOps has become so sophisticated that there is little fear of bugs. DevOps teams can now deploy in increments, monitor logs for misbehavior, and push a new version with fixes so fast that only a few users are ever affected. Modern software development has squeezed the testers out of testing.

Features are more important than quality when teams are moving fast. Frankly, when a modern tester finds a crashing bug with strange, goofy, or non-sensical input, the development team often just groans and sets the priority of the bug to the level at which it will never actually get fixed. The art of testing and finding obscure bugs just isn’t appreciated anymore. As a result, testers today spend 80% of their time verifying basic software features, and only 20% of their time trying to break the software.”

The author doesn’t say where the 80:20 figures came from, but the testers I have worked with for the last five years have spent zero time on manual regression testing verification, and most of their time actually testing the software we were developing. How did we achieve this? Not by splitting our team into testers and verifiers as the author suggests:

What to do about all this? The fix is a pretty obvious one. Software Verification is important. Software Testing is important. But, they are very different jobs. We should just call things what they are, and split the field in two. Software testers who spend their day trying to break large pieces of important software, and software verifiers, who spend their time making sure apps behave as expected day-to-day should be recognized for what they are actually doing. The world needs to see the rise of the “Software Verifier”.

We did this by focussing on automating enough tests that we were confident to release our software frequently without introducing major regressions. This wasn’t 100% test coverage; it was just enough coverage to avoid human verification. We obviously spent effort maintaining these tests, but that’s a whole-team effort, and it freed up the rest of our time to test the software and look for real life bugs using human techniques.

Another thing I noted about the article was the use of the graph to show decreasing interest in software testing:

But even their interest in Software Testing is fading fast…



 
This also applies to software in general, perhaps even more dramatically:


I don’t think there’s decreasing interest in software testing, or software generally; rather, these have become more commonplace and more commoditised, so people need to search for them less.

Executing JS in IE11 using WebDriverJs

We write our e2e tests in JavaScript running on Node.js which allows us to use newer JavaScript/ECMAScript features like template literals.

We have a subset of our e2e tests – mainly signing up as a new customer – which we run a few times a day against Internet Explorer 11: our lowest supported IE version.

I recently added a function that sets a cookie specifying the currency for a customer:

setCurrencyForPayments( currency ) {
  const setCookieCode = function( currencyValue ) {
    window.document.cookie = `landingpage_currency=${ currencyValue };domain=.wordpress.com`;
  };
  return this.driver.executeScript( setCookieCode, currency );
}

This code works perfectly when executing against Chrome or Firefox, but when it came to executing against IE11 I would see the following (rather unhelpful) error:

Uncaught JavascriptError: JavaScript error (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 69 milliseconds

I couldn’t work out what was causing this, so I decided to take a break. On my break I realised that WebDriverJs was trying to execute a new JavaScript feature (template literals) against an older browser that doesn’t support it! Eureka!

So all I had to do was update our code to:

setCurrencyForPayments( currency ) {
  const setCookieCode = function( currencyValue ) {
    window.document.cookie = 'landingpage_currency=' + currencyValue + ';domain=.wordpress.com';
  };
  return this.driver.executeScript( setCookieCode, currency );
}

and all our tests were happy against IE11 again 😊

Having not found much about this error or its cause, I’m hoping this blog post can help someone else out if they encounter this issue too.
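For anyone wondering why the browser matters when the test code runs in Node: as I understand it, WebDriverJs serialises the function you pass to executeScript into a string, and it’s the target browser that has to parse and run that string. A sketch of the idea:

```javascript
// This function is never executed in Node – executeScript sends its
// source (via Function.prototype.toString()) to the browser, which must
// be able to parse it. Template literals break that parse in IE11.
const setCookieCode = function( currencyValue ) {
	window.document.cookie = 'landingpage_currency=' + currencyValue + ';domain=.wordpress.com';
};

// This string is what the browser actually has to parse:
const serialised = setCookieCode.toString();
```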

Creating a skills-matrix for t-shaped testers

I believe the expression “jack of all trades, master of none” is misleading, as I’ve mentioned previously. Being good at two or more complementary skills is better than being excellent at just one, in my opinion.

But what about being excellent at one skill, and still being good at two or more? Why can’t we be both?

Jason Yip describes a T-shaped person and the benefits that having t-shaped people on teams brings:

A T-shaped person is capable in many things and expert in, at least, one.
As opposed to an expert in one thing (I-shaped) or a “jack of all trades, master of none” generalist, a “t-shaped person” is an expert in at least one thing but also somewhat capable in many other things. An alternate phrase for “t-shaped” is “generalizing specialist”.

image by Jason Yip

Ideally we’d like to have a team of t-shaped testers in Flow Patrol at Automattic. But how do we get to this end goal?

I recently embarked on an exercise to measure and benchmark our skills and do just this with our team. Here are the steps we took.

Step One – Devise Desired Team Skills

The first thing we did was come up with a list of skills that we have in the team and would like to have in the team. These can be ‘hard’ skills, like specific programming languages, and ‘soft’ skills, like triaging bugs. In a standard co-located team this would be as easy as conducting a brainstorming session and using affinity grouping to discover these skills. In a distributed environment I wrote a blog post to my team’s channel and had individual members comment with the skills they thought appropriate; I then did the grouping and came up with a draft list of skills and groups.

Step Two – Self-assess against a team skills matrix

Once I had a final list of skills and groups (see below for full list), I put together a matrix (in a Google Spreadsheet) that listed team members on the x-axis, and the skills on the y-axis, and came up with a skill level rating. Our internal systems use a three level scale (Newbie, Comfortable, Expert) which we didn’t think was broad enough so we decided upon five levels:

1. Limited
2. Basic
3. Good
4. Strong
5. Expert

 

Team Skills Matrix

I hadn’t seen Jason Yip’s visual representation at that point in time, otherwise I may have used something like it, as it has five similar levels:

Image by Jason Yip

Step Three – Publish results and cross-skill

Once we had the self-assessments done we could publish the data within our organisation and use the benchmark to cross-skill people in the team. In a co-located environment this could involve pair programming; in a distributed one it could involve mentoring and reviewing other team members’ work.

Have you done a skills matrix for your team? How did you do it? What did you discover?


Full List of Skills and Skill Groups for Flow Patrol at Automattic

Automattic Product Knowledge
WordPress Core
WordPress.com Simple Sites
WordPress.com Atomic Sites
Jetpack
Woocommerce
Simplenote
Mobile Apps
Human Software Testing
Flow Mapping
Bug Triage & Prioritization
Exploratory Testing (pre-release)
Dogfooding
Cross-browser Cross-device Testing
Facilitating Beta/Community Testing
Facilitating User Testing
Usability Testing
Accessibility Testing
Automated Testing
Automated End-to-end Browser Testing
Automated API/Integration Testing
Automated Unit Testing
Automated Visual Regression Testing
Android Automated Testing
iOS Automated Testing
Programming Languages
JavaScript
PHP
Shell Scripting
Objective C
Swift
Android/Kotlin
Testing Tools/Frameworks
Mocha
WebDriverJS
Git/Github
CircleCI
TravisCI
Team City (CI)
Mailosaur
Applitools
VIP Go
Docker
Other
i18n Testing
Performance Testing
Security Testing
User advocacy – empathy and compassion
Mentoring/onboarding
Project Management
Product Management
Product Development 
Calypso
Jetpack
WP.com API PHP
Woocommerce
iOS App
Android App

 

TestBash Australia 2018

I only speak at one conference a year, and this year that conference will be the first ever Australian TestBash in Sydney on October 19, 2018:


My talk:

At WordPress.com we constantly deliver changes to our millions of customers – in the past month alone we released our React web client 563 times; over 18 releases every day. We don’t conduct any manual regression testing, and we only employ 5 software testers in a company of ~680 people with ~230 developers. So, how do we ensure our customers get a consistently great user experience with all this rapid change?

Our automated end-to-end (e2e) tests give us confidence to release small frequent changes constantly to all our different applications and platforms knowing that our customers can continue to use our products to achieve their desires.

Alister will share how these automated tests are written and work, some of the benefits and challenges we’ve seen in implementing these tests, and what we have planned for future iterations.

Takeaways: How to effectively use automated end-to-end testing to ensure a consistent user experience in high frequency delivery environments

Grab a ticket before they sell out!

AMA: JavaScript in Test Automation

Max asks…

First of all, thanks for sharing all your insights on this blog. I regularly come back to your blog and it has helped me grow as a QA Engineer quite a lot.
I wanted to ask your opinion on a seemingly new generation of test tools for the web that are written in JavaScript to help deal with all the asynchronicity in modern front end frameworks. Our company is in the process of redesigning our website and it seems that it just gets harder and harder to deal with all the JavaScript in test automation. I recently started looking into some other tools like cypress.io, but as of now they still seem quite immature.
Could you help me and point me in the right direction?

My response…

I think there’s two parts to the JavaScript asynchronicity issue which are not necessarily related.

Modern JavaScript front-end web frameworks (like React and Angular) are designed to be asynchronous and fast, which can cause issues when trying to write automated tests against them. I don’t believe writing automated tests in an asynchronous way actually makes the tests easier to write, or more maintainable or robust – it’s just a result of writing the tests in the same way the web frameworks work.

You can write synchronous tests (like using Watir/Ruby) against asynchronous web interfaces, you just need to use the waiting/polling mechanisms (or write/extend your own) – the same as you need to do in asynchronous test automation tools.
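As a sketch of what such a waiting/polling mechanism can look like in JavaScript (a minimal version for illustration, not production code):

```javascript
// Poll an async (or sync) predicate until it returns a truthy value,
// or throw once the timeout elapses.
async function waitFor( predicate, timeoutMS = 5000, pollMS = 100 ) {
	const deadline = Date.now() + timeoutMS;
	while ( Date.now() < deadline ) {
		if ( await predicate() ) {
			return true;
		}
		// Sleep briefly between polls
		await new Promise( ( resolve ) => setTimeout( resolve, pollMS ) );
	}
	throw new Error( `Condition not met within ${ timeoutMS }ms` );
}
```

You could then, for example, await waitFor( () => elementIsVisible() ) before asserting on an element, regardless of how asynchronous the page under test is.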

We chose WebDriverJs for automated end-to-end testing of our React application as it was the official WebDriver for JavaScript project and seemed like a good choice at the time. I somewhat regret that decision now, as using a synchronous third party implementation like webdriver.io seems like an easier way to write and maintain tests.

I have tried to use cypress.io, but the way it controls sites (through proxies) has (current) limitations, like not working with iFrames or across domains, which are deal-breakers for our end-to-end testing needs at present.

If you don’t need to write your end-to-end tests in JavaScript I’d say avoid it unless absolutely necessary and stick to another non-asynchronous programming language.

I’m glad I’ve been able to help you grow over the years.

AMA: hiding/changing IP addresses using Watir

Mudit Bhardwaj asks…

Hey! In order to test the security of a website, I’m trying to create hundreds of accounts. However, after a certain limit, there is always an error which prevents me from going further.
How can I hide/change my ip address during each iteration with watir?

My response…

I can understand why there would be this limitation in place as this activity seems suspicious from a systems perspective.

Watir can’t change your IP address. You can use an anonymous browser profile and/or delete cookies between account creation runs, but this probably won’t help.

I’d speak to your development/devops/systems team about whitelisting the IP address(es) you are using for this purpose.

Puppeteer for Automated e2e Chrome Testing in Node

I recently noticed the new Google Chrome project Puppeteer:

Puppeteer is a Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. It can also be configured to use full (non-headless) Chrome or Chromium.

As someone who only runs WebDriver tests in Google Chrome anyway, this looks like a promising project that bypasses WebDriver to have full programmatic control of Google Chrome including for automated end-to-end (e2e) tests.

The thing I really love about this is that there’s no Chromedriver dependency, and installing the library installs Chromium by default, which can be controlled headlessly with zero config and no other dependencies.

You can even develop scripts using this playground.

I set up a (very basic) demo project that uses Mocha + Puppeteer and it runs on CircleCI with zero config. Awesome.

Five Books I Enjoyed in 2017

Happy New Year! 🎉 One thing I love about the end of the year is going through all the lists I have made during the year, counting up my stats and reflecting on things. I read (to completion) 37 books (and abandoned a further 4) in 2017. Not quite as many as 2016, when I finished 48 books, but I had some difficult circumstances with family illness in 2017.

Here’s five of my favourite books I read in 2017 (with Good Reads links):

The Art of Stillness: Adventures in Going Nowhere by Pico Iyer. This was the first TED book I read and, having read quite a few more since, it was probably my favourite. I read this at a point in the year when I was over-committing to too many things (travel, conferences, events) and it was an apt reminder to slow down and appreciate things. This book led me to my mindfulness meditation practice, which I took up in the later part of the year (and have continued since).

How to Fail at Almost Everything and Still Win Big: Kind of the Story of My Life by Scott Adams. I’m not a huge fan of Dilbert, but I’m a big Scott Adams fan after reading this book. The ideas I really like are the importance of systems over goals (a good blog post about that here), and that passion follows success rather than leading to it (you can be passionate about something and really suck at it). Also the importance of diversification. An enjoyable, well written read.

Payoff: The Hidden Logic That Shapes Our Motivations by Dan Ariely. The second TED book I read this year (I love the format!) and the best book about motivation I’ve read so far. I’ve tried to read Drive by Dan Pink a few times but haven’t been motivated to finish it. The thing I learnt from this book is that we lose meaning and motivation in life by outsourcing the hard work and small things (like cleaning, gardening and maintenance) – we accomplish more but get less.

The Subtle Art of Not Giving a F*ck: A Counterintuitive Approach to Living a Good Life by Mark Manson. This book is a bit weird. It’s hard to read because it jumps all over the place, but there’s so much good content it’s still worth reading. I wish it had a really clear list of things to take away and do, rather than just its general message, which is a bit bleak. Still a good book and an enjoyable read after all that.

Deep Work: Rules for Focused Success in a Distracted World by Cal Newport. One of the last books I read this year, and particularly relevant to me: working for a 100% distributed company in an asynchronous communication environment, it’s easy to get distracted by constant chatter and noise and not focus enough on deep work. The thing about this book is that if you can master deep work in our current world, it’s another skill you can use to be very successful, as it’s a huge competitive advantage.

*** BONUS ***

Not work related, but I read a lot of fiction, particularly thrillers. Michael Connelly’s latest Harry Bosch book, Two Kinds of Truth, was just fantastic: it’s hard to describe how good it was.

AMA: Clicking a non-visible element in Watir

Mike asks…

Is there any way to make Watir click a link/button that is not visible? I wanted to switch from Capybara/CapybaraWebKit (which allows clicking non-visible elements) but I am stuck since Watir always times out on the click attempt.

My response…

The easiest way to do this is to execute a click in JavaScript on the element itself.

In Watir, this looks something like:

browser.execute_script( "return arguments[0].click();", browser.link( :id => 'blah' ) )

Hope this helps!

AMA: Misc

Amruth asks…

Hi Scott, I am Amruth and I am from India. I would like to automate my project using Watir with Ruby. Please brief me about Watir and Watir-WebDriver.

My response…

I suggest you visit http://watir.com/guides/ and follow the guides. Have fun.


Victor Hugo dos Santos asks…

How can we run our tests in parallel when we use Cucumber-JVM?

My response…

I haven’t done this personally, but these instructions look helpful.


Viktoriya Musiy asks…

Just to add: I’ve tried your instructions https://watirmelon.blog/2010/12/09/how-to-set-up-cucumber-and-watir-on-osx/. Unfortunately they do not work on my MacBook. I get this error message:

Viktoriyas-MBP:~ viktoriyamusiy$ sudo gem install rspec --no-rdoc --no-ri
Password:
ERROR: While executing gem … (Gem::FilePermissionError)
    You don’t have write permissions for the /usr/bin directory.

My response…

That post is over seven years old so I am not surprised it no longer works. Please try watir.com for up to date instructions.


Niopvief asks…

Food of fried tomatoes like absolutely everything. Guests usually require recipe are surprised mastery hostess. Even guys with pleasure eat fried tomato in breadcrumbs. Tomatoes very good combined are combined with many products. Garlic adds sharpness. Cheese brings ruddy crust. Italian herbs turn roasted tomatoes into a dish from a restaurant. Omelette with tomatoes – easy and nutritious dish for breakfast. Make including sandwiches with fried tomatoes. Cooking tomatoes simply. For cooking use butter. In olive oil – fewer calories. It is also combined mixed with fried cubes tomatoes.

My response…

I am not a big tomato fan but my wife likes them. Thanks for the tips – Merry Christmas

AMA: automated unit vs component tests

Jason asks…

Hi Alister, Love your blog and the content. I have matured my knowledge in test automation, and without even meaning to, created a very similar test automation pyramid you derived. From it, though, i have a difficult time when trying to educate the development team the nuances between their unit level automated tests and component automated tests. How would you go about differentiating between the two? Thanks for your time! – JH

My response…

My understanding is that unit and component testing are similar but differ in their focus. For example, say I was building a table; it would consist of many parts, or units:

1 x tabletop
4 x leg brackets
4 x table legs
8 x bolts

Unit testing would be testing each individual part (or unit) to make sure it is good quality but ignoring anything it connects to or requires.

Component testing would be broader: testing whether the table legs work with the leg brackets as leg assemblies, and whether the bolts work with the tabletop.

Finally, testing that the table fits together as a whole I would call system testing, and how it looks in a room or what it’s like to use would be end-to-end or user acceptance testing.

When I was developing a Minesweeper game I wrote unit tests for the smallest units (eg. cells) and then component tests for groupings of cells (fields) and system tests for the game itself (interacting with fields).

The reason to do component testing is that it’s more realistic than unit testing, so it’s more likely to find problems where units interact. The downsides are that it takes more time to execute and it can be harder to isolate problems when they occur.
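To make the distinction concrete, here’s a minimal sketch in Ruby. The class and method names here are hypothetical, not from my actual Minesweeper code: a unit test exercises a Cell in isolation, while a component test checks that a Field of cells behaves correctly together.

```ruby
# Hypothetical minimal Minesweeper units to illustrate the difference.
class Cell
  attr_reader :mine

  def initialize(mine: false)
    @mine = mine
    @revealed = false
  end

  def reveal
    @revealed = true
  end

  def revealed?
    @revealed
  end
end

class Field
  def initialize(cells)
    @cells = cells
  end

  # Revealing a mined cell loses the game.
  def reveal(index)
    @cells[index].reveal
    @lost = true if @cells[index].mine
  end

  def lost?
    !!@lost
  end
end

# Unit test: a Cell in isolation, ignoring anything it connects to.
cell = Cell.new(mine: true)
cell.reveal
raise 'unit test failed' unless cell.revealed? && cell.mine

# Component test: cells working together inside a Field.
field = Field.new([Cell.new, Cell.new(mine: true)])
field.reveal(0)
raise 'component test failed' if field.lost?
field.reveal(1)
raise 'component test failed' unless field.lost?
```

The unit tests stay fast and pinpoint failures to one class; the component tests catch the interaction bugs (like the Field forgetting to check `mine`) that unit tests can’t see.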

I hope this helps.

→ How Canaries Help Us Merge Good Pull Requests

I recently published an article on the WordPress.com Developer’s Blog about how we run automated canary tests on pull requests to give us confidence to release frequent changes without breaking things. Feel free to check it out.

AMA: Difference between explicit and fluent wait

Anonymous asks…

What is the difference between Explicit wait and Fluent wait?

My response…

I hadn’t heard of fluent waiting before, only explicit and implicit waiting.

From my post about Waiting in C# WebDriver:

Implicit Waiting

Implicit, or implied waiting involves setting a configuration timeout on the driver object where it will automatically wait up to this amount of time before throwing a NoSuchElementException.

The benefit of implicit waiting is that you don’t need to write code in multiple places to wait as it does it automatically for you.

The downsides to implicit waiting include unnecessary waiting when doing negative existence assertions and having tests that are slow to fail when a true failure occurs (opposite of ‘fail fast’).

Explicit Waiting

Explicit waiting involves putting explicit waiting code in your tests in areas where you know that it will take some time for an element to appear/disappear or change.

The most basic form of explicit waiting is putting a sleep statement in your WebDriver code. This should be avoided at all costs as it will always sleep and easily blow out test execution times.

WebDriver provides a WebDriverWait class which allows you to wait for an element in your code.
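Under the hood, an explicit wait is essentially a poll-until-timeout loop. Here’s a minimal pure-Ruby sketch of that idea (this is an illustration of the concept, not the actual WebDriverWait implementation, and the `browser.link(...)` usage in the comment is a hypothetical example):

```ruby
# A minimal sketch of how an explicit wait works: repeatedly evaluate a
# condition until it's truthy or a timeout elapses.
class ExplicitWait
  class TimeoutError < StandardError; end

  def initialize(timeout: 10, interval: 0.5)
    @timeout  = timeout
    @interval = interval
  end

  # Yield the block until it returns a truthy value, which is then
  # returned; raise TimeoutError once the deadline passes.
  def until
    deadline = Time.now + @timeout
    loop do
      result = yield
      return result if result
      raise TimeoutError, "timed out after #{@timeout}s" if Time.now > deadline
      sleep @interval
    end
  end
end

# Usage against a browser would look something like:
#   ExplicitWait.new(timeout: 10).until { browser.link(id: 'blah').present? }
```

This is why explicit waits fail fast when the condition is met early, unlike a hard-coded sleep which always waits the full duration.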

As for fluent waits, according to this page it’s a type of explicit wait with more limited conditions on it. I don’t believe WebDriverJs supports fluent waits.

AMA: Moving automated tests from Java to JavaScript

Anonymous asks…

I am currently using a BDD framework with Cucumber, Selenium and Java for automating a web application. I used page factory to store the objects and using them in java methods I wanted to replace the java piece of code with javaScript like mocha or webdriverio. could you share your thoughts on this? can I still use page factory to maintain objects and use them in js files

My response…

What’s the reasoning for moving to JavaScript from Java? Despite the similar names, the two have very little else in common (Java is to JavaScript as car is to carpet).

I wouldn’t move just for the sake of moving, since I see no benefit in writing BDD-style web tests in JavaScript. If anything, e2e automated tests are harder to write in JavaScript/Node because everything is asynchronous, so you have to deal with promises etc., which is harder than just using Java (or Ruby).

Aside: I still dream of writing e2e tests in Ruby: it’s just so pleasant. But our new user interface is written extensively in JavaScript (React) so it makes sense from a sustainability point of view to use JS over Ruby.