AMA: Separate Repository for e2e Tests?

Liam asks…

“I did enjoy reading the article about e2e test on wordpress. I noted that e2e test are in a separate repo.

My question will be what is the workflow to make sure new changes does not break the e2e test on pull request ?

For example, if a developer work on some changes, then they need to change the e2e test first and make sure everything pass, however the environment on the pull request might not be stable, developer can overwrite each other changes”

My response…

Thanks for your question Liam.

There are a couple of reasons for, and benefits to, having the WordPress.com e2e tests in a separate repository:

  1. The e2e tests cover the entire WordPress.com experience, so they test things that span different repositories (for example our Calypso user interface and our services/API), and keeping them in the user interface repository wouldn’t really represent the breadth of their scope;
  2. Making changes to the e2e tests is easier in a separate repository since we don’t have to deploy e2e PRs that don’t contain functional changes (we deploy every merge to our master branch immediately, dozens of times per day).

The obvious downsides are:

  1. How do we make sure e2e tests know about incoming AB tests?
  2. How do we couple new changes to updates in the e2e test repository?

For incoming AB tests we make sure our e2e tests know about the change by creating a matching PR in our e2e test repository that overrides those AB tests during our test runs.

If someone updates the AB tests in Calypso they’re politely reminded to update the e2e tests:

[Screenshot: example prompt]

For making sure the e2e tests are up to date, we automatically run two of the most critical e2e tests (of about 40 total, in three browsers) when a PR is ready to be reviewed. These can fail and indicate that a change is necessary to the e2e tests (or that something is broken!).

There’s also a label we can add to any PR which triggers the entire set of e2e tests against that PR running live and reports the results back to it:

[Screenshot: e2e test results against a Calypso PR]

If changes are required to the e2e tests, someone can create an e2e PR with the exact same branch name, which will be used when running against the feature changes before they are merged. This means PRs can be coupled and tested together but merged separately.
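Conceptually the branch-matching logic boils down to something like the following hypothetical Node sketch (this isn’t our actual CI tooling – the repository URL, environment variable and fallback branch are all assumptions):

// Decide which branch of the e2e test repository to run for a given feature branch
const { execSync } = require( 'child_process' );

function e2eBranchFor( featureBranch, e2eRepoUrl = 'https://github.com/your-org/your-e2e-tests.git' ) {
	// If the e2e repository has a branch with the exact same name, use it;
	// otherwise fall back to the default branch
	const matchingBranch = execSync( `git ls-remote --heads ${ e2eRepoUrl } ${ featureBranch }` ).toString().trim();
	return matchingBranch ? featureBranch : 'master';
}

console.log( e2eBranchFor( process.env.BRANCH_NAME || 'master' ) );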

To answer the second part of your question, which I understand to be about conflicting changes: one of our key philosophies for work is “merge early and merge often”, so we make sure that PRs are short-lived and merged quickly to minimize the chance of conflicts. These still happen occasionally; we just deal with them as they come up.

Whilst there have been some downsides to the separate repositories, all in all the benefits continue to outweigh them. We constantly reassess, and at any point we can easily merge the repositories if need be.

Three Years of Working From Home

This month marks my three year anniversary (Matticversary) at Automattic, which means I have been working solely from home for three years now.

Working from home every day can at times feel like the best thing in the world, the worst thing ever invented, or both. I personally don’t believe that full-time working from home is suitable for everyone, as I’ve found it orders of magnitude more difficult than working in an office environment.

[Cartoon by David Sipress]

It’s taken me almost three years to really work out what works well for me as a full-time “telecommuter” – and I thought I’d use my work anniversary to share some of these things.

Like anything, your mileage may vary, but these tips have definitely helped me over the years.

    1. Create a morning routine: since I don’t commute I’ve found it hard to mentally separate non-work time from work time in the morning, which is something a commute provides (without you realising it). Since I have a flexible schedule I make sure that I have a routine in the mornings. I am lucky enough to live right next to a beautiful forest so I go for a 4-5km walk each morning, which not only provides exercise and nature exposure benefits but symbolises to my brain that work begins at the end of my morning walk. This can sometimes be hard to justify to myself as some of my colleagues are finishing up around the same time as I am walking – and I may miss checking in with them – but I definitely need this morning routine.
    2. Dedicate a home office space: as a family of five we live in a small cottage so it’s hard to justify a dedicated home office space in our home, but I’ve found it vital for separating my home and work lives. When I’m in the office I am working – when I am not in the office I am not working. My three children know they can’t play in my office – if I take a break to spend time with them I come out of my work space. I read in The Organized Mind book that “students who studied for an exam in the room they later took it in did better than students who studied somewhere else”. I believe this is due to the mental association.
    3. Dedicate a computer just to work: again this seems luxurious but having a dedicated work computer, or at least a dedicated computer profile/account, creates a good association between that computer and work. From The Organized Mind book again: “Create different desktop patterns on them so that the visual cues help to remind you, and put you in the proper place-memory context, of each computer’s domain“.
    4. Disable app notifications on your phone: unless I’m at an in-person work meetup (which happens a few times a year) I disable app notifications on my phone for work apps (like Slack, WordPress and Gmail), which means I rarely check or open those apps outside of my work environment (see #2 and #3).
    5. Avoid checking work in bed: it’s very tempting to check on work things on my mobile phone in bed, particularly when you work for a global 24×7 365 company like I do, but this disrespects the importance of sleep in our lives. As Matthew Walker, one of the world’s leading sleep scientists says in Why We Sleep: “The silent sleep loss epidemic is the greatest public health challenge we face in the twenty-first century in developed nations. If we wish to avoid the suffocating noose of sleep neglect, the premature death it inflicts, and the sickening health it invites, a radical shift in our personal, cultural, professional and societal appreciation of sleep must occur.” You can use a morning ritual (see #1) to signify when you’ll start checking work things, instead of opening your eyes in bed and checking work immediately.
    6. Create an end of work day ritual: I am my most productive in the hours leading up to dinner time, so I use dinner time as my end of the work day ritual, when I transition from working to not-working. I also have a couple of apps on my phone that I like using, and I don’t let myself use them during the day, so opening them signifies the end of the work day when I’m not working. Like the play reward a working dog gets when it finishes working.
    7. Find social activities outside of work: working from home all the time can get very, very lonely and you can feel particularly isolated – especially working for a global company with flexible work hours, and having no team members close to your timezone. It’s important to find social activities outside of work; when working in an office a lot of this is a given – even something as simple as chatting to someone in the coffee room is something you won’t get working from home. I’ve tried attending Toastmasters, hiking groups and making sure I get along to meetups and user groups. I also recommend meeting up with friends and ex-colleagues for lunches – travelling to where they work. This gives you a good break from working from home and can reduce the feeling of isolation.
    8. Spend some time away from home on holiday: this is one of the most expensive parts of working from home in my opinion, and it is easy to blow all the cost savings of not commuting on the holidays away from home that you need so you don’t feel like you’re always at work. When I take time off and just stay at home (sometimes known as a “staycation”) I have found I don’t feel like I’ve really disconnected from work, as I’m still thinking about work while being at home (even though I have a dedicated workplace – see #2). I’d love to work out a way to solve this one without spending more and more money on holidays away from home.
    9. Have something to listen to: I listen to electronic dance music mixes on Mixcloud all day long – but your tastes may vary. This provides a soundtrack for working.
    10. Set yourself a daily goal: I work in a Results Only Work Environment where my employer doesn’t care so much about my hours as long as I achieve results. This is both good and bad (I could write an entire post on it) but you need to be disciplined: set yourself a daily goal and stick to it. Sometimes this means working longer than you planned; other times, if you achieve your daily goal earlier than expected, it’s important to finish earlier to make up for all those other times. Otherwise you can get caught in a never-ending work loop.
    11. Close loops: working in a distributed work environment I find I have lots of different things on the go at any one time – this is because I might be expecting a reply or some work from someone else on something and everyone works different hours and lives in various time-zones. I find it’s important to re-visit all these “open loops” and close them, just so your mind doesn’t think about them anymore. This is sometimes a case of leaving a comment like “I didn’t hear anything back on it so I’ll close it” to close the loop – and remove it from your focus and bandwidth.
    12. Find tasks that you can finish to completion outside of work: most of the things I work on have a medium-to-long event horizon – or an indefinite horizon as I await asynchronous feedback – so hobbies that I can finish completely in one attempt (gardening, cooking a meal, washing the car etc.) are very important for my mental well-being, as I get a good sense of accomplishment from these tasks and hobbies. See #11.
    13. Getting things done creates motivation to do more things: when I lack motivation to start something I find the smallest thing that I can get done and complete that. The sense of accomplishment of getting something done motivates me to do more.
    14. Over-communicate: I can’t emphasise this enough – no one will see you in the corner of an office getting stuck on a problem – you need to constantly communicate everything you’re doing – good and bad. Communicating context creates empathy in a distributed team.
    15. Set expectations for family and friends: some people assume you’re not really “working” if you work from home. Setting aside a work space (#2) helps – but you’ll need to continually explain to people that even though you are at home you are actually at work.
    16. Don’t be afraid to ask ‘stupid’ questions: in a work from home environment there’s bound to be other people thinking the exact same thing as you – so ask away!
    17. Don’t be afraid to be a bit weird: “Today you are you, that is truer than true. There is no one alive who is youer than you” (Dr Seuss)

 

That about covers it for now. As I said, working from home can be very rewarding, but it’s one of the hardest challenges I’ve ever faced, and after three years I’m still working at it!

AMA: Iterative vs Incremental Development

Mario asks:

I have a question to ask your post on iterative vs. Incremental Software Development:

Iterative vs Incremental Software Development

In the incremental approach, the few features implemented in all of their requirements can be changed after user feedback? Or, does this only happen with the iterative approach?

My response:

Thanks for your question Mario. This can, and should, happen with both approaches, but I’d say the incremental approach is actually more likely to get customer/user feedback as it’s a more polished, albeit smaller, user experience, and therefore more likely to land in front of users. The painting analogy isn’t the best as the requirements and level of ‘done’ are pretty clear, but the general rule is to seek feedback as soon as possible, and both approaches are designed to do just that.

Avoiding LGTM PR Cultures

Introduction

Making a code change when using a distributed version control system (DVCS) like Git is usually done by packaging a change on a branch as a “pull request” (PR) which indicates the author would like the project to “pull” the change into it.

This was, and is, a key part of open source projects as it allows outside contributors to contribute to a project in a controlled way. Many internal software development teams also work in this fashion, as there are many benefits to this approach over committing directly to a shared branch or trunk.

I’ve seen the pull request approach have a positive impact on software quality since pull requests facilitate discussion through peer reviews and allow running of automated tests against every commit and change that is proposed to be merged into the default branch.


What is a LGTM PR culture?

I’ve also seen some negative behaviours emerge when moving to pull request based development which I’ll call a LGTM PR culture.

LGTM is a common acronym found in peer reviews of pull requests which means “Looks Good To Me”, and I’ve seen teams let unsuitable changes through with LGTM comments without doing solid peer reviews and testing.

How do you know if you have a LGTM PR culture?

One way to “test” your peer review process is by creating PRs and leaving a subtle bug or something not quite right that you know about in the PR. When it gets reviewed do you get a LGTM? I did this recently and whilst the PR didn’t even do what it was meant to do I received a LGTM 😕


How can you move away from a LGTM PR culture?

It’s tempting to just tell everyone to do better peer reviews but it’s not that easy!

I’ve found there are some steps the author of a pull request can take to facilitate better pull request reviews and move towards a better culture.

1. Make pull requests as small as possible:

The smaller the pull request, the more likely you’ll get specific and broad feedback on it – and you can then iterate on that feedback. A 500-line change is daunting to review and will lead to more LGTMs. For larger refactorings where lots of lines will change, you can start with a small, focussed change and get lots of review and discussion; once the refactoring is established with that smaller example you can apply the feedback to a broader-impact PR, which won’t need as much review since you’ve already established the new pattern.

2. Review your own pull request

Review your own work. This works best if you do something else and then come back to it with a fresh mind. If anything looks wrong or you’re unsure about it, leave a comment on your own PR to encourage other reviewers to look closely at those areas too.

3. Include clear instructions for reviewing and testing your pull request

A list of test steps is good as well as asking for what type of feedback you’d like – you can explicitly ask reviewers something like “please leave a comment after your review listing what you tested and what areas of the code you reviewed.” This discourages shortcuts and LGTMs.

4. Test your peer review process – see above.

Conclusion

Developing software using pull requests can mean much higher quality code and less technical debt due to the feedback from the peer reviews that accompany pull requests. As an author you can take steps to ensure pull requests are easy to review and encourage a culture of effective peer reviews.

npm ci

I recently discovered npm ci, which you can use instead of npm install when you’re running a Node.js project on a continuous integration (CI) system and want to install your npm dependencies. It does this in a more lightweight, more CI-friendly way.

If you use npm test to run your tests, this can be shortened to npm t (much like npm i is npm install), and therefore you can run npm cit to install dependencies and run tests in CI.

Running Mocha e2e Tests in Parallel

I recently highlighted the importance of e2e test design. Once you have well designed e2e tests you can start running them in parallel.

There are a couple of approaches to scaling your tests out to be run in parallel:

  1. Running the tests in multiple machine processes;
  2. Running the tests across multiple (virtual) machines;

These aren’t mutually exclusive: you can run tests in parallel processes across multiple virtual machines. We do this at Automattic – each test run happens across two virtual machines (Docker containers on CircleCI), each of which runs six processes for either desktop or mobile responsive browser sizes, depending on the container.

I have found running tests in multiple processes gives the best bang for buck, since you don’t need additional machines (most build systems charge based on container utilisation) and you’ll also benefit from parallel runs on a local machine.

We write our e2e tests in WebDriverJs and use Mocha for our test framework. We currently use a tool called Magellan to run our e2e tests in separate processes (workers), however the tool is losing Mocha support and therefore we need to look at alternatives.

I have found that mocha-parallel-tests seems like the best replacement – it’s a drop-in replacement runner for Mocha tests which splits test specification files across the processes available on the machine you’re executing your tests on. You can also specify a maximum number of processes with the command line argument --max-parallel.

There is another parallel test runner for mocha: mocha.parallel – but this requires updating all your specs to use a different API to allow the parallelisation to work. I like mocha-parallel-tests as it just works.

I’ve updated my webdriver-js-demo project to use mocha-parallel-tests – feel free to check it out here.

Running e2e Tests in Parallel

One of the best ways to speed up your end-to-end (e2e) tests is to start running them in parallel.

The main issue I see that prevents teams from fully using parallelism for their e2e tests is lack of test design. Without adequately designed e2e tests – which have been designed to be run in parallel – parallelism can introduce non-deterministic and inconsistent test results – leading to frustration and low-confidence in the e2e tests.

This is often the case when teams go about directly converting manual test cases into automated e2e tests – instead of approaching e2e test automation with a specific end-to-end design focus.

Say you had a manual test case for inviting someone to view your WordPress blog:

  1. Enter the email address of the person you’d like to follow your site
  2. Check the list shows a pending invite
  3. Check your email inbox shows a pending invite
  4. Open the email, follow the link and sign up for a new account

When you’re manually testing this in sequence it’s easy to follow – but as soon as you start running this in parallel, across different builds, and alongside other tests, things will most likely start failing.

Why? The test isn’t specific enough – you may have multiple pending invites, so how do you know which one is which? You can only invite someone once – so how do you generate new invite emails? You may receive multiple emails at any one time – which one is which? And more.

With appropriate e2e test design you can write the e2e test to be consistent when run regardless of parallelism:

  1. Email addresses are uniquely generated for each test run using a GUID and either a test email API like Mailosaur or Gmail plus addressing (see the sketch after this list); and
  2. The pending email list has a data attribute on each invite clearly identifying which email the invite is for and this is used to verify pending email status; and
  3. The inbox is filtered by the expected GUID, and only those emails are used. Etc.
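As a rough sketch of the first point (not our actual helper – the inbox name is a placeholder, and a random hex string stands in for the GUID):

const crypto = require( 'crypto' );

function newInviteeEmailAddress( inbox = 'mytestinbox' ) {
	// Gmail plus addressing: every address still delivers to the one inbox,
	// but each invite can be uniquely identified by its generated id
	const guid = crypto.randomBytes( 8 ).toString( 'hex' );
	return { guid, emailAddress: `${ inbox }+${ guid }@gmail.com` };
}

// The same guid is then used to find the matching pending invite in the list
// and to filter the inbox for the matching email
const { guid, emailAddress } = newInviteeEmailAddress();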

Once you have good e2e test design in place you’re able to look at how to speed up e2e test execution using parallelism. I’ll cover how to do this in my next blog post.

Bailing with Mocha e2e Tests

At Automattic we use Mocha to write our end-to-end (e2e) automated tests in JavaScript/Node.js. One issue with Mocha is that it’s not really a tool suited to writing e2e tests where one test step can rely on a previous test step – for example our sign up process is a series of pages/steps which rely on the previous step passing. Mocha is primarily a unit testing tool and it’s bad practice for one unit test to depend on another, so that is why Mocha doesn’t support this.

A more simplified example of this is shown in my webdriver-js-demo project:

// Imports and shared state assumed from the demo project (the page object path and the timeout value are illustrative):
import assert from 'assert';
import webdriver from 'selenium-webdriver';
import RalphSaysPage from '../lib/pages/ralph-says-page';

const mochaTimeoutMS = 30000;
let driver, page;

describe( 'Ralph Says', function() {
	this.timeout( mochaTimeoutMS );

	before( async function() {
		const builder = new webdriver.Builder().withCapabilities( webdriver.Capabilities.chrome() );
		driver = await builder.build();
	} );

	it( 'Visit the page', async function() {
		page = await RalphSaysPage.Visit( driver );
	} );

	it( 'shows a quote container', async function() {
		assert( await page.quoteContainerPresent(), 'Quote container not displayed' );
	} );

	it( 'shows a non-empty quote', async function() {
		assert.notEqual( await page.quoteTextDisplayed(), '', 'Quote is empty' );
	} );

	afterEach( async function() { await driver.manage().deleteAllCookies(); } );

	after( async function() { await driver.quit(); } );
} );

Continue reading “Bailing with Mocha e2e Tests”

Using async/await with WebDriverJs

We’ve been using WebDriverJs for a number of years, along with the control flow promise manager it offers, which makes writing WebDriverJs commands in a synchronous, blocking way a bit easier, particularly when dealing with promises.

The problem with the promise manager is that its magic is hard to understand: sometimes it just works, and other times it is very confusing and not very predictable. It is also harder for the Selenium project to develop and support, so it’s being deprecated later this year.

Fortunately, recent versions of Node.js support asynchronous functions and the await keyword, which makes writing WebDriverJs tests so much easier and more understandable.

I’ve recently updated my WebDriverJs demo project to use async/await so I’ll use that project as examples to explain what is involved.

WebDriverJs would allow you to write consecutive statements like this without worrying about waiting for each statement to finish – note the use of test.it instead of the usual mocha it function:

test.it( 'can wait for an element to appear', function() {
	const page = new WebDriverJsDemoPage( driver, true );
	page.waitForChildElementToAppear();
	page.childElementPresent().then( ( present ) => {
		assert( present, 'The child element is not present' );
	} );
} );

When you were waiting on the return value from a promise you could use a .then function to wait for the value as shown above.

This is quite a simple example, and it could get complicated pretty quickly.

Since the promise manager is being removed, we need to update our tests so they continue to execute in the correct order. We can make the test function asynchronous by adding the async prefix, remove the test. prefix on the it block, and add await statements every time we expect a statement to finish before continuing:

it( 'can wait for an element to appear', async function() {
	const page = new WebDriverJsDemoPage( driver, true );
	await page.waitForChildElementToAppear();
	assert( await page.childElementPresent(), 'The child element is not present' );
} );

I personally find this much easier to read and understand – less ‘magic’ – but the one bit that stands out is visiting the page and creating the new page object. The code in the constructor for this page, and other pages, is asynchronous as well; however, we can’t have an async constructor!

export default class BasePage {
	constructor( driver, expectedElementSelector, visit = false, url = null ) {
		this.explicitWaitMS = config.get( 'explicitWaitMS' );
		this.driver = driver;
		this.expectedElementSelector = expectedElementSelector;
		this.url = url;

		if ( visit ) this.driver.get( this.url );

		this.driver.wait( until.elementLocated( this.expectedElementSelector ), this.explicitWaitMS );
	}
}

We can get around this by defining a static async function that acts as a constructor and returns our new page object for us.

So, our BasePage now looks like:

// Imports assumed from the demo project:
import config from 'config';
import { until } from 'selenium-webdriver';

export default class BasePage {
	constructor( driver, expectedElementSelector, url = null ) {
		this.explicitWaitMS = config.get( 'explicitWaitMS' );
		this.driver = driver;
		this.expectedElementSelector = expectedElementSelector;
		this.url = url;
	}

	static async Expect( driver ) {
		const page = new this( driver );
		await page.driver.wait( until.elementLocated( page.expectedElementSelector ), page.explicitWaitMS );
		return page;
	}

	static async Visit( driver, url ) {
		const page = new this( driver, url );
		if ( ! page.url ) {
			throw new Error( `URL is required to visit the ${ page.name }` );
		}
		await page.driver.get( page.url );
		await page.driver.wait( until.elementLocated( page.expectedElementSelector ), page.explicitWaitMS );
		return page;
	}
}

In our Expect and Visit functions we call new this( driver ), which creates an instance of the child class – exactly what we need. So our spec now looks like:

it( 'can wait for an element to appear', async function() {
	const page = await WebDriverJsDemoPage.Visit( driver );
	await page.waitForChildElementToAppear();
	assert( await page.childElementPresent(), 'The child element is not present' );
} );

which means we can await visiting and creating our page objects and we don’t have any asynchronous code in our constructors for our pages. Nice.
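For illustration, a child page class extending this BasePage might look something like the sketch below (the CSS selector and default URL are assumptions, not the demo project’s actual values):

import { By } from 'selenium-webdriver';
import BasePage from './base-page';

export default class WebDriverJsDemoPage extends BasePage {
	constructor( driver, url = 'https://example.com/' ) {
		// The child class supplies its own expected element selector and a default URL,
		// so both Expect( driver ) and Visit( driver ) can call new this( ... ) and
		// end up with a fully configured page object
		super( driver, By.css( '.page' ), url );
	}
}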

Once we’re ready to stop using the promise manager, we can set the SELENIUM_PROMISE_MANAGER environment variable to 0 and WebDriverJs won’t use it any more.
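The usual way is to export SELENIUM_PROMISE_MANAGER=0 in the environment that runs the tests; setting it from Node before any WebDriver commands are scheduled should have the same effect, for example:

// e.g. at the very top of your test entry point (a sketch – your setup file will differ)
process.env.SELENIUM_PROMISE_MANAGER = '0';

const webdriver = require( 'selenium-webdriver' );
// ...build your driver and write your specs with async/await as shown above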

Summary

The promise manager is being removed in WebDriverJs but using await in async functions is a much nicer solution anyway, so now is the time to make the move, what are you awaiting for? 😊

Check out the full demo code at https://github.com/alisterscott/webdriver-js-demo

→ The Rise of the Software Verifier


I found this article rather interesting. I’m still not sure if some of it is satire, forgive me if I misinterpreted it.

“DevOps has become so sophisticated that there is little fear of bugs. DevOps teams can now deploy in increments, monitor logs for misbehavior, and push a new version with fixes so fast that only a few users are ever affected. Modern software development has squeezed the testers out of testing.

Features are more important than quality when teams are moving fast. Frankly, when a modern tester finds a crashing bug with strange, goofy, or non-sensical input, the development team often just groans and sets the priority of the bug to the level at which it will never actually get fixed. The art of testing and finding obscure bugs just isn’t appreciated anymore. As a result, testers today spend 80% of their time verifying basic software features, and only 20% of their time trying to break the software.”

The author doesn’t say where the 80:20 figures came from, but the testers I have worked with for the last five years have spent zero time on manual regression testing verification and most of their time actually testing the software we were developing. How did we achieve this? Not by splitting our team into testers and verifiers as the author suggests:

What to do about all this? The fix is a pretty obvious one. Software Verification is important. Software Testing is important. But, they are very different jobs. We should just call things what they are, and split the field in two. Software testers who spend their day trying to break large pieces of important software, and software verifiers, who spend their time making sure apps behave as expected day-to-day should be recognized for what they are actually doing. The world needs to see the rise of the “Software Verifier”.

We did this by focussing on automating enough tests that we were confident releasing our software frequently wouldn’t introduce major regressions. This wasn’t 100% test coverage, it was just enough coverage to avoid human verification. We obviously spent effort maintaining these tests, but that’s a whole-team effort, and it freed up the rest of our time to spend testing the software and looking for real-life bugs using human techniques.

Another thing I noted about the article was the use of the graph to show decreasing interest in software testing:

[Google Trends graph from the article, captioned: “But even their interest in Software Testing fading fast…”]

This also applies to software in general, perhaps even more dramatically:

[Graph: search interest in “software” over time]
I don’t think there’s a decreasing interest in software testing, or software, but rather these have become more commonplace and more commoditised, so people need to search for these less.

Executing JS in IE11 using WebDriverJs

We write our e2e tests in JavaScript running on Node.js which allows us to use newer JavaScript/ECMAScript features like template literals.

We have a subset of our e2e tests – mainly signing up as a new customer – which we run a few times a day against Internet Explorer 11: our lowest supported IE version.

I recently added a function that sets a cookie to set the currency for a customer:

setCurrencyForPayments( currency ) {
  const setCookieCode = function( currencyValue ) {
    window.document.cookie = `landingpage_currency=${ currencyValue };domain=.wordpress.com`;
  };
  return this.driver.executeScript( setCookieCode, currency );
}

This code works perfectly when executing against Chrome or Firefox, but when it came to executing against IE11 I would see the following (rather unhelpful) error:

Uncaught JavascriptError: JavaScript error (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 69 milliseconds

I couldn’t work out what was causing this so I decided to take a break. On my break I realised that WebDriverJs is trying to execute a new JavaScript feature (template literals) against an older browser that doesn’t support it! Eureka!

So all I had to do was update our code to:

setCurrencyForPayments( currency ) {
  const setCookieCode = function( currencyValue ) {
    window.document.cookie = 'landingpage_currency=' + currencyValue + ';domain=.wordpress.com';
  };
  return this.driver.executeScript( setCookieCode, currency );
}

and all our tests were happy against IE11 again 😊

Having not found a lot about this error or the cause I am hoping this blog post can help someone else out if they encounter this issue also.

Creating a skills-matrix for t-shaped testers

I believe the expression “jack of all trades, master of none” is a misnomer, as I’ve mentioned previously. Being good at two or more complementary skills is better than being excellent at just one, in my opinion.

But what about being excellent at one skill, and still being good at two or more? Why can’t we be both?

Jason Yip describes a T-shaped person and the benefits that having t-shaped people on teams brings:

A T-shaped person is capable in many things and expert in, at least, one.
As opposed to an expert in one thing (I-shaped) or a “jack of all trades, master of none” generalist, a “t-shaped person” is an expert in at least one thing but also somewhat capable in many other things. An alternate phrase for “t-shaped” is “generalizing specialist”.

[Image by Jason Yip]

Ideally we’d like to have a team of t-shaped testers in Flow Patrol at Automattic. But how do we get to this end goal?

I recently embarked on an exercise to measure and benchmark our skills and do just this with our team. Here are the steps we took.

Step One – Devise Desired Team Skills

The first thing we did was come up with a list of skills that we have in the team and would like to have in the team. These can be ‘hard’ skills like specific programming languages and ‘soft’ skills like triaging bugs. In a standard co-located team this would be as easy as conducting a brainstorming session and using affinity grouping to discover these skills. In a distributed environment I wrote a blog post to my team’s channel and had individual members comment with a list of skills they thought appropriate, and then I did the grouping and came up with a draft list of skills and groups.

Step Two – Self-assess against a team skills matrix

Once I had a final list of skills and groups (see below for full list), I put together a matrix (in a Google Spreadsheet) that listed team members on the x-axis, and the skills on the y-axis, and came up with a skill level rating. Our internal systems use a three level scale (Newbie, Comfortable, Expert) which we didn’t think was broad enough so we decided upon five levels:

1. Limited
2. Basic
3. Good
4. Strong
5. Expert

 

[Screenshot: Team Skills Matrix]

I hadn’t seen Jason Yip’s visual representation at that point in time, otherwise I might have used something like it, as it has five similar levels:

[Image by Jason Yip]

Step Three – Publish results and cross-skill

Once we had the self assessments done we could then publish the data within our organisation and use the benchmark to cross-skill people in the team. In a co-located environment this could involve pair programming, in a distributed one it could involve mentoring and reviewing other team member’s work.

Have you done a skills matrix for your team? How did you do it? What did you discover?


Full List of Skills and Skill Groups for Flow Patrol at Automattic

Automattic Product Knowledge
WordPress Core
WordPress.com Simple Sites
WordPress.com Atomic Sites
Jetpack
Woocommerce
Simplenote
Mobile Apps
Human Software Testing
Flow Mapping
Bug Triage & Prioritization
Exploratory Testing (pre-release)
Dogfooding
Cross-browser Cross-device Testing
Facilitating Beta/Community Testing
Facilitating User Testing
Usability Testing
Accessibility Testing
Automated Testing
Automated End-to-end Browser Testing
Automated API/Integration Testing
Automated Unit Testing
Automated Visual Regression Testing
Android Automated Testing
iOS Automated Testing
Programming Languages
JavaScript
PHP
Shell Scripting
Objective C
Swift
Android/Kotlin
Testing Tools/Frameworks
Mocha
WebDriverJS
Git/Github
CircleCI
TravisCI
Team City (CI)
Mailosaur
Applitools
VIP Go
Docker
Other
i18n Testing
Performance Testing
Security Testing
User advocacy – empathy and compassion
Mentoring/onboarding
Project Management
Product Management
Product Development 
Calypso
Jetpack
WP.com API PHP
Woocommerce
iOS App
Android App

 

Testbash Australia 2018

I only speak at one conference a year and this year that conference will be the first ever Australian Testbash in Sydney on October 19, 2018:


My talk:

At WordPress.com we constantly deliver changes to our millions of customers – in the past month alone we released our React web client 563 times; over 18 releases every day. We don’t conduct any manual regression testing, and we only employ 5 software testers in a company of ~680 people with ~230 developers across . So, how do we ensure our customers get a consistently great user experience with all this rapid change?

Our automated end-to-end (e2e) tests give us confidence to release small frequent changes constantly to all our different applications and platforms knowing that our customers can continue to use our products to achieve their desires.

Alister will share how these automated tests are written and work, some of the benefits and challenges we’ve seen in implementing these tests, and what we have planned for future iterations.

Takeaways: How to effectively use automated end-to-end testing to ensure a consistent user experience in high frequency delivery environments

Grab a ticket before they sell out.

AMA: JavaScript in Test Automation

Max asks…

First of all thanks for sharing all your insights on this blog. I regularly try to come back to your blog and it helped me grow as a QA Engineer quite a lot.
I wanted to ask you on your opinion on a seemingly new generation of test tools for the web that are written in JavaScript to help deal with all the asynchronicity on modern front end frameworks. Our company is in the process of redesigning our website and it seems that it just gets harder and harder to deal with all the JavaScript in test automation. I recently started looking into some other tool like cypress.io, but as of now they still seem quite immature.
Could you help me and point me in the right direction?

My response…

I think there are two parts to the JavaScript asynchronicity issue, which are not necessarily related.

Modern JavaScript front-end web frameworks (like React and Angular) are designed to be asynchronous and fast, and this can cause issues when trying to write automated tests against them. I don’t believe writing automated tests in an asynchronous way actually makes the tests easier to write, or more maintainable or robust – it’s just a result of writing these tests in the same way the web frameworks work.

You can write synchronous tests (for example, using Watir/Ruby) against asynchronous web interfaces; you just need to use waiting/polling mechanisms (or write/extend your own) – the same as you need to do with asynchronous test automation tools.
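For example, an explicit wait in WebDriverJs looks something like this minimal sketch (the selector and timeout are placeholders); the same polling idea applies when using a synchronous tool like Watir:

const { By, until } = require( 'selenium-webdriver' );

async function waitForSearchResults( driver, timeoutMS = 20000 ) {
	// Poll until the element appears (or the timeout is reached) rather than
	// assuming the asynchronous front end has finished rendering
	await driver.wait( until.elementLocated( By.css( '.search-results' ) ), timeoutMS );
	return driver.findElement( By.css( '.search-results' ) );
}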

We chose WebDriverJs for automated end-to-end testing of our React application as it was the official WebDriver JavaScript project and seemed to be a good choice at the time. I somewhat regret that decision now, as using a synchronous third-party implementation like webdriver.io seems like an easier way to write and maintain tests.

I have tried to use cypress.io, but the way it controls sites (through proxies) has (current) limitations, like not working with iframes or across domains, which are deal-breakers for our end-to-end testing needs at present.

If you don’t need to write your end-to-end tests in JavaScript I’d say avoid it unless absolutely necessary and stick to another non-asynchronous programming language.

I’m glad I’ve been able to help you grow over the years.

AMA: hiding/changing IP addresses using Watir

Mudit Bhardwaj asks…

Hey! In order to test the security of a website, I’m trying to create hundreds of accounts. However, after a certain limit, there is always an error which prevents me from going further.
How can I hide/change my ip address during each iteration with watir?

My response…

I can understand why there would be this limitation in place as this activity seems suspicious from a systems perspective.

Watir can’t change the IP address. You can use an anonymous browser profile and/or delete cookies for different account creation runs but this probably won’t help.

I’d speak to your development/devops/systems team about whitelisting the IP address(es) you are using for this purpose.

Puppeteer for Automated e2e Chrome Testing in Node

I recently noticed the new Google Chrome project Puppeteer:

Puppeteer is a Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. It can also be configured to use full (non-headless) Chrome or Chromium.

As someone who only runs WebDriver tests in Google Chrome anyway, this looks like a promising project that bypasses WebDriver to have full programmatic control of Google Chrome including for automated end-to-end (e2e) tests.

The things I really love about this are that there’s no ChromeDriver dependency, and that installing the library installs Chromium by default, which can be controlled headlessly with zero config and no other dependencies.

You can even develop scripts using this playground.

I set up a (very basic) demo project that uses Mocha + Puppeteer and it runs on CircleCI with zero config. Awesome.
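For reference, a Mocha + Puppeteer spec can be as small as the following sketch (the URL and assertion are placeholders rather than the demo project’s actual test):

const assert = require( 'assert' );
const puppeteer = require( 'puppeteer' );

describe( 'Example page', function() {
	this.timeout( 30000 );
	let browser, page;

	before( async function() {
		// Installing puppeteer installs Chromium, so this launches headlessly with zero config
		browser = await puppeteer.launch();
		page = await browser.newPage();
	} );

	it( 'loads and has a non-empty title', async function() {
		await page.goto( 'https://example.com' );
		assert.notEqual( await page.title(), '', 'Page title is empty' );
	} );

	after( async function() {
		await browser.close();
	} );
} );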

Five Books I Enjoyed in 2017

Happy New Year! 🎉 One thing I love about the end of the year is going through all the lists I have made during the year and counting up my stats and reflecting on things. I read (to completion) 37 books (and abandoned a further 4) in 2017. Not quite as many as 2016 when I finished 48 books but I had some difficult circumstances with family illness in 2017.

Here’s five of my favourite books I read in 2017 (with Good Reads links):

The Art of Stillness: Adventures in Going Nowhere by Pico Iyer. This was the first TED book I read, and having read quite a few more since, it was probably my favourite. I read this at a point in the year when I was over-committing to too many things (travel, conferences, events) and it was an apt reminder to slow down and appreciate things. This book led me to my mindfulness meditation practice, which I took up in the later part of the year (and have continued since then).

How to Fail at Almost Everything and Still Win Big: Kind of the Story of My Life by Scott Adams. I’m not a huge fan of Dilbert, but I’m a big Scott Adams fan after reading this book. The ideas I really like are the importance of systems over goals (a good blog post about that here), and that passion follows success (passion doesn’t lead to success: you can be passionate about something and really suck at it). Also the importance of diversification. An enjoyable well written read.

Payoff: The Hidden Logic That Shapes Our Motivations by Dan Ariely. The second TED book I read this year (I love the format!) and the best book about motivation I’ve read so far. I’ve tried to read Drive by Dan Pink a few times but haven’t been motivated to finish it. The thing I learnt from this book is that we lose meaning and motivation in life by outsourcing the hard work/small things (like cleaning, gardening, maintenance) – we accomplish more but get less.

The Subtle Art of Not Giving a F*ck: A Counterintuitive Approach to Living a Good Life by Mark Manson. This book is a bit weird. It’s hard to read, because it jumps all over the place, but there’s so much good content it’s still worth reading. I also wish the book had a really clear list of things I could take away and do, rather than just its general message, which is a bit bleak. Still a good book and an enjoyable read after all that.

Deep Work: Rules for Focused Success in a Distracted World by Cal Newport. One of the last books I read this year, and particularly relevant to me: working for a 100% distributed company in an asynchronous communication environment, it’s easy to get distracted by constant chatter and noise and not focus enough on deep work. The point of this book is that if you can master deep work in our current world, it’s a skill you can use to be very successful, as it’s a huge competitive advantage.

*** BONUS ***

Not work related, but I read a lot of fiction, particularly thrillers. Michael Connelly’s latest Harry Bosch book Two Kinds of Truth was just fantastic: hard to describe how good it was.

 

 

 

AMA: Clicking a non-visible element in Watir

Mike asks…

Is there any way to make Watir click a link/button that is not visible? I wanted to switch from Capybara/CapybaraWebKit (which allows clicking non-visible elements) but I am stuck since Watir always times out on the click attempt.

My response…

The easiest way to do this is just to execute a click in JavaScript on the object itself.

In Watir, this looks something like:

browser.execute_script( "return arguments[0].click();", browser.link(:id => 'blah') )

Hope this helps!

AMA: Misc

Amruth asks…

Hi Scott, I am Amruth and I am from INDIA. I would like to automate my project by using Watir with Ruby. Plz brief me about Watir and Watir-Webdriver.

My response…

I suggest you visit http://watir.com/guides/ and follow the guides. Have fun.


Victor Hugo dos Santos asks…

How we can run our tests parallel when we use Cucumber-JVM?

My response…

I haven’t done this personally, but these instructions look helpful.


Viktoriya Musiy asks…

Just to add: I’ve tried your instruction https://watirmelon.blog/2010/12/09/how-to-set-up-cucumber-and-watir-on-osx/. Unfortunately it does not work on my macbook. I get the error message: You don’t have write permissions for the /usr/bin directory. Viktoriyas-MBP:~ viktoriyamusiy$ sudo gem install rspec –no-rdoc –no-ri Password: ERROR: While executing gem … (Gem::FilePermissionError) You don’t have write permissions for the /usr/bin directory. Viktoriyas-MBP:~ viktoriyamusiy$

My response..

That post is over seven years old so I am not surprised it no longer works. Please try watir.com for up to date instructions.


Niopvief asks…

Food of fried tomatoes like absolutely everything. Guests usually require recipe are surprised mastery hostess. Even guys with pleasure eat fried tomato in breadcrumbs. Tomatoes very good combined are combined with many products. Garlic adds sharpness. Cheese brings ruddy crust. Italian herbs turn roasted tomatoes into a dish from a restaurant. Omelette with tomatoes – easy and nutritious dish for breakfast. Make including sandwiches with fried tomatoes. Cooking tomatoes simply. For cooking use butter. In olive oil – fewer calories. It is also combined mixed with fried cubes tomatoes.

My response…

I am not a big tomato fan but my wife likes them. Thanks for the tips – Merry Christmas

AMA: automated unit vs component tests

Jason asks…

Hi Alister, Love your blog and the content. I have matured my knowledge in test automation, and without even meaning to, created a very similar test automation pyramid you derived. From it, though, i have a difficult time when trying to educate the development team the nuances between their unit level automated tests and component automated tests. How would you go about differentiating between the two? Thanks for your time! – JH

My response…

My understanding is that unit and component testing are similar but differ in their focus. For example, say I was building a table, it would consist of many parts or units:

1 x tabletop
4 x leg brackets
4 x table legs
8 x bolts

Unit testing would be testing each individual part (or unit) to make sure it is of good quality, while ignoring anything it connects to or requires.

Component testing would be broader: testing whether the table legs work with the leg brackets as leg assemblies, and whether the bolts work with the tabletop.

Finally, testing that the table fits together as a whole I would call system testing; how it looks in a room or what it’s like to use would be end-to-end or user acceptance testing.

When I was developing a Minesweeper game I wrote unit tests for the smallest units (eg. cells) and then component tests for groupings of cells (fields) and system tests for the game itself (interacting with fields).
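A rough sketch of that difference in focus (not the actual Minesweeper code – the class names and APIs here are assumptions) might look like:

const assert = require( 'assert' );
const Cell = require( './cell' );
const Field = require( './field' );

// Unit test: a single cell in isolation, ignoring anything it connects to
describe( 'Cell (unit)', function() {
	it( 'starts unrevealed', function() {
		assert.equal( new Cell().isRevealed(), false );
	} );
} );

// Component test: cells working together as a field
describe( 'Field (component)', function() {
	it( 'revealing a cell with no neighbouring mines reveals its neighbours', function() {
		const field = new Field( { rows: 3, columns: 3, mines: 0 } );
		field.reveal( 0, 0 );
		assert( field.cellAt( 1, 1 ).isRevealed(), 'Neighbouring cell was not revealed' );
	} );
} );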

The reason to do component testing is that it’s more realistic than unit testing, so it’s likely to find problems where units interact. The downsides are that it takes more time to execute and it can be harder to isolate problems when they occur.

I hope this helps.