GitHub & Bitbucket

We use Git on Bitbucket in my current role, and I didn’t realise how much I liked using GitHub until I started using Bitbucket on a regular basis to commit and test code changes.

The biggest difference is how these systems handle squashing commits into the master branch.

With Bitbucket you can do the usual approach of multiple commits on a branch/pull request:

When you go to merge this to master, you can choose squash commits:

which is a nice way to keep a cleaner commit history on the master branch:

However if you look at the branch/PR now that it is merged you will notice you’ve lost all commit history! 😿

This has been super frustrating when we’re trying to diagnose where an issue was introduced during the development of a change.

Comparing this same workflow to GitHub, you can see individual commits against a branch, and squash these into master:

After merging you can still see the full commit history on the PR and branch:

and it is squashed on the master commit history:

Has anyone else noticed this with Bitbucket? Any known workarounds to keep commit history on branches/PRs?

Now, Next, Later, Never (improving MoSCoW)

Our team sets quarterly objectives, which we break down into requirements spread across fortnightly sprints. As the paradev on my team I work closely with our product owner to write and prioritise these requirements.

We originally started using the MoSCoW method to prioritize our requirements:

The term MoSCoW itself is an acronym derived from the first letter of each of four prioritization categories (Must have, Should have, Could have, and Won’t have).

Wikipedia

We quickly noticed that the terminology (Must have, Should have, Could have, Won’t have) didn’t really work well in our context or with how we were thinking, which caused a lot of friction in how we were prioritizing and hurt adoption by the team. It didn’t feel natural to classify things as must, should, could or won’t, as these didn’t directly translate into what we should be working on.

Over a few sessions we came up with our own groupings for our requirements based upon when we would like to see them in our product: Now, Next, Later, Never. We’ve continued to use these four terms and we’ve found they have been very well adopted by the team as it’s very natural for us to think in these groupings.

The biggest benefit of using Now, Next, Later, Never is they naturally translate into our product roadmap and sprint planning.

I did some research while writing this post and found Now, Next, Later as a thing from ThoughtWorks back in 2012, but I couldn’t find any links that also included the Never grouping, which we’ve found very useful for calling out what we agree we won’t be doing.

How do you go about prioritizing your requirements?

Questions with a Consolee

When I worked at ThoughtWorks they had this thing called The Interview where every so often Ryan Boucher would publish an interview with a co-worker. The thing I really liked about it was that the person talking was only revealed at the very end.

I’ve stolen the idea and implemented it with modified questions and a portrait photo in my current workplace: Console.


How do you manage to stay on top of things?

Principally, I:

1. I write out lots and lots and lots of lists so that my mind is free to begin thinking clearly.

2. I accept that some days I’ll produce perfect work right out of the gate, and others I’ll be stuck polishing turds. Know what kind of day it is and roll with it.

What skill are you interested in learning next?

I want to get better at cinematography and special effects editing.

What do you love about your current role?

I love that I get to regularly push myself to my creative limits, and that my work shapes people’s experience of Console in myriad subtle ways.

What would you put on a billboard?

Everything counts. Yes, even that.

How do you go about leaving your work brain at work?

It takes me about 40 mins to walk home from work, and 15 mins to bus. I walk, and process the day as I go. It helps me be way more present when I get home.

What is an unusual habit or absurd thing you love?

I can’t go anywhere without a box of Eclipse spearmint sugarfree mints, and I mean anywhere: including to bed. I don’t necessarily use them, I just have a mental thing where I have to have them there NO MATTER WHAT. When I travel, I’ll bring about 2-3 packets for every week I plan to be away. I once traveled for 9 months straight, and I brought maybe 3 cartons (so 24 boxes) with me, and when I ran out there was no decent substitution. It was pretty much the first thing I bought at the airport when I came home >_>

Here’s a pic of my beloved when I was in a state of withdrawal in Canada

What have you changed your mind about?

Dungeons and dragons. I thought it was for nerds. Turns out, I’m a nerd.

When you feel overwhelmed or unfocused what do you do?

My eyes go completely black and people look at me with fear in their eyes. [This is a joke but it is also kind of true. I have a panic face that unnerves my colleagues.] More seriously, I do a lap of the block, sit and write another list, and then just start somewhere—anywhere.

How do you go about making the world a better place?

I try to make the people around me feel special and loved. Whether it’s Console Academy Awards, excessive quantities of fairy lights, glitter bombs or unexpected treats, I’m always looking for ways to make people feel like there’s a little bit of magic in the world.

What question do you wish people would ask you?

What do you love about marketing? I think everyone either has their own idea about what marketing is, or they have ‘a marketing idea’. But there’s so much art and science and theory and creativity and hard bloody yakka to be good at marketing, and I think people have written it off as neither particularly prestigious nor difficult to do well. It’s not a noble profession, but it’s super interesting!

… Maybe that’s why people don’t ask me about it. They know I’ll talk their ear off.

Who are you?


Accessibility is good for everyone

It’s great to see the recent changes to Automattic’s long-term hiring processes based upon a user research study into how their approach to tech hiring resonates with women and non-binary folks:

In May, Automattic’s engineering hiring team launched a user research study to better understand how our approach to tech hiring resonates with women and non-binary folks who may experience similar gender discrimination in the workplace and are experienced developers.

What changes did we make?

  • Existing work and life commitments mean that it is important to know the details of the hiring process at the outset: we have published a public page that clearly outlines our hiring process so that people have a concrete understanding of the expectations.
  • We removed all the little games from our job posting page. We were trying to test people’s attention to the job posting and filter out unmotivated candidates; it turned out we were also putting people off who we want to apply.
  • We removed all the language that emphasized that hiring is a competitive process – for instance, removing language about application volume.

Whilst I don’t fit into the target audience for this study, if these changes had been implemented earlier I would have personally benefited from them, instead of being disheartened by waiting four years for a response to a job application that never came (I did eventually work up the courage to apply again, and that time I was successful).

This example shows that making your recruitment processes clearer and more accessible makes them better for everyone, not just those who experience discrimination – much like web accessibility benefits everyone, regardless of ability.

Experimenting with our Agile Story Wall

After three and a half years of working by myself at home, it feels truly great to be working in a co-located cross-functional team again. My squad consists of four developers and myself, and I had forgotten how much I love being a paradev. I wear many hats every day in my job, which I love, and one of these hats is managing our iterations of work: our fortnightly sprints.

We follow lightweight agile software development practices, and in that spirit our squad is empowered to experiment with and adjust our techniques to deliver good quality software quickly.

We aim to work in small chunks and get these features into the hands of our customers as quickly as possible by releasing software to production at least daily.

Typically we’ve been using a simple physical agile story wall which is good when you’re working on small and fairly independent chunks of functionality, but we’ve found not all work fits into this style of working.

We recently had an initiative that involved lots of small tasks, but with a lot of inter-dependencies between them – we needed to do data creation, migration, back-end services migration and new front-end development. Our standard agile story wall, which looked like this, was very bad at showing us what was dependent on what:

Agile Story Wall

As a team we decided to experiment with using our agile story wall to map the dependencies we have between work and also to show when we’re able to release the various pieces of functionality. The two week sprints were less relevant as we release as soon as we can. We still have some pieces of independent work (eg. bug fixes and tech debt) which we kept tracking using the standard Kanban style columns under our main board:

Dependency Wall v1

This gave us instant benefits: the dependencies were not only very clear but elastic. We used a whiteboard marker to draw lines between tasks, which meant that as we discovered new or unnecessary dependencies we could quickly remove or re-arrange these lines. But we also quickly realized that in mapping our dependencies out this way we lost one key attribute of our information radiator: we couldn’t see the status of our pieces of work at a glance, which the standard status-based wall gave us. Since we’re using a physical wall we could quickly adapt, so we added some sticky dots to indicate the status of individual cards: blue for in progress, a smaller yellow dot for in review, and green for done, since blue + yellow = green (I’m happy to take the credit for that). We also added red for blocked when we discovered our first blocked piece of work, and a legend in case the colours weren’t immediately obvious:

Dependency Wall v2

Once our 4-week initiative was over, we found the dependency wall so useful that we’ve decided to continue using it for the main focus of each sprint, and to keep using the standard status-based columns for less critical things. We’ll continue to refine and iterate.

Lessons Learned:

  1. Having an old-fashioned physical agile story wall is powerful in a lot of ways, and one of the most powerful things about it is how easy it is to modify and adapt your wall to whatever suits you best at the time: we couldn’t have achieved what we did if we were using a digital board like JIRA or Trello.
  2. Standard agile story walls are good for showing the status of work, but are very weak at showing interdependencies between stories and tasks – the major software solutions suffer from this.
  3. A software team being agile isn’t about just doing sprint planning, standups, a story wall and retrospectives from a textbook. It’s about delivering software rapidly in the best possible way for your team and your context – and continually experimenting and adjusting your ways of working is crucial to this.

Identifying elements having different states in automated e2e tests

I was recently writing an automated end-to-end (e2e) test that was expanding a section then taking an action within the expanded content.

A very simplified example is the details HTML element, which expands to show content:

When do you start on red and stop on green?

When you’re eating a watermelon!
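For reference, here is a minimal sketch of the markup the examples below assume: a details element with the id joke, matching the selectors used in the test code (the exact structure of the real page may differ).

<details id="joke">
	<summary>When do you start on red and stop on green?</summary>
	<p>When you’re eating a watermelon!</p>
</details>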

I initially wrote the test so that it clicked the details element each time the test ran, something like:

// Clicks the #joke element to toggle the expandable section, regardless of its current state
async expandJoke() {
	return await this.driver.findElement( by.css( '#joke' ) ).click();
}

The problem with this straightforward approach is that if the element is already open the click will still happen (collapsing it), and the test will continue, only to fail later when it tries to access the content within the expanded section, which is now collapsed 😾

I wanted to make sure the test was as resilient and consistent as possible, so instead of just assuming the section was already collapsed I then wrote a function like this to expand the element if it wasn’t expanded and then continue the test:

async expandJokeIfNecessary() {
	// The open attribute is only present when the details element is expanded
	const open = await this.driver.findElement( by.css( '#joke' ) ).getAttribute( 'open' );
	if (!open) {
		return await this.expandJoke();
	}
}

The benefit of this is that the test is more resilient, since it caters for whether the UI is open or closed by checking the open attribute and acting accordingly. But I realised the problem with this approach: our user experience expects this section to be closed on page load (the punchline is hidden on page load in our example), so if we introduced a bug where we immediately displayed the punchline, our test would completely miss it since it just skips expanding the section!

Both these approaches use the same selector to refer to a web element which can have two entirely different states: open and not open.

Keeping this in mind the best solution I could come up with was a selector that combines the element and the state, so the test will fail to click the element to expand the section if it can’t be found, including if the element is already expanded. This gives us simple code that is deterministic and fails at the appropriate time:

// The selector only matches a collapsed details element, so this fails if #joke is missing or already expanded
async assertAndExpandJoke() {
	return await this.driver.findElement( by.css( '#joke:not([open])' ) ).click();
}
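As a rough usage sketch of why this fails at the right time – the page object wiring and the punchlineText helper here are hypothetical, while assertAndExpandJoke is the method defined above:

const assert = require( 'assert' );

it( 'reveals the punchline when the joke is expanded', async function() {
	// Fails here if #joke is missing, or if it is unexpectedly already expanded on page load
	await this.jokePage.assertAndExpandJoke();
	// Fails here if expanding the section did not reveal the punchline
	const punchline = await this.jokePage.punchlineText();
	assert.strictEqual( punchline, "When you're eating a watermelon!" );
} );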

What’s your preferred approach here? How do you identify elements in different states?

All demo code is available here: #

My Thoughts on Cypress.io

Run asks…

I’ve just started using Cypress.io. As someone who initially learned about testing conventions years ago through your blog, cypress seems to want to burn all old conventions to the ground in a way that immediately turned me off. After playing with it a bit and watching a talk by one of its founders, I’m a little more convinced now. It’s a great tool. Do you have an opinion on Cypress yet, and do you think old testing conventions are becoming obsolete thanks to much better reporting tools around testing?

There certainly seems to be a lot of hype and enthusiasm around Cypress.io. I recently saw another (rather evangelical) talk about Cypress.io here in Brisbane so I thought it was time to share my thoughts.

What exactly is Cypress.io?

Looking at Cypress.io it is described as a “JavaScript End to End Testing Framework” and “Fast, easy and reliable testing for anything that runs in a browser”. Some other descriptions on Cypress.io include “A complete end-to-end testing experience.” and “Cypress is the new standard in front-end testing that every developer and QA engineer needs.”


What is end-to-end testing?

I believe there are some specific traits that define what automated end-to-end (e2e) tests are:

  • They test a complete user flow through an application from start to finish (end-to-end)
  • They test how a real user would use a fully deployed system
  • They test the happy-path of the most commonly used scenarios, avoiding error validation or edge-cases

End-to-end tests are expensive to maintain and execute, so the widely accepted view is to have as few of these as possible for your application. This means avoiding things like negative and error validation testing during end-to-end tests, as these can be tested much more easily and quickly in isolation with other types of automated tests (unit, component or integration).

Is Cypress.io a framework for writing end-to-end tests?

Despite its strongly worded marketing material, I don’t believe Cypress.io has been designed as an end-to-end (e2e) testing framework. I believe this was confirmed by Brian Mann at Assert(js) in the first half of the “I see your point, but…” presentation:

“You should always strive to test pages in total isolation – everything becomes faster, less coupled, and you won’t lose a single point of confidence that it’s all working together correctly. You don’t need to limit yourself trying to act and replicate everything a user would do.”


I believe what Brian is referring to aren’t end-to-end tests but rather component tests. Brian showed using Cypress.io to test a login page where he wrote 6 isolated test specs to test login validation:

Screenshot: the six isolated login validation specs

What makes the question of whether Cypress.io is an e2e testing framework even more confusing is that during the second half of the same presentation, Gleb Bahmutov, also from Cypress.io, states:

“Brian showed how we think about end-to-end testing. To us end-to-end should do the same things that a human would do to a fully deployed system. Right, that means real browser, real interactions, no shortcuts…”


Also, confusingly, he stated that e2e test tools can do a pretty good job in unit testing.

So what is Cypress.io then?

I now consider Cypress.io to be a strongly opinionated framework suited to writing isolated automated web component tests. I’m not sure why it is marketed as an e2e testing tool, or why it tries to compare itself to something like Selenium, which isn’t a component testing tool.

If you wanted to write isolated automated web component tests then Cypress.io would be worth a look, since it offers many features to help you. However, for true end-to-end purposes I think the limitations outweigh the benefits. The open source WordPress Gutenberg editor project tried Cypress.io for quite a while but ultimately found it too limiting and switched to Puppeteer.

Some things to consider when trying to use Cypress.io for true end-to-end testing

Even though Cypress.io is demonstrated as a way to write isolated web component tests, if you still want to use it to write true end-to-end tests then there are some tradeoffs you need to consider.


Despite the claims that “Cypress works on any front-end framework or website” and “Fast, easy and reliable testing for anything that runs in a browser”, there are quite a few scenarios where you can’t use Cypress.io – for example, if your front-end or website uses iFrames.

iFrames

Cypress.io itself uses iFrames to inject itself into the browser, so iFrames – whether on the same domain or cross-domain – aren’t supported, with an open issue dating back to 2016. At WordPress.com we used iFrames for the WordPress site customizer, so it wouldn’t be possible to write an end-to-end test for WordPress.com using Cypress.io. In my current role I work on a web application which is actually a series of React micro frontends rendered in iFrames within a web container, so we also can’t use Cypress.io for end-to-end testing.

Native browser events like file uploads and downloads

Things like uploading and downloading files that are trivial to do in WebDriver are either difficult or not supported in Cypress.io. Even something as simple as using the tab key isn’t supported.
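For comparison, a file upload in WebDriverJs is just a matter of sending a local file path to a file input – a minimal sketch assuming a standard file input element on the page (the selector and path below are illustrative):

const { By } = require( 'selenium-webdriver' );

// Sketch only: upload a file by sending its local path to a file input
async function uploadFile( driver ) {
	const fileInput = await driver.findElement( By.css( 'input[type="file"]' ) );
	// WebDriver sets the file on the input as if the user had chosen it in the native dialog
	await fileInput.sendKeys( '/path/to/local/file.png' );
}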

Parallelism

Whilst Cypress is often promoted as a free and open source project, there are certain features that are only available when running your tests in “record” mode with the Cypress Dashboard Service, which allows 500 recorded tests per month before it blocks you and you need to pay. Parallel execution is one of these features, so you can’t even run tests in parallel locally without recording your results to the dashboard service.

The biggest issue I see with Cypress.io parallelization is that it is machine based, not process based:

Cypress parallelization diagram

In this example, the CI containers cost triple, even though each CI machine should be more than capable of running multiple Chromium browser sessions.

At WordPress.com we used CircleCI and were able to run up to 12 headless Chrome browsers using WebDriverJs in parallel on each CircleCI container; across 3 containers this allowed 36 e2e tests to run in parallel. To get the same result using Cypress.io would mean paying for 36 CircleCI containers.

Running e2e tests written in other e2e testing tools in parallel machine processes can be quite easy as I’ve explained previously on this blog.

There are times when running via the command line isn’t the same as running via the GUI

I noticed this when writing a demo Cypress spec which would pass when running in the Cypress GUI runner but fail on the command line, which, looking at these comments, doesn’t seem uncommon.

    it( 'ignores alerts when leaving the page', function() {
        // Visit a page that prompts before you leave it, then click a link to navigate away
        cy.visit('http://webdriverjsdemo.github.io/leave');
        cy.get('#homelink').click();
        // The destination page should be shown despite the leave prompt
        cy.contains('WebDriverJs Demo Page').should('be.visible');
    } );

GUI Runner: the spec passes.

Command Line: the same spec fails.

Logging in

One of the key messages of the first video was demonstrating that you can log in without using the UI for subsequent tests, which speeds things up. This is a good idea, but it isn’t unique to Cypress.io: at WordPress.com we re-used a single login cookie across multiple e2e tests using WebDriverJS – the code is here.
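The idea looks roughly like this in WebDriverJs – a minimal sketch rather than the actual Calypso implementation (the cookie name and helper names here are illustrative):

// Log in through the UI once, then re-use the captured session cookie in later tests
let savedAuthCookie = null;

async function captureAuthCookie( driver ) {
	// Assumes the driver has already logged in via the UI; the cookie name is illustrative
	savedAuthCookie = await driver.manage().getCookie( 'wordpress_logged_in' );
}

async function restoreAuthCookie( driver, siteURL ) {
	// Cookies can only be added for the currently loaded domain, so load the site first
	await driver.get( siteURL );
	await driver.manage().addCookie( { name: savedAuthCookie.name, value: savedAuthCookie.value } );
	// Reload so the application picks up the authenticated session without going through the login UI
	await driver.get( siteURL );
}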

Summary

From a distance Cypress looks like a polished tool for automated testing – I just think it’s incorrectly marketed as an end-to-end testing tool when it’s really only good for component testing. It has too many limitations around acting like a real user to use it to create true end-to-end automated tests.