The Rise of the Software Verifier

I found this article rather interesting. I’m still not sure if some of it is satire, so forgive me if I’ve misinterpreted it.

“DevOps has become so sophisticated that there is little fear of bugs. DevOps teams can now deploy in increments, monitor logs for misbehavior, and push a new version with fixes so fast that only a few users are ever affected. Modern software development has squeezed the testers out of testing.

Features are more important than quality when teams are moving fast. Frankly, when a modern tester finds a crashing bug with strange, goofy, or non-sensical input, the development team often just groans and sets the priority of the bug to the level at which it will never actually get fixed. The art of testing and finding obscure bugs just isn’t appreciated anymore. As a result, testers today spend 80% of their time verifying basic software features, and only 20% of their time trying to break the software.”

The author doesn’t say where the 80:20 figures came from, but the testers I’ve worked with over the last five years have spent zero time on manual regression verification and most of their time actually testing the software we were developing. How did we achieve this? Not by splitting our team into testers and verifiers, as the author suggests:

“What to do about all this? The fix is a pretty obvious one. Software Verification is important. Software Testing is important. But, they are very different jobs. We should just call things what they are, and split the field in two. Software testers who spend their day trying to break large pieces of important software, and software verifiers, who spend their time making sure apps behave as expected day-to-day should be recognized for what they are actually doing. The world needs to see the rise of the ‘Software Verifier’.”

We did this by focusing on automating enough tests that we were confident we could release our software frequently without introducing major regressions. This wasn’t 100% test coverage, it was just enough test coverage to avoid human verification. We obviously spent effort maintaining these tests, but that was a whole-team effort, and it freed up the rest of our time for testing the software and looking for real-life bugs using human techniques.
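To make that concrete, here’s a minimal sketch of the kind of automated end-to-end check that let us skip manual verification, using Mocha and WebDriverJS from our tools list. The URL and selector are hypothetical placeholders, not our actual suite:

```javascript
// A minimal smoke test of a critical flow. Mocha + selenium-webdriver
// (WebDriverJS) are from our tools list; the URL and CSS selector below
// are hypothetical placeholders.
const { Builder, By, until } = require('selenium-webdriver');
const assert = require('assert');

describe('Editor smoke test', function () {
  this.timeout(30000); // browser startup and page loads are slow
  let driver;

  before(async function () {
    driver = await new Builder().forBrowser('chrome').build();
  });

  after(async function () {
    await driver.quit();
  });

  it('loads the editor and shows the publish button', async function () {
    await driver.get('https://example.wordpress.com/post');
    const publishButton = await driver.wait(
      until.elementLocated(By.css('.editor-publish-button')),
      10000
    );
    assert.ok(await publishButton.isDisplayed());
  });
});
```

A handful of checks like this over the most important flows is what “just enough coverage” meant for us.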

Another thing I noted about the article was the use of the graph to show decreasing interest in software testing:

But even interest in Software Testing is fading fast…

[Google Trends chart: search interest in “software testing”, 2004–2018]

This also applies to software in general, perhaps even more dramatically:

[Google Trends chart: search interest in “software”, 2004–2018]

I don’t think there’s decreasing interest in software testing, or in software generally; rather, these have become more commonplace and more commoditised, so people need to search for them less.

Creating a skills-matrix for t-shaped testers

I believe the expression “jack of all trades, master of none” is misleading, as I’ve mentioned previously. Being good at two or more complementary skills is better than being excellent at just one, in my opinion.

But what about being excellent at one skill, and still being good at two or more? Why can’t we be both?

Jason Yip describes a T-shaped person and the benefits that having T-shaped people on teams brings:

A T-shaped person is capable in many things and expert in, at least, one.
As opposed to an expert in one thing (I-shaped) or a “jack of all trades, master of none” generalist, a “t-shaped person” is an expert in at least one thing but also somewhat capable in many other things. An alternate phrase for “t-shaped” is “generalizing specialist”.

Image by Jason Yip

Ideally we’d like to have a team of t-shaped testers in Flow Patrol at Automattic. But how do we get to this end goal?

I recently embarked on an exercise to measure and benchmark our skills and do just this with our team. Here are the steps we took.

Step One – Devise Desired Team Skills

The first thing we did was come up with a list of skills that we have in the team and would like to have in the team. These can be ‘hard’ skills, like specific programming languages, or ‘soft’ skills, like triaging bugs. In a standard co-located team this would be as easy as running a brainstorming session and using affinity grouping to discover these skills. In our distributed environment, I wrote a blog post to my team’s channel and had individual members comment with a list of skills they thought appropriate; I then did the grouping and came up with a draft list of skills and groups.

Step Two – Self-assess against a team skills matrix

Once I had a final list of skills and groups (see below for the full list), I put together a matrix (in a Google Spreadsheet) that listed team members on the x-axis and the skills on the y-axis, and came up with a skill-level rating. Our internal systems use a three-level scale (Newbie, Comfortable, Expert), which we didn’t think was broad enough, so we decided upon five levels (queried as data in the sketch after the list):

1. Limited
2. Basic
3. Good
4. Strong
5. Expert
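
Once the ratings exist as data rather than spreadsheet cells, simple queries fall out of them. Here’s a small sketch along those lines; the names and ratings are made up, and it flags skills where nobody on the team rates Strong or better:

```javascript
// Hypothetical extract of the matrix: skill -> { member: rating },
// on the five-level scale above (1 = Limited ... 5 = Expert).
const matrix = {
  'JavaScript': { alice: 5, bob: 2, carol: 3 },
  'PHP': { alice: 2, bob: 4, carol: 1 },
  'Accessibility Testing': { alice: 1, bob: 2, carol: 2 },
};

// A skill is a gap if nobody on the team rates Strong (4) or better.
const gaps = Object.entries(matrix)
  .filter(([, ratings]) => Math.max(...Object.values(ratings)) < 4)
  .map(([skill]) => skill);

console.log(gaps); // [ 'Accessibility Testing' ]
```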

 

Team Skills Matrix

I hadn’t seen Jason Yip’s visual representation at that point in time, otherwise I might have used something like it, as it has five similar levels:

Image by Jason Yip

Step Three – Publish results and cross-skill

Once the self-assessments were done, we could publish the data within our organisation and use the benchmark to cross-skill people in the team. In a co-located environment this could involve pair programming; in a distributed one, it could involve mentoring and reviewing other team members’ work.

Have you done a skills matrix for your team? How did you do it? What did you discover?


Full List of Skills and Skill Groups for Flow Patrol at Automattic

Automattic Product Knowledge
- WordPress Core
- WordPress.com Simple Sites
- WordPress.com Atomic Sites
- Jetpack
- WooCommerce
- Simplenote
- Mobile Apps

Human Software Testing
- Flow Mapping
- Bug Triage & Prioritization
- Exploratory Testing (pre-release)
- Dogfooding
- Cross-browser/Cross-device Testing
- Facilitating Beta/Community Testing
- Facilitating User Testing
- Usability Testing
- Accessibility Testing

Automated Testing
- Automated End-to-end Browser Testing
- Automated API/Integration Testing
- Automated Unit Testing
- Automated Visual Regression Testing
- Android Automated Testing
- iOS Automated Testing

Programming Languages
- JavaScript
- PHP
- Shell Scripting
- Objective-C
- Swift
- Android/Kotlin

Testing Tools/Frameworks
- Mocha
- WebDriverJS
- Git/GitHub
- CircleCI
- TravisCI
- TeamCity (CI)
- Mailosaur
- Applitools
- VIP Go
- Docker

Other
- i18n Testing
- Performance Testing
- Security Testing
- User advocacy – empathy and compassion
- Mentoring/onboarding
- Project Management
- Product Management

Product Development
- Calypso
- Jetpack
- WP.com API PHP
- WooCommerce
- iOS App
- Android App

 

The blurry line between test and development

One of the themes I talked about during my presentation in Wellington was the blurry line between test and development in a distributed environment like Automattic.

I was recently having trouble with a complex method in our WordPress.com e2e test page objects, so I used my skills as a developer and wrote a change to our user interface that adds a data attribute to the HTML element.
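As an illustration of the idea only (these are hypothetical selectors, not the actual WordPress.com change, which is behind the link below), a dedicated data attribute gives the page object a single stable hook in place of a chain of structural CSS classes:

```javascript
const { By } = require('selenium-webdriver');

// Before: fragile structural CSS that breaks whenever styling or
// layout changes (hypothetical example).
const publishButton = By.css('.editor-header .components-button.is-primary');

// After: the markup carries e.g. <button data-e2e-button="publish">, so
// the page object needs just one attribute selector.
const publishButtonStable = By.css('[data-e2e-button="publish"]');
```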

This meant our page object method immediately went from this:

Continue reading “The blurry line between test and development”

Test for Real Life

“Most of us are anxious pretty much all the time – but frequently imagine that other people aren’t. It’s time to admit the truth. Anxiety is just a basic fact about being human.”

~ Alain de Botton

We are all human; we are all worried and anxious pretty much all the time. People just don’t tell you that they are. We wear masks, and we hide it well.

But why do we test like we’re not anxious or worried? Why don’t we test for real life?

Continue reading “Test for Real Life”

Make sure your end-to-end tests align with your company’s strategy

I recently embarked on writing some new automated end-to-end tests for an existing product that has been around for some time but has never had e2e tests written for it.

Continue reading “Make sure your end-to-end tests align with your company’s strategy”

Should you close old bugs?

Do you actively close bugs because they reach a certain age?

One of the (many) things I love about Automattic is the attention given to bug triage. Bug triage is the habit of continually grooming our bug lists to ensure they stay relevant, up to date, and reflective of the current state of our products. A benefit of this is that an up-to-date, prioritized bug list translates directly into a backlog of maintenance work items for a product development team.
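If your bugs live on GitHub, surfacing triage candidates can even be scripted. Here’s a sketch using the GitHub REST API via @octokit/rest; the owner, repo, and label are hypothetical placeholders:

```javascript
// List open bugs that haven't been touched in a given number of days.
// Owner, repo, and the 'bug' label are hypothetical placeholders.
const { Octokit } = require('@octokit/rest');

async function listStaleBugs({ owner, repo, days = 365 }) {
  const octokit = new Octokit();
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;

  const issues = await octokit.paginate(octokit.rest.issues.listForRepo, {
    owner,
    repo,
    state: 'open',
    labels: 'bug',
    sort: 'updated',
    direction: 'asc', // least recently updated first
    per_page: 100,
  });

  return issues
    .filter((issue) => !issue.pull_request) // the issues API also returns PRs
    .filter((issue) => new Date(issue.updated_at).getTime() < cutoff);
}

listStaleBugs({ owner: 'example-org', repo: 'example-product' }).then((stale) =>
  stale.forEach((i) => console.log(`#${i.number} ${i.title}`))
);
```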

Continue reading “Should you close old bugs?”