I found this article rather interesting. I’m still not sure whether some of it is satire, so forgive me if I’ve misinterpreted it.
“DevOps has become so sophisticated that there is little fear of bugs. DevOps teams can now deploy in increments, monitor logs for misbehavior, and push a new version with fixes so fast that only a few users are ever affected. Modern software development has squeezed the testers out of testing.
Features are more important than quality when teams are moving fast. Frankly, when a modern tester finds a crashing bug with strange, goofy, or non-sensical input, the development team often just groans and sets the priority of the bug to the level at which it will never actually get fixed. The art of testing and finding obscure bugs just isn’t appreciated anymore. As a result, testers today spend 80% of their time verifying basic software features, and only 20% of their time trying to break the software.”
The author doesn’t say where the 80:20 figures come from, but the testers I have worked with for the last five years have spent zero time on manual regression verification and most of their time actually testing the software we were developing. How did we achieve this? Not by splitting our team into testers and verifiers, as the author suggests:
“What to do about all this? The fix is a pretty obvious one. Software Verification is important. Software Testing is important. But, they are very different jobs. We should just call things what they are, and split the field in two. Software testers who spend their day trying to break large pieces of important software, and software verifiers, who spend their time making sure apps behave as expected day-to-day should be recognized for what they are actually doing. The world needs to see the rise of the ‘Software Verifier’.”
We did this by focussing on automating enough tests that we could confidently release our software frequently without introducing major regressions. This wasn’t 100% test coverage; it was just enough coverage to avoid human verification. We obviously spent effort maintaining these tests, but that was a whole-team effort, and it freed us up to spend the rest of our time testing the software and looking for real-life bugs using human techniques.
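To make that concrete, here’s a minimal sketch of the kind of automated end-to-end check that can stand in for manual verification on every release. It’s an illustration only: the URL, the selector, and the tooling (Node.js with the selenium-webdriver and mocha packages) are placeholders, not our actual suite.

```javascript
// Illustrative smoke test: the URL and selector below are placeholders.
// Assumes Node.js with the selenium-webdriver and mocha packages installed.
const assert = require('assert');
const { Builder, By, until } = require('selenium-webdriver');

describe('smoke: home page', function () {
  this.timeout(30000);
  let driver;

  before(async function () {
    // Launch a real browser; swap 'chrome' for any installed browser.
    driver = await new Builder().forBrowser('chrome').build();
  });

  after(async function () {
    await driver.quit();
  });

  it('loads and shows the main navigation', async function () {
    await driver.get('https://example.com');
    // Checks like this run on every deploy and catch the big regressions,
    // freeing humans to spend their time on exploratory testing instead.
    const nav = await driver.wait(until.elementLocated(By.css('nav')), 10000);
    assert.ok(await nav.isDisplayed());
  });
});
```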
Another thing I noted about the article was the use of the graph to show decreasing interest in software testing:
“But even their interest in Software Testing is fading fast…”
I believe the expression “jack of all trades, master of none” is misleading, as I’ve mentioned previously. Being good at two or more complementary skills is better than being excellent at just one, in my opinion.
But what about being excellent at one skill, and still being good at two or more? Why can’t we be both?
Jason Yip describes a T-shaped person and the benefits that having T-shaped people on teams brings:
A T-shaped person is capable in many things and expert in, at least, one.
As opposed to an expert in one thing (I-shaped) or a “jack of all trades, master of none” generalist, a “t-shaped person” is an expert in at least one thing but also somewhat capable in many other things. An alternate phrase for “t-shaped” is “generalizing specialist”.
Ideally we’d like to have a team of T-shaped testers in Flow Patrol at Automattic. But how do we get to this end goal?
I recently embarked on an exercise to do just this: measure and benchmark the skills across our team. Here are the steps we took.
Step One – Devise Desired Team Skills
The first thing we did was come up with a list of skills that we have in the team, and skills we would like to have. These can be ‘hard’ skills, like specific programming languages, or ‘soft’ skills, like triaging bugs. In a standard co-located team this would be as easy as conducting a brainstorming session and using affinity grouping to discover these skills. In our distributed environment, I wrote a blog post to my team’s channel and had individual members comment with the skills they thought appropriate; I then did the grouping and came up with a draft list of skills and groups.
Step Two – Self-Assess Against a Team Skills Matrix
Once I had a final list of skills and groups (see below for the full list), I put together a matrix in a Google Spreadsheet that listed team members on the x-axis and skills on the y-axis, with a skill-level rating for each cell. Our internal systems use a three-level scale (Newbie, Comfortable, Expert), which we didn’t think was broad enough, so we decided upon five levels:
I hadn’t seen Jason Yip’s visual representation at that point in time, otherwise I might have used something like it, which has five similar levels:
Step Three – Publish Results and Cross-Skill
Once we had the self-assessments done, we could publish the data within our organisation and use the benchmark to cross-skill people in the team. In a co-located environment this could involve pair programming; in a distributed one, it could involve mentoring and reviewing other team members’ work.
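To make the matrix concrete, here’s a minimal sketch of how the self-assessment data could be represented and mined for cross-skilling targets. The people, skills, and ratings below are invented for illustration; our real matrix lives in a Google Spreadsheet.

```javascript
// Illustrative skills matrix: person -> skill -> self-assessed level.
// Names, skills, and ratings are made up; levels run 1 (novice) to 5 (expert).
const matrix = {
  Alice: { JavaScript: 5, PHP: 2, 'Bug Triage': 4 },
  Bob:   { JavaScript: 2, PHP: 1, 'Bug Triage': 3 },
  Carol: { JavaScript: 3, PHP: 2, 'Bug Triage': 2 },
};

// Flag skills where nobody rates themselves 4 or above: these are the
// team-wide gaps to target with mentoring, reviews, or pairing.
const skills = Object.keys(Object.values(matrix)[0]);
const gaps = skills.filter(
  (skill) => !Object.values(matrix).some((person) => person[skill] >= 4)
);

console.log('Cross-skilling targets:', gaps); // => [ 'PHP' ]
```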
Have you done a skills matrix for your team? How did you do it? What did you discover?
Full List of Skills and Skill Groups for Flow Patrol at Automattic
One of the themes I talked about during my presentation in Wellington was the blurry line between test and development in a distributed environment like Automattic.
I was recently having trouble with a complex method in our WordPress.com e2e test page objects, so I used my skills as a developer and wrote a change to our user interface that adds a data attribute to the HTML element.
This meant our page object method immediately went from this:
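A hypothetical before-and-after sketch in WebDriverJS illustrates the shape of the change; the selectors and method names here are invented for illustration, not our actual page object:

```javascript
const { By } = require('selenium-webdriver');

// Before: locating the element by walking the surrounding markup, which
// breaks whenever the page structure or CSS classes change.
async function openSettingsBefore(driver) {
  const link = await driver.findElement(
    By.xpath("//div[contains(@class, 'sidebar')]//ul/li[3]/a[span[text()='Settings']]")
  );
  await link.click();
}

// After: locating the element by a dedicated data attribute added to the
// HTML, which is stable and self-documenting.
async function openSettingsAfter(driver) {
  const link = await driver.findElement(By.css("a[data-e2e='settings-link']"));
  await link.click();
}
```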
Do you actively close bugs because they reach a certain age?
One of the (many) things I love about Automattic is the attention that is given to bug triage. Bug triage is the habit of continually grooming our bug lists to ensure they remain relevant, up to date, and reflective of the current state of our products. A benefit of this is that an up-to-date, prioritized bug list translates directly into a backlog of maintenance work items for a product development team.
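As one concrete triage aid, here’s a minimal sketch that surfaces open bugs untouched for 90 days so a human can re-prioritize them, rather than a bot closing them blindly. The repository name and the ‘bug’ label are placeholders; it assumes Node 18+ (built-in fetch) and the public GitHub REST API.

```javascript
// List open 'bug'-labelled issues not updated in the last 90 days so a
// human can triage them. Owner/repo below are placeholders.
const DAYS = 90;
const cutoff = new Date(Date.now() - DAYS * 24 * 60 * 60 * 1000);

async function staleBugs(owner, repo) {
  const url =
    `https://api.github.com/repos/${owner}/${repo}/issues` +
    `?state=open&labels=bug&sort=updated&direction=asc&per_page=100`;
  const issues = await (await fetch(url)).json();
  return issues.filter((issue) => new Date(issue.updated_at) < cutoff);
}

staleBugs('example-org', 'example-repo').then((stale) => {
  for (const issue of stale) {
    console.log(`#${issue.number} ${issue.title} (last updated ${issue.updated_at})`);
  }
});
```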