AMA: testing and technical debt

Sean asks…

There’s a web team composed of people who likely grew up being the smartest people in the room. Over time, their code base is reviewed by other folks who rely on them as the “oracles” who know all. Their code is right.

Any tips on making the business case for testing? Have you ever quantified the technical debt where you’ve worked? Any tips on when to start testing a project (e.g.: is there a rule of thumb for a size to break even)?

My response…

In my experience, there are two questions a web team should be continually answering about the features they are building: are we building the right thing? And are we building the thing right? It sounds like it isn’t really a question of whether they are building the thing right, but they may not be building the right thing.

There’s zero point building something right if it’s not the right thing, and this is where I have seen a tester provide the most value: bringing a different mindset and asking questions early in the development process, rather than just testing that something was built right at the end.

As Rands in Repose elegantly put it:

It’s not that QA can discover what is wrong, they intimately understand what is right and they unfailingly strive to push the product in that direction.

As for whether I’ve quantified the technical debt of a product: firstly, I really like how Martin Fowler categorises technical debt into quadrants along two dimensions: reckless/prudent and deliberate/inadvertent. Any form of reckless technical debt is fairly obviously the worst kind, and deliberate technical debt is better than inadvertent.
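If you wanted to track which quadrant each piece of debt falls into, a minimal sketch might look like this (a Python illustration of mine, not Fowler’s; the example phrases paraphrase the ones from his article):

```python
# A minimal sketch of Fowler's quadrants, assuming Python.
# The class and field names are my own illustration.
from dataclasses import dataclass
from enum import Enum

class Attitude(Enum):
    RECKLESS = "reckless"
    PRUDENT = "prudent"

class Awareness(Enum):
    DELIBERATE = "deliberate"
    INADVERTENT = "inadvertent"

@dataclass
class DebtItem:
    description: str
    attitude: Attitude
    awareness: Awareness

    @property
    def quadrant(self) -> str:
        return f"{self.attitude.value}/{self.awareness.value}"

# One example per quadrant, paraphrasing Fowler's article:
examples = [
    DebtItem("We don't have time for design", Attitude.RECKLESS, Awareness.DELIBERATE),
    DebtItem("We must ship now and deal with the consequences", Attitude.PRUDENT, Awareness.DELIBERATE),
    DebtItem("What's layering?", Attitude.RECKLESS, Awareness.INADVERTENT),
    DebtItem("Now we know how we should have done it", Attitude.PRUDENT, Awareness.INADVERTENT),
]

for item in examples:
    print(f"{item.quadrant}: {item.description}")
```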

On one project I worked on, we had a physical technical debt board on the wall which we used to list and reduce technical debt. The board was a circle divided into sectors, based either on architecture (database, services, UI, etc.) or product function (authentication, admin, ordering, etc.).

(Image: tech debt board)

As soon as someone noticed some technical debt (e.g. lack of test coverage), they would immediately add it as a sticky note on the outer ring of the circle. Every few days, immediately following our daily standup, we’d have a technical debt talk where we’d move items around the board, typically towards the centre as they became more of an issue. Items could also become non-issues, in which case we’d tear them up.

When a technical debt issue made its way to the red-hot centre (the core), we would add fixing that debt to a user story in the upcoming backlog, so the debt was paid down as part of a user story in that area of our system. This avoided having non-functional user stories that weren’t delivering business value.
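To make the board’s mechanics concrete, here’s a minimal sketch of how you might model it, assuming Python; the names (DebtNote, escalate, the three ring labels) are illustrative, not something we actually built:

```python
# A minimal sketch of the board's mechanics, assuming Python.
# DebtNote, escalate and the ring labels are illustrative names only.
from dataclasses import dataclass

RINGS = ["outer", "middle", "core"]  # core = urgent enough to schedule

@dataclass
class DebtNote:
    description: str
    sector: str    # e.g. "database" (architecture) or "ordering" (function)
    ring: int = 0  # index into RINGS; new notes start on the outer ring

    def escalate(self) -> None:
        """Move the note one ring closer to the centre of the board."""
        self.ring = min(self.ring + 1, len(RINGS) - 1)

    @property
    def due_for_backlog(self) -> bool:
        return RINGS[self.ring] == "core"

note = DebtNote("Lack of test coverage on ordering flow", sector="ordering")

# After a couple of post-standup talks the note drifts towards the centre...
note.escalate()
note.escalate()

if note.due_for_backlog:
    print(f"Attach to the next '{note.sector}' user story: {note.description}")
```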

Doing this activity meant we were constantly ensuring our technical debt was prudent and deliberate.

We never quantified technical debt by measuring something about the code. If you’ve ever researched how to measure technical debt, there are a lot of suggestions: measure duplicated code, measure unit test coverage, measure cyclomatic complexity (the number of unique paths through application code), etc. But most teams I know of rely on a binary gut instinct: is this a good or bad codebase? Can we release new features quickly without introducing showstopper bugs?
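For example, if you did want to measure cyclomatic complexity on a Python codebase, a minimal sketch using the radon library could look like this (the src directory and the threshold of 10 are assumptions of mine):

```python
# A minimal sketch, assuming a Python codebase and the radon library
# (pip install radon). The "src" path and threshold of 10 are assumptions.
from pathlib import Path
from radon.complexity import cc_visit

THRESHOLD = 10  # flag functions with more than 10 unique paths

for path in Path("src").rglob("*.py"):
    for block in cc_visit(path.read_text()):
        if block.complexity > THRESHOLD:
            print(f"{path}:{block.lineno} {block.name} "
                  f"(complexity {block.complexity})")
```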

We maintained a list of known technical debt that we’d constantly evaluate to make sure we were being deliberate. We could have counted how many issues we had on that board, but we were more interested in their content and whether they were something we would deliberately fix (or ignore).

Finally, I am not sure what your question about when to start testing and the size aspect means. I believe doing testing, or involving a tester, as early as possible, even if it’s just their input on designs, wireframes and prototypes, is the best thing you can possibly do. As I mentioned, you avoid building the wrong thing, which is far worse, IMO, than building the right thing with some imperfections or technical debt.

Author: Alister Scott

Alister is an Excellence Wrangler for Automattic.

1 thought on “AMA: testing and technical debt”

  1. One way to quantify technical debt is via static analysis tools like this one for PHP: the results show what needs the most attention, with each code file receiving a score for criteria such as complexity, probable bugs, maintainability, and accessibility for new developers.

    You can run it once to get a health check for the project, or continuously monitor it with a status dashboard that shows graphs and stats, plus a table of the worst offenders so you can tackle those first.

    Here’s a screenshot as an example (from PHP code running on a project).

