AMA: ranking automated testing attributes, in order of importance

Kevin Emery asks…

Do you think you could rank these in order of importance when building automated tests for a full-featured web application?

  • Coverage – all important features are tested
  • Failure reliability – tests do not pass when the feature being tested is broken
  • Pass reliability – tests do not fail when the feature being tested is working
  • Performance – the tests are able to run and get results back in a timely manner
  • Maintainability – the tests can be easily updated when the website or application changes
  • Readability – the tests are written in a manner that can be quickly understood by a new developer on the team

My response…

Wow, what a question. I’ve decided to tackle this using an insertion sort, as that’s a simple and efficient way to sort a small unsorted list.

So, would I rather have high coverage or failure reliability? Failure reliability, because even if you have the best coverage ever, if the tests don’t tell you things are broken when they’re broken, then that coverage is pointless. This gives us:

  1. Failure reliability – tests do not pass when the feature being tested is broken
  2. Coverage – all important features are tested

Would I rather have coverage or pass reliability? Coverage, because pass reliability (spurious failures) can be manually assessed.

  1. Failure reliability – tests do not pass when the feature being tested is broken
  2. Coverage – all important features are tested
  3. Pass reliability – tests do not fail when the feature being tested is working

Would I rather have performance or pass reliability? Pass reliability, as performance can be addressed with hardware, parallelism, and so on.

  1. Failure reliability – tests do not pass when the feature being tested is broken
  2. Coverage – all important features are tested
  3. Pass reliability – tests do not fail when the feature being tested is working
  4. Performance – the tests are able to run and get results back in a timely manner

Would I rather have maintainability or performance? That depends on the naming standards and conventions used in the application, but generally speaking, maintainability is more important to me than performance.
Would I then rather maintainability or pass reliability? I would rather maintainability.
Would I then rather coverage or maintainability? Coverage.

  1. Failure reliability – tests do not pass when the feature being tested is broken
  2. Coverage – all important features are tested
  3. Maintainability – the tests can be easily updated when the website or application changes
  4. Pass reliability – tests do not fail when the feature being tested is working
  5. Performance – the tests are able to run and get results back in a timely manner

Finally, would I rather readability or performance? Readability.
Would I then rather readability or pass reliability? Readability.
Would I rather readability or maintainability? Maintainability.

Which gives us:

  1. Failure reliability – tests do not pass when the feature being tested is broken
  2. Coverage – all important features are tested
  3. Maintainability – the tests can be easily updated when the website or application changes
  4. Readability – the tests are written in a manner that can be quickly understood by a new developer on the team
  5. Pass reliability – tests do not fail when the feature being tested is working
  6. Performance – the tests are able to run and get results back in a timely manner
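The insertion-sort framing above can be sketched in code. This is a minimal Python illustration, not anything from the original post: the attribute names and the final ordering come from the answers above, while the `prefers` function and everything else are hypothetical scaffolding that encodes each pairwise “would I rather” answer.

```python
ATTRIBUTES = [
    "Coverage",
    "Failure reliability",
    "Pass reliability",
    "Performance",
    "Maintainability",
    "Readability",
]

# The final ranking from the post, used here to answer each
# pairwise "would I rather X or Y?" comparison.
RANKING = [
    "Failure reliability",
    "Coverage",
    "Maintainability",
    "Readability",
    "Pass reliability",
    "Performance",
]

def prefers(a, b):
    """True if attribute a is more important than attribute b."""
    return RANKING.index(a) < RANKING.index(b)

def insertion_sort(items):
    """Sort attributes most-important-first using pairwise comparisons."""
    result = []
    for item in items:
        i = 0
        # Walk past every already-placed attribute that beats this one.
        while i < len(result) and prefers(result[i], item):
            i += 1
        result.insert(i, item)
    return result

print(insertion_sort(ATTRIBUTES))
```

Running this reproduces the same sequence of comparisons walked through above: each new attribute is compared against the ones already placed until it finds its slot.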

I think the one attribute you left out is application testability. Having a testable, full-featured web app leads to reliable, maintainable, readable automated tests, which are typically also more performant. The only thing it doesn’t give you directly is coverage, but since the app is so testable, tests are easy to write, so your coverage should naturally be higher.

Kevin Emery also asks…

Does the relative order of importance of those automation test priorities change, when considering a login based web app versus a public-facing website?

I don’t think so. The only thing I might change is coverage (covering authenticated versus non-authenticated access), but since coverage is my #2 and I still don’t consider it more important than failure reliability, the list would remain the same :)

Finally, Kevin Emery asks…

How much revenue have you earned from your Pride and Paradev e-book?

I have earned just over $900 from 532 happy readers (including free ‘purchases’), and I have donated well over a third of that to the Wikimedia Foundation. I wouldn’t say the book was ‘successful’ considering I average over 1000 readers per day on this site, however, almost all of the eBook content is freely available on this site anyway so it was never intended to be a money making exercise.

Author: Alister Scott

Alister is an Excellence Wrangler for Automattic.


  1. Thanks so much for your reply. This is encouraging to hear, since the framework I’m working on is much stronger in your top three than your bottom three. And you’re absolutely right about testability: on a healthy team, it should be something that automation is accountable for.

    I have worked on teams in the past where there’s an unhealthy culture of “push the feature out” which doesn’t encourage developers to prioritize the needs of automation testers, but I think that when a web app is easy to automate, it is also easy to maintain. Most notably, consistency in class attributes and liberal use of well-worded ids are important not only for an app’s testability but also for its flexibility and maintainability.

