The blurry line between test and development

One of the themes I talked about during my presentation in Wellington was the blurry line between test and development in a distributed environment like Automattic.

I was recently having trouble with a complex method in our WordPress.com e2e test page objects, so I used my skills as a developer and wrote a change to our user interface that adds a data attribute to the relevant HTML element.

This meant our page object method immediately became much simpler: instead of traversing a brittle chain of selectors, it could locate the element directly by its data attribute.
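The actual method isn’t reproduced here, but a hypothetical sketch (class names, selectors and the data attribute are all invented for illustration, not the real WordPress.com code) shows the shape of the change: a brittle structural selector collapses into a single stable data-attribute lookup.

```ruby
# Hypothetical sketch -- class names, selectors and the data attribute
# are invented for illustration; they are not the actual WordPress.com code.

# Before the UI change: the page object walks the surrounding markup,
# so any restructuring of the page breaks the selector.
class EditorPage
  def publish_button_selector
    'div.editor-ground-control > div:nth-of-type(2) > button > span'
  end
end

# After the UI change: the element carries its own test hook,
# so the lookup no longer depends on the page structure around it.
class EditorPageWithDataAttribute
  def publish_button_selector
    'button[data-e2e-button="publish"]'
  end
end
```

The point of the exercise: the second selector survives markup refactoring, because the test hook travels with the element itself.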

Microservices: a real world story

Everywhere I turn I hear people talking about microservice architectures: it definitely feels like the latest over-hyped fad in software development. According to Martin Fowler:

“…the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”


But what does this mean for software testing? And how does it work in the real world?

Well, my small team is responsible for maintaining and supporting a system that was developed from scratch using a microservices architecture. I should point out that I wasn’t involved in the initial development of the system, but I am responsible for maintaining and expanding it, and for keeping it running.

The system consists of 30-40 REST microservices, each with its own codebase, git repository, database schema and deployment mechanism. A single-page web application (built in AngularJS) provides a user interface to these microservices.

Whilst there are already many microservices evangelists on board the monolith hate-train, my personal experience with this architectural style has been less than pleasant, for a number of reasons:

  • There is a much, much greater overhead (efficiency tax) involved in automating the integration, versioning and dependency management of so many moving parts.
  • Since each microservice has its own codebase, each microservice needs appropriate infrastructure to automatically build, version, test, deploy, run and monitor it.
  • Whilst it’s easy to write tests that exercise a particular microservice, these individual tests don’t find problems between the services, or from a user-experience point of view, particularly as they often use fake service endpoints.
  • Microservices are meant to be fault tolerant, as they are essentially distributed systems that are naturally erratic. But since they are micro, there are lots of them, which means the overhead of testing the various combinations of volatility across microservices is prohibitively high (it grows factorially with the number of services).
  • Monolithic applications, especially those written in strongly typed, static programming languages, generally have a higher level of application/database integrity at compile time. Since microservices are independent units, this integrity can’t be verified until run time. This means more testing in later development/test environments, which I am not keen on.
  • Since a lot of problems can’t be found in testing, microservices put a huge amount of emphasis on monitoring over testing. I’d personally much rather have confidence from testing something than rely on constant monitoring and fixing in production. Firefighting in production by development teams isn’t sustainable, and it reduces the efficiency of future enhancements.
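The fake-endpoint problem above can be shown with a minimal, self-contained sketch (the service names and payloads are invented for illustration): a consumer’s test passes against its own fake, while the real service’s payload breaks the same code, and only at run time.

```ruby
# Minimal sketch -- service names and payloads are invented for illustration.
# The consumer's fake assumes the orders service returns an 'id' field...
FAKE_ORDERS_RESPONSE = { 'id' => 42 }
# ...but the real service's contract has drifted to 'order_id'.
REAL_ORDERS_RESPONSE = { 'order_id' => 42 }

# A consumer (say, a shipping microservice) that depends on that contract.
def shipping_label(order)
  # Hash#fetch raises KeyError when the expected field is missing.
  "SHIP-#{order.fetch('id')}"
end

# The per-service test against the fake passes:
passes_against_fake = shipping_label(FAKE_ORDERS_RESPONSE) == 'SHIP-42'

# The same code against the real payload only fails at run time --
# the kind of integrity a compiled monolith would have checked earlier.
fails_against_real = begin
  shipping_label(REAL_ORDERS_RESPONSE)
  false
rescue KeyError
  true
end
```

Both services’ own test suites can be green while the integrated system is broken, which is exactly why these problems surface in later environments, or in production.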

I can understand some of the reasoning behind breaking applications down into smaller, more manageable chunks, but I personally believe that the microservices movement, like any evangelist-driven approach, has taken this way too far.

I’ll finish with a real-world metric that shows just how much overhead is involved in maintaining our microservices-architected system.

A change that would typically take us two hours to patch, test and deploy on our ‘monolithic’ system (written in a strongly typed, static programming language) typically takes two days to patch, test and deploy on our microservices-built system. And even then I am much less confident that the change will actually work when it reaches production.

Don’t believe the hype.

Addendum: Martin Fowler seems to have had a change of heart in his recently published ‘Microservice Premium’ article about when to use microservices:

“…my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.”


The future of testers

Bret Pettichord wrote a thought provoking blog post today that raised some interesting questions about the future of testers in automated testing:

Will there be a role for testers in the future of automated testing? Will this role be defined by developers?

I agree with a lot that Bret has to say. With the rise of new, cheaper, more open and more integrated automated testing tools, I have noticed that developers are becoming increasingly interested in, and responsible for, automated tests. Whilst traditionally automated testing tools, especially ones that test GUIs, were the responsibility of a testing team during a testing phase, these new tools can easily be integrated into software development activities.

The benefits are quite obvious: developers start making their software more testable; automated tests that are run more frequently are more likely to be kept ‘alive’; and defects are found earlier, increasing both quality AND velocity.

But as Bret asks, what happens to the testers in all this? Those testers who once wrote and ran those automated tests.

Like Bret, I think that testers will still have a significant role in the future of automated testing. This is because I believe that a developer and a tester have two fundamentally different perspectives. I have found that developers often have a constructive perspective, focused on building something and making sure that it works, whereas a tester has an innate deconstructive perspective, trying to break that something or prove that it won’t work. This difference shows in the fact that testers often design more comprehensive negative tests than developers.

But I don’t believe that having a different perspective will save a tester: it’s not that easy. I think, to survive, new testers need to adapt. And to adapt they need to be two things:

  • Technologically Savvy: Testers will need to be increasingly technical to keep up with developers who are now also doing automated testing. For example, a developer may write a neat XML API so that a tester can write tests more efficiently and effectively, but the tester will need the technical skills to actually use it.
  • Business Focused: With both developers and testers running automated tests, a key point of differentiation is business focus. Developers often write tests for the specific functionality they have built, whereas a tester is usually required to test multiple functions together. When testing these multiple functions, the tester needs to be able to demonstrate to the business what is being tested. By focusing on business scenarios (or user stories) and using business terms, it is easier to link these tests back to the business and demonstrate value.
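A minimal sketch of that second point (the Cart stub and the scenario names are invented stand-ins, not any particular tool’s API): naming each automated check after the business scenario it covers means a test run doubles as a readable statement of which user stories are working.

```ruby
# Hypothetical sketch -- the Cart stub and scenarios are invented stand-ins
# for a real application and its user stories.
class Cart
  def initialize
    @prices = []
  end

  def add(price)
    @prices << price
  end

  def total
    @prices.sum
  end
end

# Checks keyed by business-language scenario names rather than
# implementation details like element IDs or method names.
SCENARIOS = {
  'a customer adding two items is charged the sum of both' => lambda {
    cart = Cart.new
    cart.add(10)
    cart.add(5)
    cart.total == 15
  },
  'an empty cart charges the customer nothing' => lambda {
    Cart.new.total.zero?
  },
}

# Each result links a pass/fail straight back to a user story.
def scenario_results
  SCENARIOS.map { |story, check| [story, check.call ? 'PASS' : 'FAIL'] }
end
```

The same idea underlies tester-focused frameworks such as Rasta, which move the scenario definitions into artefacts the business can read directly.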

It’s great that so much effort has been put into deliberately making Watir user friendly. It has meant that it is easy for both developers AND testers to write tests.

One area of difference is deciding how to implement a Watir framework, because I believe that some frameworks are developer focused (RSpec, Test::Unit) whereas others are more tester focused (Confluence & Rasta).

This is why I am looking forward to seeing future WatirCraft Watir offerings in the framework space. Because of the difference in perspective I mentioned earlier, I believe it will be challenging to design a framework that suits these two different user groups.