I recently received the following email from a WatirMelon reader, Kiran, and was about to reply with my answer when I instead asked whether I could reply via a blog post, as I think it’s an interesting topic.
“I see that most open source projects do not have a dedicated manual QA team to perform any kind of testing. But every organization has dedicated manual QA teams to validate their products before release, yet they still fail to meet quality standards.
How do these open source projects manage to deliver stuff with great quality without manual testers? (One reason I can think of is that developers on these projects have greater technical skills and commitment than developers in organizations.)
A few things I know about open source projects are that they all have unit tests and some automated tests which they run regularly. But I still can’t imagine delivering something without manual testing… Is it possible?”
I’ll start by stating that not all organizations have dedicated manual QA teams to validate their products before release. I used the example of Facebook in my book, and I presently work in an organization where there isn’t a dedicated testing team. But generally speaking I agree that most medium to large organizations have testers of some form, whereas most open source projects do not.
I think the quality of open source comes down to two practices that are essential to high-quality software: peer reviews and automated tests.
Open source projects by their very nature need to be open to contributions from many different people. This brings great benefits, as you get diversity of input and skills and can draw on a global pool of talent, but with it comes the need for a safety net to ensure the quality of the software is maintained.
Open source projects typically work on a fork/pull request model, where all work is done in small increments in ‘forks’ and offered as pull requests to be merged into the main repository. Distributed version control systems make this easy and facilitate code review of each pull request before it is merged.
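The fork/pull request flow can be sketched with plain git commands. The repository and branch names below are purely illustrative, and a local directory stands in for the hosted upstream repository:

```shell
# A sketch of the fork / pull-request flow, using a local directory as a
# stand-in for the upstream repository (all names here are illustrative).
git init -q upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "initial commit"

git clone -q "$PWD/upstream" fork             # the contributor's fork
git -C fork checkout -q -b small-fix          # work happens in a short-lived branch
git -C fork -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "one small, reviewable change"
git -C fork push -q origin small-fix          # push the branch with the change
```

On a hosting service like GitHub, the pushed branch would then be offered as a pull request for the maintainers to review.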
Whilst peer reviews are valuable, they aren’t a replacement for testing, which is why open source projects need to be self-tested via automated tests. Modern continuous integration systems like CircleCI and Travis CI automatically test every new pull request to an open source project before it is even considered for merging.
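As an illustration, a minimal Travis CI configuration for a Ruby project might look like the following; the Ruby version and rake task here are assumptions for the sketch, not taken from any particular project:

```yaml
# Hypothetical .travis.yml: Travis CI runs the script below for every
# push and every pull request, and reports the result on the pull request.
language: ruby
rvm:
  - 2.1
script:
  - bundle exec rake spec   # the project's automated test suite (illustrative task)
```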
If you look at most open source project pages you will likely see a prominent ‘build status’ badge indicating the real-time quality of the software.
Peer reviews and automated tests cover contributions and regression testing, but how does an open source project test new features?
Most open source projects test new changes in the wild through dogfooding (open source projects often exist to fill a need, and open source developers are often consumers of their own products), and through pre-release testing such as alpha and beta distributions. For example, the Chromium project has multiple release channels (canary, dev, beta, stable) through which anyone can test upcoming Chromium/Chrome features before they are released to the general public (this isn’t limited to open source software: Apple does the same with OS X and iOS releases).
By combining peer reviews, extensive automated regression testing, dogfooding and publicly available pre-release candidates, I believe open source projects can release very high quality software without having dedicated testers.
If an organization would like to move away from a dedicated, separate test team to smaller self-sufficient delivery teams responsible for quality into production (as my present organization has), it would need to adopt these same practices: peer reviews and a very high level of automated test coverage. I still believe there’s a role for a tester on such a team: advocating quality, making sure that new features/changes are appropriately tested, and ensuring that the automated regression test coverage is sufficient.