Manoj Kumar K asked me via LinkedIn for some feedback on his article about the differences between Continuous Integration/Testing/Delivery and Deployment.
Here’s my response (left as a comment on the original article):
Continuous Integration is continually integrating several developers’ code: compiling it, deploying it, and automatically testing it (that testing part being Continuous Testing). This can be done on a branch, or on trunk/master.
When every build is continually production ready, but you don’t deploy every build automatically, instead choosing to deploy, say, x times a week, that is Continuous Delivery.
When you automatically deploy every build to production after it’s been automatically integrated, built and tested, that is Continuous Deployment.
I’ve previously briefly explained the semantics here.
So Continuous Integration (CI) and Continuous Testing together allow Continuous Delivery. Continuous Delivery allows Continuous Deployment (should you choose to do this).
If your continuous integration (build) box doesn’t have a solid state drive: go buy one.
Solid state drives are the quickest and cheapest way to speed up software development. As most continuous integration builds are IO-bound, solid state drives provide tremendous speed increases: I’ve seen builds and tests run in less than a fifth of the time a traditional spinning hard disk would take. Solid state drives have reached a price point where the lower capacity models cost about the same as higher capacity hard disks (capacity you don’t need on a build machine unless you’re storing movies etc.)
It used to be that the build machine would be whatever crappy hardware was lying around: often an old development machine someone no longer used. This is one of the worst decisions you can make on a project.
Having a super-fast, SSD-based build machine ensures fast, reliable build results, fast feedback and a happy development team.
The web application I am currently working on must meet accessibility standards: WCAG 2.0 AA to be precise. WCAG 2.0 is a collection of guidelines that ensure your site is accessible for people with disabilities.
An example of poor accessibility design is an image missing an alt attribute, or a document that doesn’t specify its language, eg:
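For instance, the markup below (illustrative only, not from our application) shows both problems, followed by the accessible equivalent:

```html
<!-- Inaccessible: no document language, no alt text on the image -->
<html>
  <body><img src="logo.png"></body>
</html>

<!-- Accessible: lang declared on the document, alt text on the image -->
<html lang="en">
  <body><img src="logo.png" alt="Company logo"></body>
</html>
```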
Building Accessibility In
Whilst we’ll later be doing accessibility assessments with people who are blind or have low vision, we need to make sure we build accessibility in from the start. To do this, we need automated accessibility tests as part of our continuous integration build pipeline. My challenge this week was to do just this.
Automated Accessibility Tests
First I needed to find a tool to validate against the spec. We’re developing the web application locally, so the tool needs to run locally too. There’s a tool called TotalValidator which offers a command line version; the only downside is the licensing cost: to use the command line version you need to buy at least six copies of the tool at £25 each, approximately US$240 in total. There’s no trial of the command line tool unless you buy at least one copy of the pro tool at £25. I didn’t want to spend money on something that might not work, so I kept looking for alternatives.
There are two sites I found that validate accessibility by URL or supplied HTML code: AChecker and WAVE.
AChecker: this tool works really well. It even supplies a REST API, but I couldn’t find a way to call the API to validate supplied HTML code (instead of a URL), which is what I would like to do. The software behind AChecker is open source (PHP), so you can install your own instance behind your firewall if you wish.
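For the by-URL case, a minimal sketch of calling the API looks like this. It only builds the request URL; the endpoint and parameter names follow AChecker’s web service interface as I understand it, and the web service `id` is a placeholder you’d get by registering a free account on achecker.ca:

```python
# Sketch: build a request URL for AChecker's by-URL accessibility check.
# Endpoint and parameter names are assumptions from AChecker's web service
# docs; "YOUR_WEB_SERVICE_ID" is a placeholder for your own account's id.
from urllib.parse import urlencode

ACHECKER_ENDPOINT = "https://achecker.ca/checkacc.php"

def build_check_url(page_url, web_service_id, guide="WCAG2-AA"):
    """Build the GET URL to validate page_url against a WCAG guideline."""
    params = urlencode({
        "uri": page_url,          # the page to validate
        "id": web_service_id,     # your AChecker web service id
        "guide": guide,           # which guideline to validate against
        "output": "rest",         # ask for a machine-readable report
    })
    return ACHECKER_ENDPOINT + "?" + params

url = build_check_url("http://example.com/", "YOUR_WEB_SERVICE_ID")
```

Fetching that URL returns an XML report you can parse for the number of known problems on the page.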
WAVE: a new tool recently released by WebAIM (Web Accessibility in Mind). Again, this is an online checker that allows you to validate by URL or supplied HTML code, but unfortunately there’s no API (yet) and the results aren’t as easy to read programmatically.
The final solution I came up with is a specifically tagged web accessibility feature in our acceptance tests project. It has scenarios that use WebDriver to navigate through our application, capturing the HTML source of each page visited. Finally, it visits the AChecker tool online, validates each piece of captured HTML, and fails the build if any accessibility problems are found.
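The shape of that feature can be sketched in Python with Selenium WebDriver. This is a sketch under assumptions, not our actual test code: the page list is hypothetical, and `check_html` stands in for whatever drives the AChecker form and scrapes its error count:

```python
# Sketch: crawl pages with WebDriver, capture their HTML, and fail the
# build if the checker reports any problems. `check_html` is a
# placeholder callable that returns the number of accessibility errors
# found in one page's HTML (e.g. by driving AChecker's online form).

def collect_sources(driver, urls):
    """Visit each URL and capture the rendered HTML source."""
    sources = {}
    for url in urls:
        driver.get(url)
        sources[url] = driver.page_source
    return sources

def build_passes(error_counts):
    """The build is green only when every page has zero errors."""
    return all(count == 0 for count in error_counts.values())

def run_accessibility_check(driver, urls, check_html):
    """Return True if every captured page validates cleanly."""
    sources = collect_sources(driver, urls)
    errors = {url: check_html(html) for url, html in sources.items()}
    return build_passes(errors)
```

In the real feature, `driver` would be a Selenium WebDriver instance (e.g. `webdriver.Firefox()`), `urls` the pages your scenarios navigate, and a `False` result is what turns the build light red.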
We have a specific build light that runs all the automated acceptance tests and accessibility tests. If any of these fail, the light goes red.
It’s much better if it looks like this:
It was fairly easy to use an existing free online accessibility checker to validate all the HTML in our local application, and to make that status highly visible to the development team. By building accessibility in, we’re reducing the expected number of issues when we conduct more formal accessibility testing.
Bonus Points: faster accessibility feedback
Ideally, as a page is being developed, the developer/tester should be able to check accessibility immediately (rather than waiting for the build to fail). The easiest way I have found to do this behind a firewall is to use the WAVE Firefox extension, which displays errors as you use your site. Fantastic!
(Apple and Microsoft each have one known accessibility problem, Google has nine!)