Internationalization and Localization Testing

Internationalization vs Localization

Internationalization is the process of making a software application easily adaptable for use by international audiences. Localization is the process of adding support for a new locale to an internationalized application.

Internationalization is most often implemented by ensuring that labels and values on screens are not 'hard-coded': they are read from a common source so they can be easily switched when running in a different locale. Localization is adding a new source, or locale, so that a new audience can use your application.
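For example, in C# that common source might be a set of .resx resource files. A minimal sketch (the 'MyApp.Labels' resource name and the LabelSource class are hypothetical, just for illustration):

using System.Globalization;
using System.Resources;

public static class LabelSource
{
    // Labels are read from per-locale resource files (Labels.resx,
    // Labels.fr.resx, ...) rather than hard-coded, so switching the
    // locale switches every label on screen.
    private static readonly ResourceManager Labels =
        new ResourceManager("MyApp.Labels", typeof(LabelSource).Assembly);

    public static string Get(string key, string locale)
    {
        return Labels.GetString(key, CultureInfo.GetCultureInfo(locale));
    }
}

So LabelSource.Get("Welcome", "fr") returns the French translation of the 'Welcome' label, and adding a locale means adding a resource file rather than changing code.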

I have found the most effective way to test that an application is internationalized is to create a new locale specifically for testing purposes. Your new locale has a specific, known formula for translation which you use to check that your application is fully internationalized.

You should then also run all your automated acceptance tests in your new locale to ensure that all new functionality is internationalized as it is developed.

I’d recommend either of the following approaches to define a new locale for testing:

Approach: Lorem Ipsum
Description: a defined, fixed-length string
'Welcome' translated: 'Lorem ipsum dolor sit amet'
Pros: can see how short elements overflow; easy to test
Cons: screens don't look as realistic; screens can be hard to understand

Approach: Reversal
Description: reversal of the same string
'Welcome' translated: 'emocleW'
Pros: screens look realistic; easy to test
Cons: can't see overflow effects

I prefer the reversal approach as it’s the most realistic representation, and it’s easy to do some exploratory testing on a different ‘real’ locale to spot overflow/formatting issues.
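The reversal 'translation' itself can be trivial. A minimal sketch (the ReversalLocale name is hypothetical):

using System;

// Hypothetical pseudo-locale translator for testing:
// reverses each string, so 'Welcome' becomes 'emocleW'.
public static class ReversalLocale
{
    public static string Translate(string text)
    {
        var chars = text.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}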

Say you have a screen that looks something like this in English:

English Screen

This same screen in Lorem Ipsum would look something like:

Lorem Screen

And it would look like this in Reverse:

Reverse English Screen

Manually testing internationalization

This is easy. Run your app in whatever test locale you have decided to create, then check that every label on every screen shows the translated value rather than the English one.

Automatically testing internationalization

You can fairly easily ensure your automated acceptance tests run against your test locale. You just need to ensure that every label asserted in your acceptance tests is translated first.

For example:

Given I am an anonymous user
When I visit the personal details screen
Then I should see the heading “Please enter your details”
And I should see the label “Name:”
And I should see the label “Email:”
And I should see the label “Phone:”
And I should see the button with label “Continue”

Each of these steps has a specific label that needs to be translated. Instead of asserting the English value directly, your automated test code first translates the expected string into your test locale, and then asserts that translated value. Your automated tests will then fail if the actual string appears on the screen untranslated.
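A minimal SpecFlow sketch of one such step, assuming the hypothetical ReversalLocale helper from earlier and a shared Driver instance (both are illustrative, not a specific framework's API):

using System;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class LabelSteps
{
    [Then(@"I should see the label ""(.*)""")]
    public void ThenIShouldSeeTheLabel(string englishLabel)
    {
        // Translate the expected English label into the test locale first,
        // then assert the translated value. If the application renders the
        // untranslated string, this assertion fails.
        var expected = ReversalLocale.Translate(englishLabel);
        var labels = Driver.FindElements(By.TagName("label"));
        Assert.IsTrue(labels.Any(l => l.Text == expected),
            String.Format("Expected to find the translated label '{0}'", expected));
    }
}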

Don’t bury your hooks

A slightly technical post here.

If you're using a BDD framework such as SpecFlow or JBehave, can I please ask that you don't bury your hooks. These frameworks provide hooks, or events, that let you run common code at fixed points, such as before every scenario. An example in SpecFlow is:

[BeforeScenario]
public static void GoToHomePage()
{
    Driver.Navigate().GoToUrl(GoogleUrl);
}

You can put these hooks in any steps class, but please, put them in one place, all together, preferably in a class/file named hooks or the like.
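For example, a single consolidated hooks class might look like this (the ClearCookies hook is a hypothetical second example):

using TechTalk.SpecFlow;

[Binding]
public class Hooks
{
    // Every hook lives here, so it's obvious what runs around each scenario.
    [BeforeScenario]
    public static void GoToHomePage()
    {
        Driver.Navigate().GoToUrl(GoogleUrl);
    }

    [AfterScenario]
    public static void ClearCookies()
    {
        Driver.Manage().Cookies.DeleteAllCookies();
    }
}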

I recently came across a code base where these hooks were spread across lots of different steps files, which made the tests very confusing and hard to debug, as you couldn't tell where all this code was being called from when running them.

Stop Firefox auto-updating (and breaking your CI build)

If you run regular automated WebDriver tests using Firefox, then chances are you've encountered the problem where Firefox updates itself to a version your WebDriver bindings don't support, causing all your tests to fail. It's really annoying, so I suggest you set your Firefox install to never automatically update, as below, to avoid this happening.

Stop Firefox auto updating

Update:

As Alex points out below, you can also do this programmatically, which is handy for a machine where you never actually launch Firefox yourself, i.e. a headless CI server.

require 'selenium-webdriver'

profile = Selenium::WebDriver::Firefox::Profile.new
# disable auto-update so Firefox can't upgrade itself mid-build
profile['app.update.auto'] = false
profile['app.update.enabled'] = false
driver = Selenium::WebDriver.for :firefox, :profile => profile
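If you drive Firefox from C#, as in the other examples here, a sketch of the equivalent profile set-up is:

using OpenQA.Selenium.Firefox;

var profile = new FirefoxProfile();
// Disable auto-update so Firefox can't upgrade itself past the
// version the WebDriver bindings support.
profile.SetPreference("app.update.auto", false);
profile.SetPreference("app.update.enabled", false);
var driver = new FirefoxDriver(profile);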

Refactoring legacy code using automated acceptance tests

A question I often get asked as a consultant is how to maintain a legacy code base that has very little or no unit tests, or any automated tests for that matter. How do you start to refactor the code and introduce unit tests?

The problem with immediately adding unit tests to a unit-test-free legacy code base is that the code won't be testable (as it wasn't test-driven), and it will require refactoring to make it so. But refactoring legacy code without unit tests as a safety net is risky and can easily break your application and introduce bugs, so you're in a catch-22.

A good solution to this problem is to write a few specific automated acceptance tests for the feature you would like to add unit tests to, before you write any unit tests.

For example, imagine you would like to add some unit tests to the registration feature of your application. You might automate four acceptance test scenarios:

Scenario One: User can register using valid credentials
Scenario Two: User can’t register using existing email address
Scenario Three: User can’t register without providing mandatory data
Scenario Four: User can’t register from a non-supported country
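Written in the same style as the earlier example, Scenario One might look something like this (the exact wording is hypothetical):

Given I am an anonymous user
When I register with a valid name, email address and password
Then I should see the message "Registration successful"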

Before you make any changes to your code base, you must write and execute these scenarios until they are all passing.

Once they are all passing, you can start writing unit tests one by one until each passes. As you write unit tests and refactor your code as required, you should run your specific acceptance tests to ensure the high-level functionality hasn't been broken by your changes.

Once you have completed your unit tests you will have a set of unit tests, a set of automated regression tests, and code that supports both.

You can repeat this for each feature in your system, and slowly combine a few key scenarios into an end-to-end scenario that replicates a common user-journey throughout your application.

At the end of this process you will have cleaner code, a set of unit tests, a set of feature based acceptance tests and one or two end-to-end user journeys that you can execute against any change you make.

Paying down technical debt is hard, but following a methodical approach, using automated acceptance tests as a safety net, makes it practical to do so.

Is test management wrong?

I was somewhat confused by the recent article entitled "Test Management is Wrong". I couldn't quite work out whether the author meant Test Management (the activity) is wrong, Test Managers (the people) are wrong, or Test Management Tools (the things) are wrong, but here's my view of these three things:

Test Management (the activity): now embedded in agile teams;
Test Managers (the people): on the way out; and
Test Management Tools (the things): gathering dust

Let me explain with an example. Most organizations see the benefit of agile 'iterative' development and have restructured, or are in the process of restructuring, their teams to work in this way. A typical transformation looks like this:

Agile Transformation

Instead of having three separate, larger 'analysis', 'development' and 'test' teams, the organization may move to four smaller cross-functional teams, each consisting of, say, one tech lead, one analyst, one tester and four programmers.

Previously, a test manager managed the testing process (and the testing team), probably using a test management tool such as Quality Center.

Now each agile team is responsible for its own quality: the tester advocates quality and encourages activities that build quality in, such as accurate acceptance criteria, unit testing, automated acceptance testing, story testing and exploratory testing. These activities aren't managed in a test management tool but against each user story in a lightweight story management tool (such as Trello). The tester is responsible for managing his/her own testing.

Business value is defined and measured an iteration at a time by the team.

So what happens to the Analysis, Development and Test Managers in the previous structure? Depending on the size of the organization, there may be a need for a 'center of excellence' or 'community of practice' in each of these areas to ensure that new ideas and approaches are seeded across the cross-functional teams. The Test Manager may be responsible for working with each tester in the teams to ensure this happens, but depending on the organization and the testers, this might not be needed. The same goes for the Analysis Manager and, to a lesser extent, the Development Manager.

Step-by-step test cases (such as those in Quality Center) are no longer needed: each user story has acceptance criteria, and each team writes automated acceptance tests for the functionality it develops, which act as both automated regression tests and living documentation.

So, to answer the author's original question: no, I don't think test management is wrong; we just do it in a different way now.

Automated local accessibility testing using WAVE and WebDriver

I wrote a while back about automated WCAG 2.0 accessibility testing in our build pipeline.

My solution involved capturing the HTML of each unique page in our application visited via WebDriver, then navigating to the online WAVE tool to check the accessibility of each chunk of HTML, all done during a dedicated accessibility stage of our build pipeline.

That was working pretty well until I noticed our IP address was starting to be blocked by WAVE due to an unreasonable number of checks… Doh!

Locally we've all been using the Firefox WAVE toolbar, as it's very easy to run against a locally running instance of our application. Which made me think: how do I automate the Firefox WAVE toolbar and avoid the WAVE web site altogether?

Today I changed our accessibility tests to use the WAVE toolbar. It was a bit of a pain, since we use Windows, and configuring Firefox shortcut keys to trigger the WAVE toolbar's accessibility checks seems to be broken there.

What I ended up doing is firing the keystrokes needed to run the check via the Tools menu in Firefox. I then check the DOM for WAVE errors and, if there are any, take a screenshot and fail the test. Quite simple really, and it works very reliably and quickly.

Makes me think I should have done this to start with, but a good outcome nonetheless.

Some C# WebDriver code to do this follows. This should be easy to adapt to another language.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using System.Drawing.Imaging;

namespace Dominos.OLO.Html5.Tests.Acceptance
{
    [TestClass]
    public class AccessibilityTests
    {
        [TestMethod]
        public void AccessibilityTest()
        {
            // Load the WAVE toolbar extension into a fresh Firefox profile
            var profile = new FirefoxProfile();
            profile.AddExtension("wavetoolbar1_1_8_en.xpi");
            var driver = new FirefoxDriver(profile);

            driver.Navigate().GoToUrl("http://apple.com");

            // Fire the keystrokes that run the WAVE check via the Firefox
            // Tools menu (the toolbar's own shortcut keys are broken on
            // Windows): Alt+T opens the menu, the arrow keys navigate to
            // the WAVE entry and Enter triggers the check.
            var b = driver.FindElement(By.TagName("body"));
            b.SendKeys(Keys.Alt + "T");
            b.SendKeys(Keys.ArrowDown);
            b.SendKeys(Keys.ArrowDown);
            b.SendKeys(Keys.ArrowDown);
            b.SendKeys(Keys.ArrowDown);
            b.SendKeys(Keys.ArrowRight);
            b.SendKeys(Keys.Enter);

            // WAVE injects tip icons into the DOM; if none are present the
            // check didn't run at all.
            var waveTips = driver.FindElements(By.ClassName("wave4tip"));
            if (waveTips.Count == 0) Assert.Fail("Could not locate any WAVE validations - please ensure that WAVE is installed correctly");

            // Any tip whose alt text starts with "ERROR:" is an accessibility
            // error: capture a screenshot for diagnosis, then fail the test.
            foreach (var waveTip in waveTips)
            {
                if (!waveTip.GetAttribute("alt").StartsWith("ERROR:")) continue;
                var fileName = String.Format("{0}{1}{2}", "WAVE", DateTime.Now.ToString("HHmmss"), ".png");
                var screenShot = ((ITakesScreenshot)driver).GetScreenshot();
                screenShot.SaveAsFile(fileName, ImageFormat.Png);
                Assert.Fail("WAVE errors were found on the page. Please see screenshot for details");
            }
        }
    }
}

and finally the resulting screenshot:

Apple WAVE check results