Internationalization and Localization Testing

Internationalization vs Localization

Internationalization is the process of making a software application easily adaptable for international audiences. Localization is the process of adding support for a new locale to a software application.

Internationalization is most often implemented by ensuring that labels and values on screens are not ‘hard-coded’ but are read from a common source, so they can easily be switched when running in a different locale. Localization is adding a new source, or locale, so that a new audience can use your application.
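
In .NET, for example, this is commonly done with resource files. Here is a minimal sketch, assuming a "MyApp.Labels" .resx resource set; the Labels class and resource names are illustrative, not from a specific application:

using System.Globalization;
using System.Reflection;
using System.Resources;

public static class Labels
{
    // Illustrative: "MyApp.Labels" is a .resx resource set containing the
    // default (English) strings, with a translated copy per supported locale.
    private static readonly ResourceManager Resources =
        new ResourceManager("MyApp.Labels", Assembly.GetExecutingAssembly());

    // Screens call Labels.Get("Welcome") instead of hard-coding "Welcome";
    // the value returned depends on the current UI culture.
    public static string Get(string key) =>
        Resources.GetString(key, CultureInfo.CurrentUICulture);
}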

I have found the most effective way to test that an application is internationalized is to create a new locale specifically for testing purposes. Your new locale has a specific, known formula for translation, which you use to check that your application is fully internationalized.

You should then also run all your automated acceptance tests in your new locale to ensure that all new functionality is internationalized as it is developed.

I’d recommend either of the following approaches to define a new locale for testing:

| Approach | Description | ‘Welcome’ translated | Pros | Cons |
|----------|-------------|----------------------|------|------|
| Lorem Ipsum | A defined, fixed-length string | ‘Lorem ipsum dolor sit amet’ | Can see how short elements overflow; easy to test | Screens don’t look as realistic; screens can be hard to understand |
| Reversal | Reversal of the same string | ‘emocleW’ | Screens look realistic; easy to test | Can’t see overflow effects |

I prefer the reversal approach as it’s the more realistic representation, and it’s easy to do some exploratory testing in a different ‘real’ locale to spot overflow/formatting issues.
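
To make the ‘known formula’ concrete, a reversal translator can be just a few lines; this ReverseTranslator class is an illustrative sketch, not from the original post:

public static class ReverseTranslator
{
    // The formula for the reversal test locale: the 'translation' of any
    // English string is simply that string reversed.
    public static string Translate(string english)
    {
        var chars = english.ToCharArray();
        System.Array.Reverse(chars);
        return new string(chars);
    }
}

So ReverseTranslator.Translate("Welcome") returns "emocleW", which is exactly the value your tests can expect to find on screen.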

Say you have a screen that looks something like this in English:

[Screenshot: the screen in English]

This same screen in Lorem Ipsum would look something like:

[Screenshot: the same screen in the Lorem Ipsum locale]

And it would look like this in the reversal locale:

[Screenshot: the same screen in the reversal locale]

Manually testing internationalization

This is easy: run your app in whatever test locale you have decided to create, then check that every label on every screen shows the translated value rather than the original English.

Automatically testing internationalization

You can fairly easily run your automated acceptance tests against your test locale. You just need to ensure that every label your tests assert on is translated to the test locale within your acceptance tests.

For example:

Given I am an anonymous user
When I visit the personal details screen
Then I should see the heading “Please enter your details”
And I should see the label “Name:”
And I should see the label “Email:”
And I should see the label “Phone:”
And I should see the button with label “Continue”

Each of these steps asserts on a specific label that needs to be translated. Instead of asserting the English value, your automated test code first translates the string to your test locale and then asserts that translated value. Your automated tests will then fail if a string appears on the screen untranslated.
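
In SpecFlow, a step definition for the label assertions above could look something like this; the TestLocale translator and the CurrentPage.HasLabel helper are assumptions for the sketch, not a prescribed API:

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class LabelSteps
{
    [Then(@"I should see the label ""(.*)""")]
    public void ThenIShouldSeeTheLabel(string englishLabel)
    {
        // Translate the expected English label into the test locale first,
        // then assert on the translated value. If the screen still shows the
        // untranslated English string, this assertion fails.
        var expected = TestLocale.Translate(englishLabel);
        Assert.IsTrue(CurrentPage.HasLabel(expected),
            $"Expected translated label '{expected}' was not found");
    }
}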

Don’t bury your hooks

A slightly technical post here.

If you’re using a BDD framework such as SpecFlow or JBehave, can I please ask that you don’t bury your hooks. These frameworks provide hooks, or events, that you can use to run common code at fixed points, such as before every scenario. An example in SpecFlow is:

[BeforeScenario]
public static void GoToHomePage()
{
    // Runs before every scenario: start each test from a known page.
    Driver.Navigate().GoToUrl(GoogleUrl);
}

You can put these hooks in any steps class, but please put them in one place, all together, preferably in a class/file named Hooks or the like.

I recently came across a code base where these hooks were spread across lots of different steps files, which made it very confusing and hard to debug, as you couldn’t tell where all this code was being called from when running the tests.
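
As a sketch of what ‘one place’ could look like; the AfterScenario body is illustrative, and Driver/GoogleUrl are assumed to be defined elsewhere in the test project, as in the earlier example:

using TechTalk.SpecFlow;

[Binding]
public class Hooks
{
    // Every SpecFlow hook lives in this one file (e.g. Hooks.cs), so there is
    // exactly one place to look for shared setup and teardown when debugging.

    [BeforeScenario]
    public static void GoToHomePage()
    {
        Driver.Navigate().GoToUrl(GoogleUrl);
    }

    [AfterScenario]
    public static void ClearBrowserState()
    {
        Driver.Manage().Cookies.DeleteAllCookies();
    }
}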

Refactoring legacy code using automated acceptance tests

A question I often get asked as a consultant is how to maintain a legacy code base that has few or no unit tests, or any automated tests for that matter. How do you start to refactor the code and introduce unit tests?

The problem with immediately adding unit tests to a unit-test-free legacy code base is that the code won’t be testable (as it wasn’t test-driven), and it will require refactoring to make it so. But refactoring legacy code without unit tests as a safety net is risky and can easily break your application and introduce bugs, so you’re in a catch-22.

A good solution to this problem is to write a few specific automated acceptance tests for the feature you would like to add unit tests to, before you write any unit tests.

For example, imagine you would like to add some unit tests to the registration feature of your application. You might automate four acceptance test scenarios:

Scenario One: User can register using valid credentials
Scenario Two: User can’t register using existing email address
Scenario Three: User can’t register without providing mandatory data
Scenario Four: User can’t register from a non-supported country

Before you make any changes to your code base, you must write and execute these scenarios until they are all passing.

Once they are all passing, you can start writing unit tests one by one until each passes. As you write unit tests and refactor your code as required, re-run your acceptance tests to ensure the high-level functionality hasn’t been broken by your changes.
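
For instance, one of the first unit tests for the registration feature might look something like this; RegistrationValidator is a hypothetical class extracted from the legacy code during refactoring, covering the same rule as Scenario Two:

using NUnit.Framework;

[TestFixture]
public class RegistrationValidatorTests
{
    [Test]
    public void Register_WithExistingEmailAddress_IsRejected()
    {
        // Hypothetical: the refactoring extracts registration rules into a
        // small, constructor-injected class that is easy to unit test.
        var validator = new RegistrationValidator(existingEmails: new[] { "jane@example.com" });

        var result = validator.Validate("jane@example.com");

        Assert.IsFalse(result.IsValid);
    }
}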

Once you have completed your unit tests you will have a set of unit tests and a set of automated regression tests, with refactored code that supports both.

You can repeat this for each feature in your system, and gradually combine a few key scenarios into an end-to-end scenario that replicates a common user journey through your application.

At the end of this process you will have cleaner code, a set of unit tests, a set of feature-based acceptance tests and one or two end-to-end user journeys that you can execute against any change you make.

Paying down technical debt is hard, but following a methodical approach, using automated acceptance tests as a safety net, makes it practical to do so.

Is test management wrong?

I was somewhat confused by what was meant by the recent article entitled “Test Management is Wrong”. I couldn’t quite work out whether the author meant Test Management (the activity) is wrong, Test Managers (the people) are wrong or Test Management Tools (the things) are wrong, but here’s my view of these three things:

Test Management (the activity): now embedded in agile teams;
Test Managers (the people): on the way out; and
Test Management Tools (the things): gathering dust

Let me explain with an example. Most organizations see the benefit of agile ‘iterative’ development and have restructured, or are in the process of restructuring, their teams to work this way. A typical transformation looks like this:

[Figure: a typical agile transformation]

Instead of having three separate, larger ‘analysis’, ‘development’ and ‘test’ teams, the organization may move to four smaller cross-functional teams, each consisting of, say, one tech lead, one analyst, one tester and four programmers.

Previously, a test manager managed the testing process (and the testing team), probably using a test management tool such as Quality Center.

Now, each agile team is responsible for its own quality. The tester advocates for quality and encourages activities that build quality in, such as accurate acceptance criteria, unit testing, automated acceptance testing, story testing and exploratory testing. These activities aren’t managed in a test management tool but against each user story in a lightweight story management tool (such as Trello). The tester is responsible for managing his/her own testing.

Business value is defined and measured an iteration at a time by the team.

So what happens to the Analysis, Development and Test Managers in the previous structure? Depending on the size of the organization, there may be a need for a ‘center of excellence’ or ‘community of practice’ in each of these areas to ensure that new ideas and approaches are seeded across the cross-functional teams. The Test Manager may be responsible for working with each tester in the teams to ensure this happens. But depending on the organization and the testers, this might not be needed. The same goes for the Analysis Manager, and to a lesser extent, the Development Manager.

Step-by-step test cases (such as those in Quality Center) are no longer needed: each user story has acceptance criteria, and each team writes automated acceptance tests for the functionality it develops, which act as both automated regression tests and living documentation.

So, to answer the author’s original question: no, I don’t think test management is wrong; we just do it in a different way now.