Watir podcast eight

Episode eight of the Watir Podcast was released today in which Željko Filipin interviews me about my experience in using Watir, and also about my newly announced role as the Watir Wiki Master.

Check it out if you’re interested: http://watirpodcast.com/alister-scott/

Automated testing quick wins, low hanging fruit, breathing space & oxygen

I’ve seen a lot of automated testing efforts fail, and have also had to personally deal with the repercussions and expectations that have been set by these failed efforts.

For example, I clearly remember the first day on my new job that I moved 1500km to Brisbane for. I was being introduced to the Project Director whose first words to me were:

“I have never seen automated testing succeed so I will be watching you very closely!”

Not the best thing to hear on your first day in a job!

I’ve been thinking a fair amount about why automated testing fails to meet expectations. Sure, there is a lot of sales hype generated by test tool vendors and consultants, which doesn’t help, and there are also practitioners out there without the skills or discipline to deliver successful automated testing solutions, but there must be something else.

The problem is, I believe, that the time and effort to deliver a successful automated testing solution is huge. An automated testing framework might be deemed unsuccessful before it has even been given a chance to be successful! This is why I am a strong believer in first identifying some automated testing quick wins, some low hanging fruit, pardon the idiom.

A quick win is something that requires a small amount of effort (input) for a huge amount of gain (output). These are sometimes hard to find, but almost always deliver a good outcome: some breathing space.

An example I can use is a simple application monitoring script. A place I worked had a problem with the availability of a public facing web app. Server monitoring wasn’t effective, the web/app server could be running fine but no one could log on via the web! There wasn’t a way to know when it was unavailable to users without first getting complaints via email and phone calls from unhappy people.

It only took me a few hours to develop and test a Ruby/Watir script that I set to run continuously to monitor the web app’s availability. If the web app was unavailable, it would send off an email/SMS to the people responsible for getting it running again.

The script was hugely successful. The downtime was reduced drastically, and since people saw patterns about when it was going down, it was easier to determine the cause of the availability problem. Since the script used only free/open source software (ruby and watir), there wasn’t any costs or time taken to acquire the software needed. People were like ‘wow: we didn’t know we could do this so easily’.
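The shape of a script like this can be sketched in plain Ruby. This is a minimal sketch, not the original script: the real availability check would use Watir to drive a browser through an actual log-on, and the alert would go to a real email/SMS gateway, so both are simplified here. The method names and the three-strikes threshold are my own illustrative choices.

```ruby
require 'net/http'
require 'uri'

# Simplified availability check: does the app's log-on page respond
# successfully over HTTP? (The real script drove a browser with Watir.)
def site_available?(url)
  response = Net::HTTP.get_response(URI(url))
  response.is_a?(Net::HTTPSuccess)
rescue StandardError
  false
end

# Only alert after 'threshold' consecutive failures, so a single
# network blip doesn't page anyone in the middle of the night.
def should_alert?(results, threshold = 3)
  results.size >= threshold && results.last(threshold).none?
end
```

In use, a loop would call `site_available?` every few minutes, push the result onto the list, and fire the alert when `should_alert?` turns true.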

I recently attended the Test Automation Workshop on the Gold Coast in Australia and one presentation stuck in my mind. It was by a guy called Don Taylor who used to work in my team in Canberra and his presentation was called “Oxygen for Functional Automated Testing”. He told us that quite a few people emailed him when the program for the workshop went out asking: “What’s this tool called Oxygen?” But it wasn’t a tool at all, but rather about oxygen, the breathing space you need for successful automated testing.

And that’s what I consider to be the biggest output from these quick wins. It’s what automated testing needs to be successful. The breathing space generated from short term automated testing quick wins has enabled me to spend time and effort into creating robust automated testing frameworks that have been designed to be maintainable and successful in the long term. A whole heap of oxygen.

Automated testing SWOT analysis

I attended the Test Automation Workshop at Bond University on the Gold Coast (Australia) last week (LinkedIn Group). It was good to see what others in the field are doing and share my views on automated testing. In the final session, participants were asked to share their own SWOT analysis on automated testing as it currently stands. Here’s mine (remember, it’s my personal view only):

(S) Strengths

  • Testing community
  • Level of knowledge

(W) Weaknesses

  • Automated testing is WAY TOO COMPLEX: too much code, too many spreadsheets, too many system programming languages in use, too many vendorscripts
  • Requirements based testing has flaws (see my diagram)
Requirements Based Testing Venn Diagram

(O) Opportunities

  • Open source testing tools growth (Watir, Selenium, FIT)
  • Use them at home, write about them! Share your knowledge.
  • Done well, automated testing gives you breathing space to do other things.

(T) Threats

  • Management’s expectations: replacing manual testing, ‘codeless’ automated test frameworks. Instead, focus on doing better testing and doing quicker testing.
  • Poor practitioners: give automated testing a bad name. (Possibly because they don’t have a personal development framework)
  • Bad metrics: don’t compare against work you would never have done anyway (e.g. ‘saved 10,000 hours of execution’), and be wary of metrics based on bug counts.

If you disagree (or agree) with any of these leave a comment and let me know why!

The future of testers

Bret Pettichord wrote a thought provoking blog post today that raised some interesting questions about the future of testers in automated testing:

Will there be a role for testers in the future of automated testing? Will this role be defined by developers?

I agree with a lot that Bret has to say. With the increase of new, cheaper, more open and more integrated automated testing tools, I have noticed that developers are becoming increasingly interested in, and responsible for, automated tests. Whilst traditionally automated testing tools, especially ones that test GUIs, were the responsibility of a testing team during a testing phase, these new tools can easily be integrated into software development activities.

The benefits are quite obvious; developers start making their software more testable, and as their automated tests are run more frequently, they are more likely to be kept ‘alive’, and they find more defects early: increasing quality AND velocity.

But as Bret asks, what happens to the testers in all this? Those testers who once wrote and ran those automated tests.

Like Bret, I think that testers will still have a significant role in the future of automated testing. This is because I believe that a developer and a tester have two fundamentally different perspectives. I have found that developers often have a constructive perspective: focused on building something and making sure that it works, whereas a tester has an innate deconstructive perspective: trying to break that something or prove that it won’t work. These perspectives show in the fact that testers often design more comprehensive negative tests than developers.

But I don’t believe that having a different perspective will save a tester: it’s not that easy. I think, to survive, new testers need to adapt. And to adapt they need to be two things:

  • Technologically Savvy: Testers will need to be increasingly technical to keep up with developers who are also now doing automated testing. For example, a developer may write a neat XML API so that a tester can write tests more efficiently and effectively, but the tester will need the technical skills to use it.
  • Business Focused: With both developers and testers running automated tests, a key point of differentiation is business focus. Often developers write tests for specific functionality they have built, but a tester is often required to test multiple functions. When testing these multiple functions, the tester needs to be able to demonstrate to the business what is being tested. By focusing on business scenarios (or user stories) and using business terms, it is easier to link these tests back to the business and demonstrate value.

It’s great that so much effort has been put into deliberately making Watir user friendly. It has meant that it is easy for both developers AND testers to write tests.

One area of difference is deciding on how to implement a Watir framework, because I believe that some frameworks are developer focused (Rspec, Test::Unit) whereas others are more tester focused (Confluence & Rasta).

This is why I am looking forward to seeing future WatirCraft Watir offerings in the framework space. Because of the perspective thing I mentioned, I believe that it will be challenging to design a framework that suits these two different user groups.

Automated testing and saving money

I am embarrassed when I hear test tool sales people talk about how much money automated testing can save your organisation. I have heard them rattle off figures like ‘it’ll save you 85% of testing effort’ and ‘it will reduce the number of manual testers that you need to employ’.

These statements are wrong and contradict many of the lessons learned in software testing, including “Lesson 102: Speed the development process instead of trying to save a few dollars on testing”, and “Lesson 108: Don’t equate manual testing to automated testing”.

Because Watir is not a commercial tool and there are no up front licensing costs, I suppose that a return on investment doesn’t need to be justified in quite the same way. I do understand there are maintenance costs of Watir (time and effort), but if you incrementally implement Watir sensibly you can easily show the benefits as you go.

One of the best things I have heard someone say about automated testing is that it is one of the only things that can increase both quality and velocity.

For example, making sure that every line of code checked in has been peer reviewed may indeed increase quality but it may also severely impact velocity.

By contrast, doing daily builds may increase velocity, but it may also decrease quality if each build isn’t tested properly.

So, what the test tool vendors really need to say is that automated testing, done well, can increase both your quality and velocity. And while they’re at it, they may as well mention that Watir rivals many of the proprietary offerings, and has no licensing costs.

Five reasons starting with F on why I use Watir

Bret Pettichord is working on a business case for Watir and has asked the Watir community for reasons on why they use it.

Here’s my ordered list of five reasons I use Watir:

1. It’s Free (as in beer)

Watir being free (as in beer) straight away makes it a very attractive test tool. This has meant that automated testing has ‘gotten in the door’ on projects where commercial automated testing tools would not have been looked at in the first place.

Once Watir’s benefits are evident, more and more team members want to use it, and it is then a simple, quick install on others’ machines. The process for purchasing and generating commercial test tool licences is often very lengthy, which means a commercial tool ultimately won’t be running on as many machines as Watir will be.

2. It’s Free (as in freedom)

Because Watir is open source and uses a modern object oriented scripting language, it provides its users with the freedom to tailor it to how they want to use it. Nothing is hidden and mysterious so users can often solve their own problems without consulting others or vendor support.

3. It’s Flexible

Watir is a very flexible Ruby library that supports many scripting tasks. The main reason I have used it is to conduct automated regression testing, but I have also used it to create test data in systems, to schedule and run automated web site monitoring scripts (complete with alerts), and to write one-off scripts that are quick to put together and get the job done.

4. It’s Fast

Execution speed is as good as QuickTest Professional’s, and the built-in browser synchronisation is better.

The ramp-up time for users to learn Ruby and Watir is very fast compared to languages like TSL (WinRunner), SQABasic (Rational Robot) and Java (RFT).

The installation and implementation of Watir is fast, easy and lightweight. Unlike Selenium, there are no server components to install.

5. It’s Fun

Ruby is a unique programming language in that it has been designed to be fun and you can get better at using it every day. I love the ‘Ah-ha’ moments when using Ruby where you realise that you can do something just a little bit neater and more efficient.

Business Driven Testing

Watir is a great library, but to use it to its full potential you need to create your own framework to drive it.

As with any context driven approach, the framework/solution you decide to implement has to suit your own environment. One approach that I have used successfully in multiple projects (with tweaks for each) is the business driven approach.

All the software projects I have worked on have had one main purpose: to support a business need. This business need may be to allow people to easily travel from country to country, or in contrast, to allow enthusiasts to efficiently buy books online. Creating a test suite that is focussed around these real business activities will clarify the link from them to the user tests you are writing and running. This also complements a risk based testing approach, as it is often easy to get business people to express the important and risky areas of the business, and these are what will be tested first.

The concept of my framework is that the functionality of the software is divided into functions, each of which has user tests defined and scripted. These user tests can be easily strung together to test entire business scenarios, or soap opera scenarios if you are so inclined.

The first thing to do is split the areas of the software up into functions. You can then define user tests for each function. User tests are something that a user will do or currently does. They have to be real world. They usually consist of some inputs (data), some processing and an outcome. If the outcome is positive, usually there is output, for example, an ID. If the outcome is negative, usually there is an error message.

Performing this activity for the Depot Rails Application creates something like:

Function: Customer
User Tests: Empty Cart, Add Book to Cart, Check Out

Function: Administration
User Tests: Log On, Ship All Book Orders, Ship Some Book Orders, Add User, Delete User

This activity can be done for any kind of requirements and even if requirements don’t exist. You can do this from a functional requirements specification, use cases/user stories (easier), and you can do this (albeit less easily) from an existing system interface or prototype.

Your functions then become modules of Watir code, and the user tests become methods in these modules. Each method takes some data, and returns an outcome (it worked or it didn’t) and optionally an error or an output value.

For example, this is an empty Customer module with just the methods defined.


module Customer
  URL = 'http://localhost:3000/store/'

  def Customer.add_book(book_title)
    # ...
  end

  def Customer.check_out(customer_name, customer_email, customer_address, customer_payment_method)
    # ...
  end

  def Customer.empty_cart
    # ...
  end
end
For efficiency and usability, I always ensure that each user test is home based, meaning that it starts and finishes on the same home page/screen. This avoids ‘state’ problems occurring, so a series of user tests can be called to test a business scenario without worrying about where each user test starts and ends. This also avoids repetitious and inefficient startup/logging in/logging out for each user test. Don’t laugh, I have often seen this done.

Your user tests should accommodate positive and negative tests in the same method. For example, our ‘Customer.check_out(…)’ user test should be able to test both valid and invalid credit card numbers. This is done by making the method return the outcome and then determining the result outside the method, depending on your expected outcome. For example, although the method may return an error, we could be expecting this in our negative test, so our test actually passes.

I have seen many people write specific methods to test specific error messages. Don’t be tempted to write ‘Customer.check_out_valid_card(…)’ and ‘Customer.check_out_invalid_card(…)’. This leads to an unmaintainable set of user tests due to the repetition of code required. Limiting the number of methods also makes it easier to define business scenarios, as there are a limited number of methods for a business scenario tester to choose from.
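The idea of determining the result outside the method can be sketched in plain Ruby. Everything here is illustrative rather than from my actual framework: the Outcome struct, the simplified card-number check, and the `test_result` helper are all stand-ins for whatever your own framework provides.

```ruby
# Illustrative only: a user test returns an outcome instead of passing
# or failing itself.
Outcome = Struct.new(:success, :message)

# A hypothetical check_out user test that validates a card number
# format and returns an outcome either way.
def check_out(card_number)
  if card_number =~ /\A\d{16}\z/
    Outcome.new(true, 'Order placed')
  else
    Outcome.new(false, 'Invalid card number')
  end
end

# The test result is decided by comparing the actual outcome with the
# expected one, so a negative test passes when the method reports the
# failure we were expecting.
def test_result(outcome, expected_success)
  outcome.success == expected_success ? 'PASS' : 'FAIL'
end
```

With this arrangement, `check_out('bad-card')` returns a failure outcome, and a negative test that expects that failure still reports PASS.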
Once you have defined modules and methods, you need to define the business scenarios, which involves running a series of user tests, providing data (positive and negative) as well as expected outcomes. It is best to use a data presentation language to do this.

Excel is a very common data presentation language for designing user tests and business scenarios, but versioning Excel files can be problematic due to their binary nature.

A wiki is excellent for defining user tests and business scenarios in wiki tables, as wikis have built-in versioning, a centralised, accessible and flexible nature, and are generally easy to use.
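As a rough sketch, a business scenario laid out as a wiki table might look like this (the columns, data and expected outcomes are purely illustrative, using the Depot application’s user tests from earlier):

```
|| User Test            || Data                                      || Expected Outcome ||
| Customer.empty_cart   |                                            | Success           |
| Customer.add_book     | Agile Web Development with Rails           | Success           |
| Customer.check_out    | Dave, dave@example.com, 123 Main St, check | Success           |
| Customer.check_out    | Dave, dave@example.com, 123 Main St, ???   | Error             |
```

Each row pairs a user test method with its data and the outcome the scenario expects, which is what makes both positive and negative rows possible in the same table.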

In the coming weeks I will discuss how to set up a wiki page in Confluence as a business scenario which includes a series of user tests and then how to dynamically call the ruby methods and determine the test results from the method outcomes.