AWTA 2009 survey results

I conducted a survey for the Austin Workshop on Test Automation (AWTA) to see what people thought was good about the workshop and what could be improved in the future.  The response was very positive.

Whilst there were twenty-one questions, I believe the following two graphs tell the story:

[Graph: How much fun did you have at AWTA 2009?]
[Graph: Would you attend another AWTA?]

The full results are available here. Bret also did a nice writeup of the AWTA 2009 proceedings here.

Easily define Watir tests in Excel, OO, wikis and Google Docs using Roo

I spent this evening playing with Roo, the Ruby library for reading data from spreadsheets, and I am very impressed. In a very small amount of time I was able to define tests in four different places and execute them from each of these:

  • An Excel file (.xls) stored locally;
  • An OpenOffice spreadsheet (.ods) stored locally;
  • An Excel file (.xls) stored in a Confluence wiki page with the Confluence Office Connector; and
  • A Google Docs spreadsheet.

The great thing about Roo is that you don’t actually need Excel installed: Roo simply parses the file itself, unlike the Ruby Win32 Excel COM API I have used previously, which automates a running copy of Excel.
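To give a sense of how little code Roo needs, here is a minimal sketch (the file name is a placeholder) that opens a local spreadsheet and prints every cell of its first sheet:

require 'rubygems'
require 'roo'

ss = Excel.new("watirmelon.xls") # Roo parses the file directly; Excel itself is not required
ss.default_sheet = ss.sheets.first
ss.first_row.upto(ss.last_row) do |row|
	ss.first_column.upto(ss.last_column) do |col|
		print "#{ss.cell(row, col)}\t"
	end
	puts
end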

The spreadsheet (embedded in Confluence) looks like this:

[Screenshot: the Excel spreadsheet embedded in a Confluence page]

The cool thing about embedding it in Confluence is that you can click the title of the spreadsheet to edit it (in OpenOffice in my case).

I made some minor changes to my existing code that executes my depot tests from a wiki page, and it was as easy as that: a data-driven Watir solution with four possible ways to define test cases. Cool.

You can find all the code needed below.


require 'rubygems' # load RubyGems before requiring any gems (needed on Ruby 1.8)
require 'watir'
require 'roo'
require './Customer.rb'
require './Common.rb'

# Choose the spreadsheet source based on the first command-line argument.
case ARGV[0]
when "excel"
	ss = Excel.new("watirmelon.xls")
when "wiki"
	ss = Excel.new("http://localhost:8080/download/attachments/2097153/watirmelon.xls")
when "gdocs"
	ss = Google.new("http://spreadsheets.google.com/ccc?key=pEcLrW3b2djraE8JF_2fJWA")
else
	ss = Openoffice.new("watirmelon.ods")
end

ss.default_sheet = ss.sheets.first
ss.first_row.upto(ss.last_row) do |line|
	if ss.cell(line,1).to_s.strip != "Function" then # skip the header row; every other row is an executable test
		begin
			# Columns: 1 = module, 2 = test name, 3 = comments, 4 = expected outcome,
			# 5 = expected error, 6 onwards = test parameters.
			module_name = ss.cell(line,1).strip
			method_name = ss.cell(line,2).downcase.strip.gsub(' ','_') # derive the method name from the test name: lower-case, spaces become underscores
			comments = ss.cell(line,3).strip
			expected_outcome = ss.cell(line,4).strip
			expected_error = ss.cell(line,5).strip
			required_module = Kernel.const_get(module_name)
			required_method = required_module.method(method_name)
			arity = required_method.arity() # this is how many arguments the method requires, it is negative if a 'catch all' is supplied.
			arity = ((arity * -1) - 1) if arity < 0
			parameters = []
			1.upto(arity) do |p|
				parameters.push(ss.cell(line,p+5))
			end
			actual_outcome, actual_output = required_method.call(*parameters)
			# determine the result.
			if (expected_outcome == 'Success') and actual_outcome then
			    result = "PASS"
			elsif (expected_outcome == 'Error') and (not actual_outcome) and (expected_error == actual_output) then
			    result = "PASS"
			else
			    result = "FAIL"
			end
			puts "\nRunning Test: #{method_name} for #{module_name}."
			puts "Expected Outcome: #{expected_outcome}."
			puts "Expected Error: #{expected_error}."
			puts "Actual Outcome: #{actual_outcome}."
			puts "Actual Output: #{actual_output}."
			puts "RESULT: #{result}"
		rescue
			puts "An error occurred: #{$!}"
		end
	end
end
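Each test method returns two values: an outcome (true or false) and any output, such as an error message. As a purely hypothetical illustration (the real Customer module is in the full code below the break; the URL, field names and messages here are made up), a test module could look something like this:

# Hypothetical sketch only: the URL, field names and messages are placeholders.
module Customer
	def self.add_customer(name, email)
		browser = Watir::IE.start("http://localhost:3000/customers/new")
		browser.text_field(:name, "customer_name").set(name)
		browser.text_field(:name, "customer_email").set(email)
		browser.button(:name, "commit").click
		if browser.text.include?("Customer was successfully created")
			[true, ""] # outcome, output
		else
			[false, "Customer was not created"]
		end
	ensure
		browser.close if browser
	end
end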

See the full test code below the break.


Austin Workshop on Test Automation (AWTA) 2009

WatirCraft are organising the Austin Workshop on Test Automation, to be held on 16-18 January 2009 in Austin, Texas.

I have been approved to attend. It means three long flights from Australia (about 25 hours each way) but I am really looking forward to attending and meeting different people who are involved in Watir.

I haven’t been to America before either, so it should be really good.

Watir podcast eight

Episode eight of the Watir Podcast was released today, in which Željko Filipin interviews me about my experience in using Watir and about my newly announced role as the Watir Wiki Master.

Check it out if you’re interested: http://watirpodcast.com/alister-scott/

Automated testing quick wins, low hanging fruit, breathing space & oxygen

I’ve seen a lot of automated testing efforts fail, and have also had to personally deal with the repercussions and expectations that have been set by these failed efforts.

For example, I clearly remember my first day at the new job I had moved 1500km to Brisbane for. I was being introduced to the Project Director, whose first words to me were:

“I have never seen automated testing succeed, so I will be watching you very closely!”

Not the best thing to hear on your first day in a job!

I’ve been thinking a fair amount about why automated testing fails to meet expectations. Sure, there is a lot of sales hype generated by test tool vendors and consultants, which doesn’t help, and there are also practitioners out there without the skills or discipline to deliver successful automated testing solutions, but there must be something else.

The problem, I believe, is that the time and effort needed to deliver a successful automated testing solution is huge. An automated testing framework might be deemed unsuccessful before it has even been given a chance to succeed! This is why I am a strong believer in first identifying some automated testing quick wins, some low-hanging fruit, pardon the idiom.

A quick win is something that requires a small amount of effort (input) for a large amount of gain (output). These are sometimes hard to find, but almost always deliver a good outcome: some breathing space.

An example I can use is a simple application monitoring script. A place I worked had a problem with the availability of a public-facing web app. Server monitoring wasn’t effective: the web/app server could be running fine while no one could log on via the web! There was no way to know the app was unavailable to users without first getting complaints via email and phone calls from unhappy people.

It only took me a few hours to develop and test a Ruby/Watir script that ran continuously to monitor the web app’s availability. If the app was unavailable, it sent an email/SMS to the people responsible for getting it running again.
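The shape of such a script is roughly this (a hedged sketch only; the URL, the ‘up’ check and the email details are placeholders, not the original code):

require 'rubygems'
require 'watir'
require 'net/smtp'

# Returns true if the app serves its logon page. All names here are placeholders.
def app_available?
	browser = Watir::IE.start("http://example.com/logon")
	browser.text.include?("Log On") # text that only appears when the app is up
rescue
	false
ensure
	browser.close if browser
end

loop do
	unless app_available?
		message = "Subject: Web app is down\n\nThe web app failed its logon check."
		Net::SMTP.start('mail.example.com') do |smtp|
			smtp.send_message(message, 'monitor@example.com', 'support@example.com')
		end
	end
	sleep 300 # check again in five minutes
end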

The script was hugely successful. Downtime was reduced drastically, and because people could now see patterns in when the app was going down, it was easier to determine the cause of the availability problem. Since the script used only free/open source software (Ruby and Watir), there were no costs and no time spent acquiring software. People were like ‘wow: we didn’t know we could do this so easily’.

I recently attended the Test Automation Workshop on the Gold Coast in Australia, and one presentation stuck in my mind. It was by Don Taylor, who used to work in my team in Canberra, and it was called “Oxygen for Functional Automated Testing”. He told us that quite a few people emailed him when the program for the workshop went out, asking: “What’s this tool called Oxygen?” But it wasn’t about a tool at all; it was about oxygen, the breathing space you need for successful automated testing.

And that’s what I consider the biggest output from these quick wins; it’s what automated testing needs to be successful. The breathing space generated by short-term quick wins has let me spend time and effort creating robust automated testing frameworks designed to be maintainable and successful in the long term. A whole heap of oxygen.

Automated testing SWOT analysis

I attended the Test Automation Workshop at Bond University on the Gold Coast (Australia) last week (LinkedIn Group). It was good to see what others in the field are doing and share my views on automated testing. In the final session, participants were asked to share their own SWOT analysis on automated testing as it currently stands. Here’s mine (remember, it’s my personal view only):

(S) Strengths

  • Testing community
  • Level of knowledge

(W) Weaknesses

  • Automated testing is WAY TOO COMPLEX: too much code, too many spreadsheets, too many system programming languages in use, too many vendorscripts
  • Requirements-based testing has flaws (see my diagram)
[Diagram: Requirements Based Testing Venn Diagram]

(O) Opportunities

  • Open source testing tools growth (Watir, Selenium, FIT)
  • Use them at home, write about them! Share your knowledge.
  • Done well, automated testing gives you breathing space to do other things.

(T) Threats

  • Management’s expectations: replacing manual testing, ‘codeless’ automated test frameworks. The focus should instead be on doing better testing, and doing it more quickly.
  • Poor practitioners give automated testing a bad name (possibly because they don’t have a personal development framework).
  • Bad metrics: don’t compare with something you wouldn’t have done anyway (e.g. “saved 10,000 hours of execution”), or build metrics around bug counts.

If you disagree (or agree) with any of these leave a comment and let me know why!

The future of testers

Bret Pettichord wrote a thought provoking blog post today that raised some interesting questions about the future of testers in automated testing:

Will there be a role for testers in the future of automated testing? Will this role be defined by developers?

I agree with a lot that Bret has to say. With the increase of new, cheaper, more open and more integrated automated testing tools, I have noticed that developers are becoming increasingly interested in, and responsible for, automated tests. Whilst traditionally automated testing tools, especially ones that test GUIs, were the responsibility of a testing team during a testing phase, these new tools can easily be integrated into software development activities.

The benefits are quite obvious: developers start making their software more testable; as their automated tests run more frequently, the tests are more likely to be kept ‘alive’; and they find more defects early, increasing quality AND velocity.

But as Bret asks, what happens to the testers in all this? Those testers who once wrote and ran those automated tests.

Like Bret, I think that testers will still have a significant role in the future of automated testing. This is because I believe that a developer and a tester have two fundamentally different perspectives. I have found that developers often have a constructive perspective, focused on building something and making sure that it works, whereas a tester has an innate deconstructive perspective, trying to break the something or prove that it won’t work. These perspectives show in the fact that testers often design more comprehensive negative tests than developers do.

But I don’t believe that having a different perspective will save a tester: it’s not that easy. I think, to survive, new testers need to adapt. And to adapt they need to be two things:

  • Technologically Savvy: Testers will need to be increasingly technical to keep up with developers who are now also doing automated testing. For example, a developer may write a neat XML API so that a tester can write tests more efficiently and effectively, but the tester will need the skills to use it.
  • Business Focused: With both developers and testers running automated tests, a key point of differentiation is business focus. Developers typically write tests for the specific functionality they have built, whereas a tester is often required to test across multiple functions. When testing these multiple functions, the tester needs to be able to demonstrate to the business what is being tested. By focusing on business scenarios (or user stories) and using business terms, it is easier to link these tests back to the business and demonstrate value.

It’s great that so much effort has been put into deliberately making Watir user friendly. It has meant that it is easy for both developers AND testers to write tests.

One area of difference is deciding how to implement a Watir framework, because I believe that some frameworks are developer focused (RSpec, Test::Unit) whereas others are more tester focused (Confluence & Rasta).

This is why I am looking forward to seeing future WatirCraft Watir offerings in the framework space. Because of the perspective thing I mentioned, I believe that it will be challenging to design a framework that suits these two different user groups.