GTAC 2013 Day Two Doggy Bag

The main theme of today’s talks was Android UI automation, with various approaches demonstrated.

Jonathan Lipps from Sauce Labs

Mark Trostler from Google started with a technical talk on JavaScript testability. He emphasized using interfaces over implementations, which means you can change the implementation while still testing against the interface. He concluded by emphasizing that writing tests first naturally results in testable code.
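The principle is language-agnostic; a minimal Java sketch of testing against an interface rather than an implementation (all names here are mine, for illustration):

```java
// Tests depend only on this interface, never on a concrete class.
interface UserStore {
    String findName(int id);
}

// Production implementation; can be swapped out without touching the tests.
class DatabaseUserStore implements UserStore {
    public String findName(int id) {
        return "looked-up-from-db"; // stands in for a real database lookup
    }
}

// Trivial in-memory implementation the tests can use directly.
class FakeUserStore implements UserStore {
    public String findName(int id) {
        return id == 42 ? "Alice" : null;
    }
}

public class UserStoreContractTest {
    // The check exercises the interface only, so the same test can run
    // against whichever implementation is handed in.
    static void assertFindsKnownUser(UserStore store) {
        if (!"Alice".equals(store.findName(42))) {
            throw new AssertionError("findName(42) should return Alice");
        }
    }

    public static void main(String[] args) {
        assertFindsKnownUser(new FakeUserStore());
        System.out.println("contract holds");
    }
}
```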

Thomas Knych, Stefan Ramsauer and Valera Zakharov from Google gave a highly entertaining presentation about Android testing at scale. This was one of my favorite talks of the conference. They highlighted that insisting on automated testing against real devices is inefficient and problematic, and that you should first run the majority of tests on emulators, which will find the majority of the bugs. This is something I have been saying for a long time and it was refreshing to hear it from a Google Android team. Ways to speed up Android emulators include using snapshots for fast restores and using x86-accelerated AVDs. Interestingly, the Google Android team ran 82 million automated Android tests on emulators in March alone (there are approximately 2.7 million seconds in March, so that’s over 30 tests a second) with only 0.15% of tests being categorized as flaky. This is partly due to a Google-only automated testing tool for Android called Espresso. Another key takeaway: if you are using physical devices, don’t glue them to a wall or whiteboard. The devices get hot, melt the glue, and get damaged when they hit the floor.

Guang Zhu (朱光) and Adam Momtaz, also from Google, talked about some historical approaches to Android automation (instrumentation, image recognition and hierarchy viewer) and how to use features in newer Android API versions (16+) to automate tests reliably.
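I believe this refers to the UI Automator framework introduced in API level 16; a minimal sketch against its original API might look like the following (the target app and selectors are illustrative):

```java
import com.android.uiautomator.core.UiObject;
import com.android.uiautomator.core.UiSelector;
import com.android.uiautomator.testrunner.UiAutomatorTestCase;

public class SettingsLaunchTest extends UiAutomatorTestCase {
    public void testOpenSettings() throws Exception {
        // Start from the home screen.
        getUiDevice().pressHome();
        // Select widgets via their accessibility properties rather than
        // screen coordinates or image matching, which is what makes this
        // approach resilient across devices and resolutions.
        new UiObject(new UiSelector().description("Apps")).click();
        new UiObject(new UiSelector().text("Settings")).clickAndWaitForNewWindow();
    }
}
```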

Jonathan Lipps from Sauce Labs demonstrated the very impressive tool Appium, which enables iOS and Android automation using WebDriver bindings, allowing you to use your language of choice with the promise of writing once and running across the two platforms. This isn’t exactly true, as the selectors will differ, but these can be defined in a module so your test code stays readable. Jonathan explained the philosophy behind the tool and even showed a quick demo running against the new Firefox OS to demonstrate its flexibility. One of the limitations mentioned was that you can only run one iOS simulator per physical Apple Mac, which limits continuous integration scalability. Overall it is a very impressive, polished tool.
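To illustrate the write-once idea with the platform-specific selectors kept in one module, here is a rough Java sketch (the capability names, locators and app path are illustrative, not from the talk):

```java
import java.net.URL;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class AppiumSketch {
    // All platform-specific locators live here, so the test body below
    // reads identically for iOS and Android.
    static By loginButton(String platform) {
        return platform.equals("iOS")
                ? By.name("Log In")                      // iOS accessibility label
                : By.id("com.example:id/login_button");  // Android resource id
    }

    public static void main(String[] args) throws Exception {
        String platform = "iOS"; // or "Android"
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", platform);    // capability names vary by Appium version
        caps.setCapability("app", "/path/to/MyApp.app"); // hypothetical app under test
        // 4723 is Appium's default port.
        WebDriver driver = new RemoteWebDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        driver.findElement(loginButton(platform)).click();
        driver.quit();
    }
}
```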

Eduardo Bravo from the Google+ team gave an interesting lightning talk about hands-on experience in testing Google+ apps across Android and iOS. They use KIF for iOS testing. Eduardo was quote-worthy, with such gems as “flaky tests are worse than no tests” and “don’t give devs a reason not to write tests”. The hermetic theme recurred in the ongoing endeavor to reduce flakiness by using hermetic environments with known canned responses to make tests deterministic. A very enjoyable talk.
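As a sketch of the canned-response idea (my example, not Eduardo’s), the app under test is pointed at a tiny local server that always returns the same payload:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Hypothetical hermetic stub: every test run sees exactly this response,
// so the UI state the test asserts on is deterministic.
public class CannedBackend {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/profile", exchange -> {
            byte[] body = "{\"name\":\"Test User\",\"circles\":3}".getBytes("UTF-8");
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Canned backend listening on :8080");
    }
}
```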

Valera Zakharov from the Google Android dev team discussed an internal tool, Espresso, which makes Android tests much more efficient and reliable, with less boilerplate code. My only complaint: don’t demo an awesome tool that isn’t open source and available for others to use.
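Espresso wasn’t public at the time (a release was said to be close; see the comments below), but for a flavor of the style, a test in the API as it was later open-sourced reads roughly like this (the activity and resource ids are hypothetical):

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.test.ActivityInstrumentationTestCase2;

public class GreeterTest extends ActivityInstrumentationTestCase2<MainActivity> {
    public GreeterTest() {
        super(MainActivity.class);
    }

    public void testGreeting() {
        getActivity();
        // Espresso synchronizes with the UI thread and pending tasks itself,
        // which removes the sleeps/retries that make UI tests flaky and slow.
        onView(withId(R.id.name_field)).perform(typeText("Steve"));
        onView(withId(R.id.greet_button)).perform(click());
        onView(withText("Hello Steve!")).check(matches(isDisplayed()));
    }
}
```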

Michael Klepikov from Google talked about using the upcoming ChromeDriver 2 server to access performance metrics from the Chrome Developer Tools, demonstrating some fancy-looking results visualized with WebPageTest. I don’t believe you need ChromeDriver 2 to do this though; the W3C Navigation Timing spec provides performance metrics right now.
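For the record, grabbing Navigation Timing metrics through plain WebDriver needs nothing more than a JavaScript call; a minimal Java sketch:

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class NavTimingSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("http://example.com/");
        // window.performance.timing is the W3C Navigation Timing API;
        // any compliant browser exposes it, no ChromeDriver 2 required.
        Long loadMs = (Long) ((JavascriptExecutor) driver).executeScript(
                "var t = window.performance.timing;"
              + "return t.loadEventEnd - t.navigationStart;");
        System.out.println("Page load took " + loadMs + " ms");
        driver.quit();
    }
}
```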

Yvette Nameth and Brendan Dhein from the Google Maps team discussed the challenge of testing large Google Maps datasets, demonstrating a risk-based approach: e.g. ensuring the Eiffel Tower is accurate is important, but the accuracy of your Gran’s farm is not.

Celal Ziftci and Vivek Ramavajjala from the University of California, San Diego presented findings from their work at Google on automatically finding culprits in failing builds. This was a highly interesting talk about creating a tool that analyzes the multiple change sets in a build and works out which is most suspicious using a couple of heuristics: the number of files changed and the distance from the root. The tool originally took six hours to perform an analysis, but they reduced this to 2–3 minutes using extensive caching. The tool supports extensible heuristics, allowing additional intelligence such as keyword analysis.
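A toy reconstruction of the ranking idea (the scoring weights here are entirely made up; the real heuristics are internal to Google):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class CulpritFinder {
    static class ChangeSet {
        final String id;
        final int filesChanged;     // heuristic 1: bigger changes are riskier
        final int distanceFromRoot; // heuristic 2: changes nearer the build's
                                    // root dependencies are riskier
        ChangeSet(String id, int filesChanged, int distanceFromRoot) {
            this.id = id;
            this.filesChanged = filesChanged;
            this.distanceFromRoot = distanceFromRoot;
        }
        double suspiciousness() {
            // Illustrative weighting only.
            return filesChanged + 10.0 / (1 + distanceFromRoot);
        }
    }

    public static void main(String[] args) {
        // The change sets that went into the broken build.
        List<ChangeSet> changes = Arrays.asList(
                new ChangeSet("change-101", 2, 7),
                new ChangeSet("change-102", 40, 1),
                new ChangeSet("change-103", 5, 3));
        ChangeSet culprit = changes.stream()
                .max(Comparator.comparingDouble(ChangeSet::suspiciousness))
                .get();
        System.out.println("Most suspicious: " + culprit.id);
    }
}
```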

Katerina Goseva-Popstojanova talked about academic analysis of software product line quality. She highlighted that open source software projects are the Promised Land for academia in that the code is fully accessible and can be used for academic analysis and research.

Claudio Criscione from Google discussed Cross-Site Scripting (XSS) vulnerabilities and some automated approaches to detecting them.

During the afternoon I went for a tour of the Google New York City office here in Chelsea. All I can say is wow. The view from the 11th floor roof top balcony was very nice too (see pics below).

Google NYC Balcony

Google NYC View

A very enjoyable and smooth conference; well done to all involved in organizing it.

Author: Alister Scott

Alister is an Excellence Wrangler for Automattic.

4 thoughts on “GTAC 2013 Day Two Doggy Bag”

  1. Appium is a nice tool in progress. I like its philosophy/design. It could be adapted to offer things like desktop UI automation via the WebDriver API rather than a proprietary/fixed tool/language.


  2. Actually we were planning on releasing Espresso & some of the other tools at the conference… Unfortunately there were some process roadblocks we need to route around first. :( Soon…. very soon!


  3. WRT ChromeDriver and performance metrics… A few things:

    – You can absolutely collect Nav Timings, or in newer/future browsers Resource Timings, Nav Timings-2, User Timings, but:

    a) They are not nearly as rich and deep as the WebKit Inspector profile. They roughly correspond to what you get from the Network + Page events, but Timeline gives you so much more.

    b) They are page-load oriented, so if a testcase goes through several pages, e.g. submits a form, the previous page’s Nav Timings get lost, and finding a way to grab Nav Timings reliably for all of the traversed pages is not as easy as just doing something before/after each testcase, transparently to the test itself.

    – People have been doing a side connection to DevTools (or the Firefox debugger) while a test runs for a long time. It doesn’t matter if it’s a WebDriver test or not, and this is how e.g. WebPageTest’s NodeJS agent does it now. That was not the point of the talk/demo (I demo’d that method at last year’s Selenium Conference, BTW :)

    The power of *integrating* performance instrumentation into WebDriver itself, as part of its standard Logging API, is that it lets people/organizations very easily enable performance measurements in their existing WebDriver tests that run as part of an existing continuous integration system. The WD Logging API is a lot easier to use than a side connection to DevTools, and definitely easier than integrating e.g. WebPageTest-proper latency tests into CI, especially for integration tests that launch a server as part of the test. In the demo I used WebPageTest only to upload and visualize a test result; WebPageTest itself did not run the test — here the WebDriver test would run as part of whatever existing custom toolchain.

    Exposing the browser’s detailed instrumentation via the WebDriver Logging API is currently implemented in chromedriver2 only. I certainly hope that other WD implementations do that as well, and it’s definitely possible technically, just a matter of someone hooking it up.

    The hope is that now that performance instrumentation is so easy to enable for existing functional/integration WebDriver tests that run as part of existing CI, there is no excuse not to do it. Again, pro tips — run multiple iterations for statistical validity, and use the JS APIs console.time, console.timeEnd and console.timeStamp, which inject Timeline events, then extract the duration of arbitrary custom intervals on the page.
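    A minimal sketch of enabling that performance log via the Selenium Java bindings (as of Selenium 2.x with chromedriver2; treat it as illustrative):

    ```java
    import java.util.logging.Level;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.logging.LogEntry;
    import org.openqa.selenium.logging.LogType;
    import org.openqa.selenium.logging.LoggingPreferences;
    import org.openqa.selenium.remote.CapabilityType;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class PerfLogSketch {
        public static void main(String[] args) {
            // Ask chromedriver2 to surface DevTools events through the
            // standard WebDriver Logging API.
            LoggingPreferences prefs = new LoggingPreferences();
            prefs.enable(LogType.PERFORMANCE, Level.ALL);
            DesiredCapabilities caps = DesiredCapabilities.chrome();
            caps.setCapability(CapabilityType.LOGGING_PREFS, prefs);
            WebDriver driver = new ChromeDriver(caps);

            driver.get("http://example.com/");
            // Each entry is a JSON-encoded DevTools (Network/Page/Timeline) event.
            for (LogEntry entry : driver.manage().logs().get(LogType.PERFORMANCE)) {
                System.out.println(entry.getMessage());
            }
            driver.quit();
        }
    }
    ```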

