Testing your iOS app’s share functionality

I’ve been testing an iOS app I’m working on, and since it’s not yet public, I discovered a neat way to test the Facebook/Twitter iOS share functionality without publicly revealing anything about our app.

For Facebook, you can share with the audience set to “Only Me”, which actually posts it to Facebook but only you can see it. You can then check what the share looks like in your Facebook account.

[Image: Facebook share]

Twitter is a little trickier as there are no per-tweet privacy settings, so what I did was create a test Twitter account and set it to private (protected tweets) with zero followers. That way I can post to it however I like without the risk of anyone actually seeing it.

[Image: Twitter share]

There are two benefits to these little tricks: first, you can keep any test functionality or apps under wraps until they are actually released; and second, you won’t spam your networks with test messages as you test.

Offline apps

I recently saw a tweet linking to an old (2007), crudely named post by DHH on offline access:

“The idea of offline web applications is getting an undue amount of attention. Which is bizarre when you look at how availability of connectivity is ever increasing. EVDO cards, city-wide wifis, iPhones, Blackberry’s. There are so many ways to get online these days that the excitement for offline is truly puzzling.”

Six and a half years later, Team Hoodie wrote this post about the idea of Offline First:

“Frequently not having any data connection in even the wealthiest and most developed cities of the world has led us to conclude that no, the mobile connectivity/bandwidth issue isn’t just going to solve itself on a global level anywhere in the near future.”

“We can’t keep building apps with the desktop mindset of permanent, fast connectivity, where a temporary disconnection or slow service is regarded as a problem and communicated as an error.”

I agree with Team Hoodie that offline is an often overlooked requirement: especially here in geographically sparse Australia where we get patchy mobile access, and lose all data connectivity as soon as we travel abroad (due to exorbitant roaming charges).

One of the biggest benefits I see in native mobile apps over HTML5 applications is their offline capabilities. For example, I love the TripAdvisor City Guides app, which allows me to cache an entire copy of TripAdvisor for a city on my mobile device so I can use it when roaming abroad without any data connectivity or charges. Implementing the same thing on their web site would not be trivial, and may not even be possible. Offmaps is another hugely popular app that removes reliance on being online.

It’s easy to assume that a plane is the only place we lack connectivity (when in fact these days we probably have some there anyway), and I personally would love to embrace the offline-first development philosophy.

Using appium in Ruby for iOS automated functional testing

As previously explained, I recently started on an iOS project and have spent a bit of time comparing iOS automation tools, choosing Appium as the superior tool.

The things I really like about Appium are that it is language/framework agnostic as it uses the standard WebDriver WIRE protocol, it doesn’t require any modifications to your app, it supports testing web views (also known as hybrid apps), and it supports Android, which matters because we are concurrently developing an Android application (it also supports OSX and Firefox OS, but we aren’t developing for those yet). There isn’t another iOS automated testing tool that I know of that ticks that many boxes for me.

Getting Started

The first thing to do is download the appium.app package from the Appium website. I had an issue with the latest version (0.11.2) launching the server, which can be resolved by opening the preferences and checking “Override existing sessions”.

You run the server from inside appium.app; it takes your commands and relays them to the iOS simulator. There’s also a very neat ‘inspector’ tool which shows you all the information you need to know about your app and how to identify elements.

Note: there’s currently a problem with XCode 5.0.1 (the latest version as I write) which means Instruments/UIAutomation won’t work at all. You’ll need to downgrade (uninstall/reinstall) to XCode 5.0 to get Appium working.

Two Ruby Approaches

This confused me a little at the start, but there are actually two vastly different ways to use Appium in Ruby.

1) Use the standard selenium-webdriver gem

If you’re used to using WebDriver, like me, this will be the most straightforward approach (this is the approach I have taken). Appium extends the API to add different gestures by calling execute_script from the driver, so all other commands stay the same (for example, find_element).

2) Use the appium_lib library

There is a Ruby gem appium_lib that has a different API to the selenium-webdriver gem to control appium. I don’t see any massive benefits to this approach besides having an API that is more specific to app testing.

Using Selenium-WebDriver to start appium in ruby

Launching an app with Appium is as simple as defining some capabilities with a path to the .app file you have generated using XCode (this gets built into a deeply nested folder, so it helps to write the location to a file as part of your build and read it from that file in your tests).

capabilities = {
  'browserName' => 'iOS',
  'platform'    => 'Mac',
  'version'     => '6.1',
  'app'         => appPath   # path to the .app built by XCode
}
# The Appium server listens locally on port 4723 by default
driver = Selenium::WebDriver.for :remote,
  desired_capabilities: capabilities,
  url: "http://127.0.0.1:4723/wd/hub"

Locating elements

Once you’ve launched your app, you’ll be able to use the Appium inspector to see the element attributes you can use in Appium. Name is a common attribute, and if you find that it’s not being shown, you can set an accessibilityIdentifier property in your Objective C view code, which will flow through to Appium. This makes for much more robust tests than relying on labels or xpath expressions.

driver.find_element(:name, "ourMap").displayed?

Enabling location services for appium testing

This got me stuck for a while, as there’s quite a bit of conflicting information about how to handle the location services dialog with Appium. Whilst you should be able to interact with it as a normal dialog in the latest version of Appium, I would rather not see it at all, so I wrote a method that copies a plist file with location services already enabled to the simulator at the beginning of the test run. It’s quite simple (you can grab the clients.plist from a simulator where you have manually enabled location services):

require 'fileutils'

def copy_location_services_authentication_to_sim
  # clients.plist comes from a simulator where location services was already
  # authorised; copying it into locationd pre-authorises the app under test
  source = "#{File.expand_path(File.dirname(__FILE__))}/clients.plist"
  destination = "#{File.expand_path('~')}/Library/Application Support/iPhone Simulator/7.0/Library/Caches/locationd"
  FileUtils.cp_r(source, destination, :remove_destination => true)
end
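
Calling it at the beginning of the test run, before the driver launches the simulator, might look like this (a sketch):

# Pre-authorise location services, then start the driver as shown earlier
copy_location_services_authentication_to_sim
driver = Selenium::WebDriver.for :remote,
  desired_capabilities: capabilities,
  url: "http://127.0.0.1:4723/wd/hub"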

Waiting during appium tests

This is exactly the same as selenium-webdriver. There’s an implicit wait, or you can explicitly wait like so:

driver.manage.timeouts.implicit_wait = 10
wait = Selenium::WebDriver::Wait.new :timeout => 30
wait.until { driver.find_element(:name, 'monkeys').displayed? }
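
If you find yourself waiting for the same kinds of elements repeatedly, you could wrap this in a small helper (my own convention, not part of Appium or selenium-webdriver):

# Wait for an element to be displayed, then return it for further use
def wait_for_displayed(driver, how, what, timeout = 30)
  wait = Selenium::WebDriver::Wait.new :timeout => timeout
  wait.until { driver.find_element(how, what).displayed? }
  driver.find_element(how, what)
end

wait_for_displayed(driver, :name, 'Sign In').click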

Mobile gestures

The obvious difference between a desktop web browser and a mobile app is gestures. Appium adds gestures to WebDriver using execute_script. I recommend using the percentage method (0.5 etc.) instead of the pixel method, as it is more resilient to UI changes.

For example:

driver.execute_script 'mobile: tap', :x => 0.5, :y => 0.5

or

b = driver.find_element :name, 'Sign In'
driver.execute_script 'mobile: tap', :element => b.ref
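
Swipes work the same way; here’s a sketch using percentage-based coordinates (the exact parameters accepted can vary between Appium versions, so treat this as illustrative):

# Swipe up the middle of the screen using percentage coordinates
driver.execute_script 'mobile: swipe',
  :startX => 0.5, :startY => 0.8,
  :endX => 0.5, :endY => 0.2,
  :duration => 0.8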

Testing Embedded Web Views

The native and web views seamlessly combine so you can use the same find_element method to find either. The appium.app inspector displays the appropriate attributes.
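
For example, tapping a link rendered inside an embedded web view looks just like tapping a native element (‘Terms and Conditions’ is a made-up link name):

# Find and tap a link inside the embedded web view
driver.find_element(:name, 'Terms and Conditions').click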

Note: I can’t seem to execute a gesture (e.g. swipe) over a web view. I don’t know whether this is a bug or a limitation of Appium.

Summary

I have found that using the familiar selenium-webdriver gem with Appium has been very powerful and efficient. Being able to open an interactive prompt (pry or irb) and explore your app using the selenium-webdriver library and the appium.app inspector is particularly handy, as you can script on the fly. Whilst Appium still seems relatively immature, it is a very promising approach to iOS automation.
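
For example, dropping into pry from a small script (after creating the driver as above) lets you try locators and gestures interactively before committing them to a test; pry is assumed to be installed (gem install pry):

require 'pry'
# With the Appium server running and `driver` created as above, open an
# interactive prompt and try things like:
#   driver.find_element(:name, 'Sign In').displayed?
binding.pry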

Now to get watir-webdriver to work with appium.

The current state of iOS automated functional testing

I’ve been working on an iOS project and have been looking for a suitable automated functional test library to drive the iOS GUI. I must say that automated iOS testing feels like automated web testing did about 10 years ago: lots of different tools taking different approaches, and all approaches quite flaky!

Through research I came across quite a few different tools that support iOS automated functional testing, but I needed to work out which one is best.

Whilst I often advocate writing functional tests in the same language as your codebase, in the case of iOS and Objective C, I think using a more lightweight language like Ruby has its own advantages (I don’t really like Objective C).

The one thing I really dislike with a lot of these iOS functional test tools is how they are married to Cucumber and encourage users to write tests like “I tap button x” and “I scroll through list y”. These types of tests are much harder to read as they express implementation over intention and should be avoided.

I will also be writing tests for an Android app so being able to use the same tool-set is a great advantage to me.

I also personally prefer an approach where I don’t need to modify the core behavior of my app to run tests against it (designing for testability is completely different and vital). Some approaches embed some sort of server in your app to receive automation commands, which you must then remove before release because it uses undocumented Apple APIs that will lead to App Store rejection. Being able to run tests against the same app that you submit to the App Store means you can be more confident you have tested the right thing.

Finally, the iOS app I am working on uses embedded web views to display dynamic content from the server, so it is vital that the test tool can interact with these. Support for this is actually very rare in iOS automation frameworks.

Here’s the list of iOS functional automation tools I am aware of and how they stack up:

  • Tool: Frank
    • Language: Ruby
    • Test Framework: Cucumber
    • Supports other mobile platforms: Also supports OSX apps
    • Approach: Requires you to embed a symbiote server in your app and uses undocumented APIs
    • Deploy same app to store: NO
    • Supports testing web-views?: NO (only via JavaScript calls)
  • Tool: KIF
    • Language: Objective C
    • Test Framework: OCUnit/SenTest
    • Supports other mobile platforms: NO
    • Approach: Modifies your app to use undocumented APIs
    • Deploy same app to store: NO
    • Supports testing web-views?: NO
  • Tool: Subliminal
    • Language: Objective C
    • Test Framework: OCUnit/SenTest
    • Supports other mobile platforms: NO
    • Approach: Embeds into your project but uses UIAutomation instead of undocumented APIs
    • Deploy same app to store: NO
    • Supports testing web-views?: NO
  • Tool: Zucchini
    • Language: Custom DSL (CoffeeScript/Ruby)
    • Test Framework: Custom
    • Supports other mobile platforms: NO
    • Approach: Generates UIAutomation JavaScript that is executed against your app
    • Deploy same app to store: YES
    • Supports testing web-views?: NO
  • Tool: Calabash
    • Language: Ruby
    • Test Framework: Cucumber
    • Supports other mobile platforms: Also supports Android apps
    • Approach: Requires you to embed a server in your app to control it
    • Deploy same app to store: NO
    • Supports testing web-views?: YES
  • Tool: Appium
    • Language: Ruby, C#, Java, JavaScript, Objective C, PHP, Python, Perl, Clojure
    • Test Framework: Agnostic
    • Supports other mobile platforms: Also supports Android
    • Approach: Uses Instruments to control app using the standard WebDriver WIRE protocol
    • Deploy same app to store: YES
    • Supports testing web-views?: YES
  • Tool: ios-driver
    • Language: Ruby, C#, Java, JavaScript, Objective C, PHP, Python, Perl, Clojure
    • Test Framework: Agnostic
    • Supports other mobile platforms: NO
    • Approach: Uses Instruments to control app using the standard WebDriver WIRE protocol
    • Deploy same app to store: YES
    • Supports testing web-views?: YES

I’ll let you guess which tool I selected; my next blog post will cover how to get started with it.

Tips for testing iOS app accessibility using VoiceOver

I really enjoy testing iOS app accessibility using VoiceOver, but for newbies it can be a little tricky to get started. Here are some testing tips:

Use VoiceOver on a real iOS device to test accessibility

You can’t use VoiceOver on the iOS simulator, so you’ll need to test your app on a real iOS device. That’s arguably a good thing, because VoiceOver relies so heavily on input gestures that it is best mastered on a physical device. If you are creating an iPhone application and don’t have access to an iPhone, you can use a cheaper iPod touch for VoiceOver accessibility testing.

Use triple click home button to enable/disable VoiceOver

The first time you use VoiceOver is quite confusing, as it fundamentally changes the way the operating system behaves. You can set an Accessibility Shortcut in the Accessibility menu of iOS so that triple-clicking the home button toggles VoiceOver on/off. This is handy when you get stuck, as you don’t need to navigate the menus with VoiceOver on just to turn it off.

[Image: Triple Click Home VoiceOver shortcut in iOS 7]

Master the gestures

VoiceOver has completely different gestures from standard gestures, so you’ll want to practice them. The most common gesture is swipe left/right to select different elements, which then require a double tap to activate (the equivalent of a standard tap). A two-finger swipe up reads all elements from the top of the screen, and a two-finger swipe down reads from the current position. There’s a useful guide to VoiceOver gestures available on the iOS developer site.

Use Screen Curtain

If you want to test that your app is truly accessible you can close your eyes, but if you are like me and might peek, use Screen Curtain (a three-finger triple tap), which blanks the screen entirely but leaves VoiceOver running, so you can use your app without a visual display. Neat.

Summary

The best way to get started with iOS accessibility testing is to dive in. If you’re like me, you’ll pick it up quickly and find it fun to do.

Tips for making your iOS app accessible

I’ve been doing some work recently on native iOS app accessibility and have started to see common issues that can be resolved with a consistent approach.

It’s important to note that accessibility is enabled by default in iOS app development, and if you don’t do anything crazy your app should be mostly accessible, but it is still worth checking and keeping an eye out for common mistakes.

I do all my testing on a device using VoiceOver, but I won’t detail that here; instead, I will write a separate blog post with some tips on VoiceOver testing.

Here are some tips for making your iOS app accessible:

Enable form field tabbing

A VoiceOver user moves between elements using gestures, so on a form you should make it easy to move between the fields by making the ‘return’ action on each field move to the next field, or submit the form on the last field. Apple goes above and beyond this in its own native apps by putting accessible “Previous”, “Next”, and “Done” elements above the keyboard on forms, which are recognized by VoiceOver and make it super easy to navigate and submit a form.

[Image: Apple form accessibility example]

Make embedded UIWebViews accessible

If you embed any HTML content in your native app then you should mark the UIWebView as an accessible element but, very importantly, don’t mark its parent as accessible, otherwise the UIWebView won’t be accessible via VoiceOver. Once you’ve done this, VoiceOver reads the content the same way it reads a web page in Safari.

Make UIPageControl page indicators accessible

When you have multiple horizontal pages in a UIPageControl, the page indicators are the dots that appear at the bottom of the control that indicate which page you are on as you swipe left/right through the pages. VoiceOver uses the swipe left/right gestures for navigation so a VoiceOver user won’t be able to switch between your pages unless they use the page indicators.

[Image: iOS page indicators]

Page indicators aren’t automatically accessible; you must do two things. First, set the accessibility trait to UIAccessibilityTraitAdjustable, and then implement accessibilityIncrement and accessibilityDecrement methods that change the page. This means a VoiceOver user can focus on the element and swipe up/down to move between pages.

Ensure appropriate color contrast

This isn’t specific to VoiceOver testing, but is about ensuring your application is accessible to visually impaired or color-blind users. An example I have seen recently is a black ‘copy/paste’ popup on a black form.

[Image: black popup on a black form]

Summary

Making an iOS app accessible isn’t difficult, because Apple has done a lot of work to ensure accessibility is built into the development platform. The key is to build these features in as you go and to continually test on a device with VoiceOver enabled.

More Information

There’s some great information here and here about iOS accessibility from Apple.