Disgraceful degradation

Old browsers are a headache for websites: to develop for, to test, you name it, they’re nothing but bad news.

Thankfully, modern browsers like Google Chrome, Apple Safari and Mozilla Firefox are not only more standards-compliant, but are generally updated automatically, so there will be far fewer legacy versions in the wild.

There are two techniques I’m familiar with for catering to older browsers: graceful degradation and progressive enhancement. Essentially these achieve the same outcome, sites that still work on older browsers, but they take different approaches to the problem.

Graceful degradation is building a site optimized for modern browsers, then adding fallbacks so that it degrades gracefully (but still functions) when accessed via an older browser.

Progressive enhancement is building a simple site that functions everywhere, then adding enhancements that take advantage of modern browser technology where it’s available.

Whilst these achieve similar outcomes, I believe we’ve reached a tipping point where most people use non-Microsoft, non-legacy browsers, so graceful degradation is the better bet in these times.

At work today, we noticed a problem where a Google font wasn’t loading on a dev machine, and our site rendered in Comic Sans (the font we all love to hate), which immediately gave me a great idea, disgraceful degradation: displaying the content of our site in Comic Sans if you’re an IE8 or below user. That should make them upgrade.

Where is the ‘story’ in user stories?

There’s an old Jerry Weinberg quote:

“no matter what the problem is, it’s always a people problem”

which pretty much describes every project I’ve worked on over the years. Lately, I’ve particularly noticed that most of the tough problems evident on projects are more people or business problems than technology problems, which makes me think it’s worthwhile for me to continue my exploration of the business/user end of my list of software development roles.

BA = Business Analyst
UX = User Experience
ET = Exploratory Tester
TT = Technical Tester
Dev = Software Developer

In this vein, I’ve recently been trying to understand how to better articulate user stories, since one day I’d love to work as a business analyst.

Most nights I read stories to my (almost) three year-old son as a nice way to end the day. Lately I have been making up my own impromptu stories to keep things interesting. I have really enjoyed making up stories on the spot; I think I’d be a good BA.

But thinking about user stories along with bedtime stories immediately raises a question: where is the ‘story’ in user stories?

Most user stories at work sound something like this: “As a user, I want to log onto the system, so that I can access my information”. What a shitty story! Sorry, but seriously, if I told this story to my two year old son, he’d die of boredom!

I’ve spent a fair amount of time reading about user stories but I still can’t find out why they’re actually called stories, because I don’t think they are actual stories:

sto·ry/ˈstôrē/
Noun:
An account of imaginary or real people and events told for entertainment: “an adventure story”.

The closest thing I have found to actual user stories is the concept of ‘soap opera’ testing scenarios, which outline implausible yet possible scenarios:

“A man (Chris Patterson) and his wife (Chris Patterson) want to take their kids (from previous marriages, Chris Patterson, a boy, and Chris Patterson, a girl) on a flight from San Francisco to Glasgow to San Jose (Costa Rica) to San Jose (California) back to San Francisco. He searches for flights by schedule. He’s a Super-Elite-Premium frequent flyer, but he doesn’t want the upgrade that the system automatically grants him so that he can sit with his wife and kids in economy class. He requires a kosher meal, his wife is halal, the boy is a vegetarian, and the girl is allergic to wheat. He has four pieces of luggage per person (including two pairs of skis, three sets of golf clubs, two 120 lb. dogs, and three overweight suitcases), where his frequent flyer plan allows him (but only him) to take up to four checked items, but the others can only take two each. He gets to the last step on the payment page before he realizes that he has confused San Jose (California) for San Jose (Costa Rica), so the order of the itinerary is wrong. The airline cancels the flight after it has accepted his bags, and reroutes him on a partner. The partner cancels the flight (after it has accepted the bags) to San Jose (California) so it reroutes him to another competitor, who cancels the flight (after accepting the bags) to San Jose (Costa Rica) and reroutes him to another partner, who goes bankrupt after it has accepted the bags for the San Francisco flight.”

~ Michael Bolton

Now that’s a real user story!

So, I think we have two choices on the user stories front: we can either make our user stories read like real, juicy stories, or at least start calling them something else!

The color of acceptance is gray

James Shore recently wrote some brilliant words about acceptance testing:

I think “acceptance” is actually a nuanced problem that is fuzzy, social, and negotiable. Using tests to mediate this problem is a bad idea, in my opinion. I’d rather see “acceptance” be done through face-to-face conversations before, after, and during development of code, centering around whiteboard sketches (earlier) and manual demonstrations (later) rather than automated tests.

To rephrase: “acceptance” should be a conversation, and it’s one that we should allow to grow and change as the customer sees the software and refines her understanding of what she wants. Testing is too limited, and too rigid. Asking customers to read and write acceptance tests is a poor use of their time, skill, and inclinations.

This is pretty much where my head is at right now around automating acceptance tests. Automated tests are black and white, acceptance is gray.

“The color of truth is gray.”
~ André Gide

I’d rather have a handful of end-to-end automated functional tests that cover the typical journey of a user than a large set of acceptance tests constantly in a state of flux as the system is developed and acceptance is defined and redefined.

We need to take feedback from the customer that we are building the right thing and ensure our automated tests model this feedback, rather than making the customer responsible for specifying the actual tests.

Mobile apps still need automated tests

Jonathan Rasmusson recently wrote what I consider to be quite a contentious blog post about iOS application development titled “It’s not about the unit tests”.

“…imagine my surprise when I entered a community responsible for some of the worlds most loved mobile applications, only to discover they don’t unit test. Even more disturbing, they seem to be getting away with it!”

Whilst I agree with the general theme of the blog post, which is to change your mind and challenge assumptions:

“All I can say is to keep growing sometimes we need to challenge our most cherished assumptions. It doesn’t always feel good, but that’s how we grow, gain experience, and turn knowledge into wisdom.”

“The second you think you’ve got it all figured out you’ve stopped living.”

I don’t agree with the content.

Jonathan’s basic premise is that you can get away with little or no unit testing for your iOS application for a number of reasons, including developing for a smaller screen size, no legacy code, one language, visual development, and a mature platform. But the real reason iOS developers get away with it, he argues, is that they care.

“These people simply cared more about their craft, and what they were doing, than their contemporaries. They ‘out cared’ the competition. And that is what I see in the iOS community.”

But in writing this post, I believe he missed two critical factors when deciding whether to have automated tests for your iOS app.

iOS users are unforgiving

If you accidentally release an app with a bug, see how quickly you’ll start getting one-star reviews and nasty comments in the App Store. See how quickly new users will uninstall your app and never use it again.

The App Store approval process is not capable of supporting quick bug fixes

Releasing a new version of your app that fixes a critical bug may take you 2 minutes (you don’t even need to fix a broken test or write a new test for it!), but it then takes Apple 5–10 business days to release it to your users. This doesn’t stop the one-star reviews and comments destroying your reputation in the meantime.

Case in Point: Lasoo iPhone app

I love the Lasoo iPhone app, because it allows me to read store catalogs on my phone (I live in an apartment block and we don’t get them delivered). Recently I upgraded the app and then tried to use it but it wouldn’t even start. I tried the usual close/reopen, delete/reinstall but still nothing. I then checked the app store:

Lasoo iPhone app reviews

Oh boy, hundreds of one-star reviews within a couple of days: the app is stuffed! I then checked Twitter to make sure they knew it was broken, and to my surprise they’d fixed it immediately but were waiting for Apple to approve the fix.

I can’t speculate on whether Lasoo care or not about their app, but imagine for a second if they had just one automated test, one automated test that launched the app to make sure it worked, and it was run every time a change, no matter how small, was made. That one automated test would have saved them from hundreds of one-star reviews and having to apologize to customers on Twitter whilst they waited for Apple to approve the fix.
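The idea of a single launch check can be sketched in plain Ruby. AppDriver here is a hypothetical stand-in for whatever automation tool actually drives the app, not a real library; the shape of the check is the point, not the driver itself.

```ruby
# A minimal 'does the app even launch?' smoke test, sketched in plain Ruby.
# AppDriver is a hypothetical stand-in for a real UI automation tool.
class AppDriver
  def launch
    # A real driver would boot the app on a device or simulator here.
    @running = true
  end

  def running?
    !!@running
  end
end

# Returns true when the app comes up; run this on every change, however small.
def launch_smoke_test
  driver = AppDriver.new
  driver.launch
  driver.running?
end

puts launch_smoke_test ? 'smoke test passed' : 'smoke test FAILED: app did not launch'
```

Even a check this shallow, wired into every build, would catch the ‘app won’t start at all’ class of bug before it reaches the App Store.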

Which raises another point:

“[Apple] curate and block apps that don’t meet certain quality or standards.”

The Lasoo app was so broken it wouldn’t even start, so how did it get through Apple’s approval process for certain quality or standards?

Just caring isn’t enough to protect you from introducing bugs

We all make mistakes, even if we care. That’s why we have automated tests, to catch those mistakes.

Not having automated tests is a bit like having unprotected sex. You can probably get away with it forever if you’re careful, but the consequences of getting it wrong are pretty dramatic. And just because you can get away with it doesn’t mean that other people will be able to.

Why hot-desking is a bad idea

Hot-desking (aka hotelling) in open plan offices seems to be growing in popularity, and why not, since it seems to make sense from a financial and collaborative viewpoint. But I believe it’s actually a bad idea. Here’s why:

It’s unhygienic: Most hot-desk arrangements I have seen involve a thin client PC (e.g. Windows Thin PC) on each hot-desk, used to let the current user access their computer session wherever they log in. Since the average keyboard has sixty times more germs than a toilet seat, I feel disgusted every time I sit down at a hot-desk and am expected to use the filthy keyboard all day (much like if someone asked me to set up my computer on a toilet seat).

It’s confusing: Not knowing where someone is each day is particularly confusing, especially for new starters who are getting to know people. Sure, IM solves this to some degree, but I’ve still spent time roaming the office floors looking for people when I don’t know where they’re sitting that day.

It doesn’t actually work: Even though organizations adopt hot-desking to cut down on the number of desks and get people sitting together, I have found that people still get established at certain desks when they know they’ll be working there for some time, because they can’t be bothered packing up their things and setting up at a new desk each day. The only time these people seem to get displaced is when they go on leave and someone else has to sit at their ‘hot desk’ amongst the stuff they have left behind. The lack of dedicated space has actually been shown to make employees feel isolated and teamless, amongst other things:

A study released by the University of Sheffield in the UK shows it diminishes the connection between colleagues, and the scattered locations make it difficult for people to communicate with each other.

It continues the obsession with open plan: I seriously don’t like open plan offices and don’t understand why software development workplaces continue to foster them. They encourage constant interruption and distraction which inhibits productivity. Paul Graham explained it best, back in 2004:

After software, the most important tool to a hacker is probably his office. Big companies think the function of office space is to express rank. But hackers use their offices for more than that: they use their office as a place to think in. And if you’re a technology company, their thoughts are your product. So making hackers work in a noisy, distracting environment is like having a paint factory where the air is full of soot.

So what is the answer?

Progressive software companies like Campaign Monitor provide dedicated offices to each team member and a large kitchen table to ensure everyone eats lunch together every day.

I like the idea of providing a dedicated office to each team member to work quietly without disruption, plus separate (sound-proofed) open areas for team collaboration and socializing. The open areas must be easy to book or use for an impromptu discussion, and must be clean and connected, to maximise productivity.

As Kelly Executive Recruitment GM Ray Fleming says:

Productivity and motivation are maximised when employees have their own workspace. It helps them to feel part of the organisation and solidifies their position in the team, and businesses need to keep this in mind. Businesses also need to be aware that shifting to hot-desking just to save money may drive some employees to look elsewhere for employment.

The sliding scale of client, stakeholder and management engagement

It’s a beautiful colour, grey, and as soon as you put it into black and white it gets lost.
~ Ian Thorpe

I’ve been thinking about client engagement a lot lately: how much engagement should you aim for when working for someone? Whether that someone be a client, a sponsor of your project, or your boss.

I’ve come to realize that engagement isn’t black and white, engaged or not. It’s a sliding scale, and like most things in life, the sweet spot is somewhere in the middle, not at the outer ends.

On the far right you have a client who is too engaged, to the point of being a micro-manager and a hindrance to your progress. This isn’t ideal, but you can manage the micro-management, and it’s still possible to get a good deal done in this circumstance by using the high level of engagement to your advantage (to clear roadblocks, for example).

On the far left is what I consider to be the most dangerous spot: a client who isn’t engaged whatsoever. The benefit is that you have the freedom to do whatever you want; the obvious downside is that you’re inevitably going to fail, because you won’t know what your client actually wants, and it will eventually become apparent that you haven’t delivered. Whilst it’s easy to become complacent in this situation, it should be avoided at all costs: raise the red flag early and often, and if you can’t engage your client, get a new client, or leave!

I also believe this same scale applies to child development. As the father of two young boys I spend a fair amount of my leisure time in playgrounds, where I observe other parenting styles and interactions. On the far right of this scale fit the parents who hold their kids (or make them wear helmets) as they try to climb the jungle gym. On the far left are the parents who read the paper or play with their latest iPhone 5, oblivious to the amazing physical and mental development they are missing meters away. In the sweet spot are those amazing parents whose children are confident to independently learn, whilst knowing their parents are always there if they need help or something goes wrong.

Summary

If you can aim for somewhere in the middle of my scale, where your client is engaged enough to know broadly what they want, but not so engaged that they tell you what to do and how to do it, that’s the sweet spot: that’s where you should be.

What are your thoughts? Do you like one particular end or like living in the grey spots like me?

RailsMelon: how to set up Rails 3 Reference Data using Seeds

Preamble

I’ve started learning Ruby on Rails by building a web application for my own startup (sorry but it’s still too early to release full details of what it is).

Like all the things that I do, I like to share my learnings. As I will be doing more Rails stuff than Watir stuff, I was thinking of starting a new blog called RailsMelon, but I changed my mind and have decided to blog about Rails things here alongside Watir and testing things. I will prefix any Rails-related posts with RailsMelon so those of you who aren’t interested in such things can safely ignore them. If the Rails content becomes too strong or I receive too many complaints, I will move the content to a separate RailsMelon blog.

Blog All The Things

Rails 3 Reference Data using Seeds

I’ve been trying to work out the best way to load reference data in a Rails 3 application. The inbuilt seeding mechanism (seeds.rb run via rake db:seed or db:setup) seemed the obvious choice to begin with, but then I started seeing examples of people using seeds.rb to load test data as well, which confused me. When I refer to ‘reference data’ I mean data that needs to exist for your application to work, whether in development, test, or ultimately production. Test data is different: I would only use it for testing, and I would never want to load it into production.

I decided upon using seeds.rb (and rake db:seed) but only to specify reference data I will use in production. Test data will be loaded in other ways (which I will discuss in a future post).

I was confused initially because the Agile Web Development with Rails book actually suggests deleting all the data before re-creating it when using seeds:

Product.delete_all
Product.create(title: 'Programming Ruby 1.9')

If I do this for reference data, it will continually delete and re-create the reference data, causing problems for other models that have related data.

To avoid this, I thought I could check whether the object already exists, and only create it if it didn’t:

Product.create(title: 'Programming Ruby 1.9') unless Product.find_by_title('Programming Ruby 1.9')

This is slightly better, but I then realized that if you have a uniqueness constraint on a property of your object, you don’t even need to check whether the object exists: create will only save it if it’s not already there (but without a uniqueness constraint, it will create your object every time).

Product.create(title: 'Programming Ruby 1.9')

If you ran this on a blank database ten times, you would only get one ‘Programming Ruby 1.9’ product, because title is unique, and each run of rake db:seed would succeed, which is exactly what we want.
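If relying on a validation silently rejecting duplicates feels too implicit, Rails 3’s dynamic finders express the same idempotency directly. A sketch for seeds.rb, assuming a Product model with a title attribute:

```ruby
# db/seeds.rb — find_or_create_by_title looks the record up first and
# only creates it when no match exists, so re-running rake db:seed is safe.
Product.find_or_create_by_title('Programming Ruby 1.9')
```

This also works without a uniqueness validation on title, since the lookup itself prevents the duplicate.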

Making it prettier for multiples

With reference data I find there are often lots of items, so instead of calling .create methods line after line, you can store the items in an array or hash and iterate over them:

A simple list of colors:

colors = %w(blue red green orange brown)
colors.each { |color| Color.create name: color }

A more detailed list of users:

users =
    [
      { given_name: 'Admin', surname: 'User', email: 'admin@test.com', password: 'password', password_confirmation: 'password', admin: true },
      { given_name: 'Standard', surname: 'User', email: 'user@test.com', password: 'password', password_confirmation: 'password', admin: false }
    ]
users.each { |user| User.create user }
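Note that these loops only stay repeatable if each model rejects duplicates; for the users above, that means a uniqueness validation on email. A sketch of that validation, assuming a User model with an email column:

```ruby
# app/models/user.rb — without a uniqueness validation, every run of
# rake db:seed would create duplicate admin and standard users.
class User < ActiveRecord::Base
  validates :email, presence: true, uniqueness: true
end
```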

Summary

I have found that Rails 3 seeding provides an easy way to manage reference data, but it shouldn’t be used to manage test data as well. As reference data is needed in all environments, seeding must be repeatable and therefore shouldn’t be destructive to existing data, so avoid statements such as delete_all in your seeds.rb file.