Years ago, I was first introduced to the practice known as “test-driven development.” I’m not a QA expert myself, but for those who aren’t familiar, test-driven development involves writing basic automated tests of core functionality, watching them fail (because you haven’t actually built the functionality yet), and then writing the simplest code possible that will get the test to pass. For a fuller explanation, you could do worse than this Wikipedia entry. At the time I first heard of it, the idea seemed bizarre. In addition to simply not understanding, technically, how such a methodology could be implemented, I found it hard to imagine why it would be useful to do development in this way. Now, however, having thrown myself into the role of developer for the software I’m building, my perspective has changed significantly.
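To make the cycle concrete, here is a minimal sketch of red/green in plain Ruby using Minitest (the names `Cart`, `add`, and `total` are my own illustrative inventions, not anything from a real project): you would write `CartTest` first, watch it fail because `Cart` doesn’t exist yet, and only then write the simplest `Cart` that makes it pass.

```ruby
require "minitest/autorun"

# Step 2 of the cycle: the simplest code that makes the test below pass.
# In true TDD this class would be written only AFTER running the test
# once and watching it fail.
class Cart
  def initialize
    @items = []
  end

  def add(item)
    @items << item
  end

  def total
    @items.sum { |i| i[:price] }
  end
end

# Step 1 of the cycle: the test, written before the implementation.
class CartTest < Minitest::Test
  def test_total_sums_item_prices
    cart = Cart.new
    cart.add(price: 300)
    cart.add(price: 200)
    assert_equal 500, cart.total
  end
end
```

The point is less the code than the order of operations: the failing test defines “done” before you write a single line of implementation.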
First, I should explain that I probably would not have voluntarily chosen test-driven development as my methodology. Thankfully, Michael Hartl, the author of the primary resource I used to gain a foundation in Ruby on Rails, has a far, far wiser development mind than I do. His invaluable Ruby on Rails Tutorial uses test-driven development throughout. Thus, by the time I was ready to write real code, I was already more or less in the habit of writing tests first and then writing the code to make them pass. Sure, I could have skipped it. It does add a non-trivial amount of effort up front to anything you’re building — more on that later. But I had a certain degree of faith that if Hartl thought it was a good idea, it probably was. So I kept it up. And I’m glad I did.
As you have probably inferred, in the fullness of time I’ve become a big fan of test-driven development. So much so that when I’m forced to build something without having written the test first, I feel much as I suspect I would leaving the house in the morning without getting dressed. I feel naked. But before I get into why I’m a fan, let me first point out the two main downsides I see with test-driven development.
The Downsides
The most annoying downside has less to do with the discipline itself than with the current state of the art in open source automated testing. Put as simply as possible, certain flavors of automated testing for Ruby on Rails just don’t seem to be reliable, specifically those that rely on in-browser automation. For whatever reason, automated testing of ajax functionality requires a setup that leads to non-deterministic results. Even though I’m running the same set of tests every single time and have changed nothing between runs, sometimes certain tests succeed and sometimes they fail. A few weeks ago I spent literally two solid days debugging my tests. Not my code, my tests. In the end, after all that debugging, and after spending a good deal of time researching the problem on Stack Overflow to no avail, I more or less gave up. I still have the tests, and certain components continue to fail sporadically, like that flickering fluorescent bulb in your office you wish someone would change. I’ve learned to live with it, but it’s unnerving.
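For what it’s worth, the usual mitigation for this kind of flakiness is to stop asserting on asynchronous state at a single instant (or after a fixed `sleep`) and instead poll until the condition holds or a timeout expires. Here is a minimal, self-contained sketch of that idea in plain Ruby; the `wait_until` helper and the simulated background update are my own illustrations, not code from the app discussed above (browser-driver libraries like Capybara build this waiting behavior into their matchers).

```ruby
# A tiny polling helper: repeatedly evaluate the block until it returns
# truthy, or give up after `timeout` seconds. This trades a fixed sleep
# (which is either too short or wastefully long) for a bounded wait.
def wait_until(timeout: 2, interval: 0.05)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "condition not met within #{timeout}s" if Time.now >= deadline
    sleep interval
  end
end

# Usage sketch: simulate state that an ajax call would update
# asynchronously, then wait for it instead of asserting immediately.
result = nil
Thread.new { sleep 0.2; result = "saved" }
wait_until { result == "saved" }
```

Polling doesn’t cure every source of non-determinism, but it removes the most common one: a test that checks the page a few milliseconds before the ajax response lands.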
That leads to the second downside: time and effort. Even when testing works exactly as it should, it requires a non-trivial amount of effort (and thus, time) to think through and write the test. Where I’d ordinarily prefer to sit down and start writing code for my new function, I’m forced to spend several hours writing out the tests for the feature I haven’t yet built. This requires, at times, a certain spirit of mindfulness and patience which often conflicts with the compelling urgency (verging sometimes on panic) that I feel when I wake up every single morning. But I do it 1) because I have discovered I do have that kind of mindfulness and 2) because…
It Is Worth It
I experienced the benefit for the first time when I wrote what I thought was some innocuous code in an obscure corner of the application, and all of a sudden, wide swaths of tests in what seemed to be completely independent areas started failing. Oops. Whatever I did, I had just broken something major. This has actually happened to me several times, and I can pretty much guarantee that without the tests — which I run after each new function I build — I would likely not have noticed anything wrong until days later, when I finally, on a whim, decided to try out that portion of the app, possibly long after I did the actual damage. All that time and mental distance between cause and effect makes it extremely difficult to track down the source of the issue. And that assumes I discover it at all. Suddenly, those few hours of upfront time are paying huge dividends, allowing me to find significant flaws at nearly the moment they’re created and radically reducing the time and effort I spend debugging.
The other big upside is, perhaps, more subtle and intangible, but it speaks to me both as an interaction designer and as the founder, CTO and COO of the company I’m building around my software. As you may have inferred, my methodology is agile by necessity. I have minimal time and resources, and as a result I need to keep my overhead as low as possible. Unlike what I would ordinarily expect in a larger organization, I keep documents as lean as possible. Where in a larger team I’d be writing full-on requirements, I instead keep rough lists of functions in loose note form. Instead of doing full-on wireframes in OmniGraffle or Visio, I’m sketching in my notebook. And where it’s possible (and I’m feeling confident), I skip the documentation altogether. Anything that will save time and still deliver a quality result. But it’s a fine balance between speed and quality, one I’m always refining and adjusting.
And one of the things I’ve found is that test-driven development helps that balance. Earlier I wrote that one of the downsides is the up-front time and effort of having to write tests for every function before coding. Ironically, that is also an upside. Despite how it feels, in the process of being “slowed down” from the speed at which I feel I should be moving, I’m also being forced to think things through to an extent I otherwise might not. As an interaction designer, the simple exercise of forcing myself through a use case (which you must do in order to write the test) is akin to something interaction designers always strive for: thinking like a user. Sure, it’s a relatively crude way to do that compared to full-on interaction design methodologies. But it really does work. Being forced to think through something step by step from a testing perspective also forces me to really consider the assumptions I’m making about how people will use the product in the first place. That’s always a good thing. It pleases me as a designer to accomplish that, whatever the means. And as the leader of this company, it keeps me honest, not allowing me — as much as I might often feel the compulsion to do so — to sacrifice the very product at the core of this company in order to meet some arbitrary deadline I’ve set for myself. It helps me keep both the product and the company on track.