Your Mission, Should You Accept...
You've been tasked with building a sports car. Not just any sports car, but the Ultimate Driving Machine.
[Image: The Ultimate Driving Machine]
Let's take a look at how an Agile team might handle this...
Acceptance Test Driven Development
What would a customer want from this car? Excitement! And perhaps a degree of safety. Let's create a few user stories or acceptance criteria for this (the line between those two will remain blurred for this post):
- When I punch the accelerator, I'm pushed back into my comfortable seat with satisfactory acceleration.
- When I slam on the brakes, the car stops quickly, without skidding, spinning, or flipping, and drivers behind me are warned of the hard braking.
- When I turn a sharp corner, the car turns without rocking like a boat, throwing me against the door, skidding, spinning, or making a lot of silly tire-squealing noises.
In order for us to write truly clear, repeatable "acceptance tests" for a BMW, we would need to get much more specific about what we mean by "punch", "satisfactory", "slam", and "sharp". In the software world, this would involve the whole team: particularly QA/Test and Product/BA/UX, but with representation from Development to be sure no one expects Warp Drive. The team discusses each acceptance criterion to determine realistic measurements for each vague, subjective word or phrase.
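Once the team has pinned numbers to those vague words, each criterion can become an executable check. Here is a minimal sketch, assuming the team agreed that "satisfactory acceleration" means 0–60 mph in 5.0 seconds or less and "stops quickly" means 60–0 mph within 120 feet (the `Car` struct and every threshold here are invented for illustration, not BMW specs):

```ruby
# Hypothetical team-agreed definitions turned into repeatable checks.
# All names and numbers are invented for illustration.
Car = Struct.new(:zero_to_sixty_s, :sixty_to_zero_ft)

# "Satisfactory acceleration": 0-60 mph in 5.0 seconds or less.
def satisfactory_acceleration?(car)
  car.zero_to_sixty_s <= 5.0
end

# "Stops quickly": 60-0 mph within 120 feet.
def stops_quickly?(car)
  car.sixty_to_zero_ft <= 120
end

prototype = Car.new(4.6, 112)  # measurements from a (fictional) test run
puts satisfactory_acceleration?(prototype)  # true
puts stops_quickly?(prototype)              # true
```

The point is not the numbers themselves, but that the whole team agreed on them before anyone declared the story done.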
What levels of fast, quick, fun, exciting, and safe are acceptable? What tests can we run to quickly assess whether or not our new car is ready for a demo? How will we know we have these features of the car fully completed, with acceptable levels of quality, so that we don't have to return to them and re-engineer them time and time again?
Once the acceptance tests pass (and, on a Scrum team, once the demo has been completed and the stories accepted by the Product Owner), they become part of the regression suite that prevents these "Ultimate Driving Machine" qualities from ever degrading.
Now the engineers start to build features into the car. A quick architectural conversation at the whiteboard identifies the impact upon various subsystems, such as chassis, engine, transmission, environmental/comfort controls, and safety features.
What would some unit tests (aka "microtests") look like? Perhaps something like these (keep in mind that I'm a BMW customer, not a BMW engineer, and have little idea what I'm talking about):
- When the piston reaches a certain height, the spark plug fires.
- When the brake pedal is pressed 75% of the way to the floor, the extra-bright in-your-face LED brake lights are activated.
- When braking, and a wheel notices a lack of traction, it signals the Anti-Lock Braking system.
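The brake-light microtest above might be pinned down like this. A hedged sketch: the `BrakeLights` class is invented for illustration, though the 75%-of-pedal-travel rule comes from the example itself:

```ruby
# Hypothetical microtest subject: the extra-bright in-your-face LED
# brake lights activate when the pedal travels 75% of the way to the floor.
class BrakeLights
  attr_reader :leds_on

  def initialize
    @leds_on = false
  end

  # fraction: 0.0 (released) .. 1.0 (pressed to the floor)
  def pedal_position=(fraction)
    @leds_on = fraction >= 0.75
  end
end

lights = BrakeLights.new
lights.pedal_position = 0.5
puts lights.leds_on  # false
lights.pedal_position = 0.75
puts lights.leds_on  # true
```

Note how narrow the test is: one component, one rule, no customer in sight. That is exactly the point of the next section.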
I used to own a BMW. I couldn't do much to maintain it myself, except check the oil. I would lift the hood, and admire the shiny engine, noting wistfully that cars no longer have carburetors, and I will probably never again perform my own car's tune-up.
Much of what makes a great car great is literally under the hood. Out of sight. Conceptually inaccessible to Customers, Product Managers, Marketers...even most Test-Drivers. What makes the Ultimate Driving Machine work so well is found in the domain of the expert and experienced Engineer.
In the same way, unit tests are of, by, and for Software Developers.
What's the Difference?
In both cases, we write the tests before we write the solution code that makes the tests pass. Though they look the same on the surface, and have similar names, they are not replacements for each other.
Unit tests:
- Each test pins down technical behavior.
- Written by developers.
- Intended for an audience of developers.
- Run frequently by the team.
- All tests pass 100% before commit and at integration.
Acceptance tests:
- Each test pins down a business rule or behavior.
- Written by the team.
- Intended for the whole team as audience.
- Run frequently by the team.
- New tests fail until the story is done. Prior tests should all pass.
Behavior Driven Development
For a long time no one could clearly express what "Behavior Driven Development" or BDD was all about. Dan North coined the term to try to describe TDD in a way that expressed what Ward Cunningham really meant when he said that TDD wasn't a testing technique.
Multiple coaches in the past (me included) have said that BDD was "TDD done right." This is unnecessarily narrow, and potentially insulting to folks who have already been doing it right for years, and calling it TDD. Simply because many people join Kung Fu classes and spend many months doing the forms poorly doesn't mean we need to rename Kung Fu. (Nor should we say that "Martial Arts" captures the uniqueness of Kung Fu.)
I witnessed a pair of courageous young developers who offered to provide a demo of BDD for a meetup. They used rspec to write Ruby code test-first. They didn't refactor away their magic numbers or other stink before moving on to other randomly-chosen functionality. "This can't be BDD," I thought, "because BDD is TDD done well."
TDD is TDD done well. Nothing worth doing is worth doing incorrectly. I had been using TDD to test, code, and design elegant software behaviors since 1998. I wanted to know what BDD adds to the craft of writing great software.
I can say with certainty that I'm a big fan of BDD, but I'm still not satisfied with any of the definitions (and I'm okay with that, since defining something usually ruins it). A first-order approximation might be "BDD is the union of ATDD and TDD." This still seems to be missing something subtle. Or, perhaps there is so much overlap that people will come up with their own myriad pointless distinctions.
However we try to define it in relation to TDD, BDD's value is in the attention, conversations, and analysis it brings to bear on software behaviors.
In hindsight, I have already seen a beautiful demo, by Elisabeth Hendrickson, of TDD, ATDD, and (presumably the spirit of) BDD techniques combined into one whole Agile engineering discipline.
She played all roles (Product, Quality, Development) on the Entaggle.com product, and walked us through the development and testing of a real user story. She broke the story down into a small set of example scenarios, or Acceptance Tests. She wrote these in Cucumber, and showed us that they failed appropriately. She then proceeded to develop individual pieces of the solution using TDD with rspec.
Then, once all the rspecs and "Cukes" were passing, she did a brief exploratory testing session (which, by definition, requires an intelligent and well-trained human mind, and thus cannot be automated). And she found a defect! She added a new Cuke, and a new microtest, for the defect; got all tests to pass; and demonstrated the fully functioning user story for us.
All that without rehearsal, and all within about 45 minutes. Beautiful!
* I have a draft post that further describes, compares, and contrasts the detailed practices that make up ATDD and TDD, along with a little historical perspective on the origins of each. For today, I wanted to share just the Sportscar Metaphor. It's useful for outlining which xDD practices to use, and how they differ.