Saving TDD from itself

Analysis
Mar 4, 2015 | 4 mins

Unit testing is now widely practiced and poorly understood


Before Extreme Programming and TDD (Test-Driven Development) were absorbed into the Agile movement, unit testing was well understood and infrequently practiced. Some fifteen years on, unit testing is now widely practiced and poorly understood.

Recently, there have been loud and repeated calls to dump TDD. If that means dumping unit testing as it has come to be practiced, I’m all for it. But let’s not be too hasty.

Development-Driven Testing

Although it is called “test-driven,” the key to understanding TDD is its second D: development. The primary role of TDD is to produce a programmer’s scaffolding in self-checking micro-steps. This forces programmers to make their notional solutions concrete from the start, which tends to prevent questionable experiments or constructs (“code smells”). If followed diligently, TDD has the beneficial side effect of producing an executable regression test suite that is kept consistent with the codebase under test. This is necessary for continuous integration and has enabled rapid development cycles. TDD achieves all this through simple test cases programmed in a standard framework at the same time as application code is written — hence the “test-driven” name.
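The micro-step cycle can be sketched with a minimal, hypothetical example (the `normalize` function and its test are illustrative, written in Python in a pytest-style idiom):

```python
# TDD micro-step sketch (illustrative names, pytest-style).
# Step 1: write a small failing test that makes the intended
# behavior concrete before any implementation exists.
def test_normalize_strips_and_lowercases():
    assert normalize(" Hello ") == "hello"

# Step 2: write just enough code to make the test pass.
def normalize(text: str) -> str:
    return text.strip().lower()

# Step 3: refactor with the test as a safety net, then repeat
# for the next micro-step.
```

Each pass through the cycle adds one more self-checking step to the scaffolding, which is where TDD’s development value comes from.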

However, I’ve always thought that DDT, Development-Driven Testing would be more apt. The TDD canon has next to nothing to say about what to test and why. As a result, “testing” in TDD has come to mean the creation of a programmer’s scaffolding that minimally exercises basic functionality. In its fifteen-year arc, TDD has been the subject of hundreds of conferences, webinars, articles, blogs, and at least a dozen books. In all that I’ve seen, there are no new ideas for test design—only elaborations of technical minutiae for x-unit frameworks and mocking tools. The essential problems of test design, test oracles, and test effectiveness are not part of the TDD vocabulary.

The TDD backlash 

With the enthusiasm that Agile brought to TDD, its development focus engendered a kind of cargo cult that produced oceans of ineffective test code. I’ve seen, time and again, tens of thousands of TDD-produced tests that are worse than a waste of time. For example, a test case sets a widget’s (e.g., check box, scroll frame) foreground color to green, then asserts that the widget’s getter for the foreground color property returns green. This is repeated for every widget in the UI. Developers can claim they have thousands of passing tests, but the result is meaningless and brittle.
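The tautological pattern described above looks something like this (the `CheckBox` class is a hypothetical stand-in for a real UI toolkit widget):

```python
# Hypothetical widget class standing in for a real UI toolkit.
class CheckBox:
    def __init__(self):
        self._foreground = None

    def set_foreground(self, color):
        self._foreground = color

    def get_foreground(self):
        return self._foreground

def test_checkbox_foreground_is_green():
    # Tautological: this assertion can only fail if the trivial
    # setter/getter pair is broken. It exercises no application
    # logic, yet counts as a "passing test."
    box = CheckBox()
    box.set_foreground("green")
    assert box.get_foreground() == "green"
```

Multiply this by every widget and every property and you get an impressive test count that verifies nothing of consequence.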

This so-called “testing” is a waste. It consumes programmer time and energy that could be better applied to effective testing. It often results in a bloated test-codebase that rapidly becomes unmaintainable. As a result, it is not uncommon to see TDD-produced test-codebases abandoned for being too brittle and letting too many virulent bugs escape.

Not surprisingly, there’s been a backlash to this waste. David Hansson’s reflections are instructive.

Saving TDD from itself

As a programming strategy, TDD achieves many useful results. As a testing strategy, TDD cannot be trusted.

TDD can be extended to achieve meaningful testing without the waste. Here’s how.

  • Develop your code following TDD practices, except for its feeble testing notions. Instead, just implement a few simple smoke tests for each method or message in your x-unit framework. This will produce a useful scaffolding and achieve TDD’s development benefits.
  • Maintain code hygiene at all times using static analyzers and style checkers.
  • When your components are complete for a sprint or release, design a test suite that fully exercises the variation of their input and configuration domains, causes and effects, and activation sequences. The test design patterns in Testing Object-Oriented Systems explain it all. Implement these tests with your x-unit framework.
  • Run the tests and correct any bugs you find.
  • Instrument your code and rerun the test suite to measure the decision coverage (at least) or mutant-kill ratio (or better yet, both) that your test suite achieves.
  • Scrutinize the code that isn’t reached or hides mutants. Determine why, then tweak your test suites to reach these blocks and/or kill the mutants. Make judicious use of mocks only as necessary to reach coverage goals.
  • Repeat until all tests pass and your test suite produces at least 85% coverage or mutant kill-ratio. Evaluate the risks of failures in any uncovered/hiding code. Stop testing if the risk is tolerable and document your analysis in a test report. If the risks aren’t acceptable, tweak your tests and repeat.
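The mutant-kill idea in the steps above can be illustrated with a toy example that uses no particular tool (in practice you would measure coverage with an instrumenter and generate mutants mechanically; all names here are illustrative):

```python
# Original function under test.
def in_range(x, lo, hi):
    return lo <= x <= hi

# A hand-made mutant: the lower-boundary operator changed from <= to <.
def in_range_mutant(x, lo, hi):
    return lo < x <= hi

def weak_suite(fn):
    # A weak suite that never probes the lower boundary:
    # it cannot distinguish the mutant from the original.
    return fn(5, 0, 10) and not fn(11, 0, 10)

def strong_suite(fn):
    # A stronger suite adds a test on the lower boundary,
    # which the mutant gets wrong -- the mutant is "killed."
    return fn(5, 0, 10) and not fn(11, 0, 10) and fn(0, 0, 10)
```

Here `weak_suite` passes for both the original and the mutant (the mutant survives), while `strong_suite` passes for the original and fails for the mutant. A surviving mutant marks exactly the kind of untested behavior the steps above tell you to scrutinize.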

TDD’s scaffolding and focus are too good to lose. Adding test suites designed to find bugs replaces expensive waste with effective verification.

Robert V. Binder is a high-assurance entrepreneur and President of System Verification Associates. He has developed hundreds of application systems and advanced automated testing solutions. As test process architect for Microsoft’s Open Protocol Initiative, he led the application of model-based testing to all of Microsoft’s server-side APIs. He is the author of the definitive Testing Object-Oriented Systems: Models, Patterns, and Tools.

Binder began his software career in 1974 as a programmer. In 1979, he became employee number six in the Chicago office of a national contract programming company. As a developer of systems for a wide range of business and embedded applications, he used proven software engineering techniques to achieve high quality and control costs. He was promoted to account manager and then project manager. As project manager, he established a perfect track record for on-time, in-budget completion of complex multi-year software development projects.

In 1984, he founded RBSC Corporation to provide consulting in software engineering and software process improvement. He planned and facilitated many software process improvement projects, concentrating on CASE tools, methodologies and project management. RBSC augmented technical consulting with training services. After several years as a software engineering instructor at Motorola University, Binder developed sixty-five days of instructor-led professional development seminars in related subjects and presented them to thousands of developers in North America, Europe, and Asia.

In 2001, Binder founded mVerify to commercialize his unique model-based testing strategy. In 2003, he proposed and closed a $1.9M grant from the US National Institute for Standards and Technology (NIST) to fund mVerify's development of model-based testing for mobile systems. He led this project to successful completion in June 2005. As CEO, Binder was responsible for subsequent funding, eventually raising another $1.5 million from non-institutional sources. He also led development, marketing, and sales of five releases of mVerify’s commercial testing product for the Windows Mobile platform.

In 2009, Binder founded System Verification Associates to enable high assurance system development. Client engagements include assessment and improvement for a leading FDA-regulated software product company, Process Architect for Microsoft’s Open Protocols initiative, and process improvement for a leading Healthcare IT ISV.

The opinions expressed in this blog are those of Robert V. Binder and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.