Posted on: Saturday, January 23rd, 2016
Test-Driven Development: A software development process that relies on the repetition of a very short development cycle. First, the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, and finally refactors the new code to acceptable standards. (From Wikipedia)
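The cycle in that definition is easiest to see in code. Here’s a minimal sketch of one red-green-refactor pass, using a made-up order-total function purely for illustration:

```python
# One TDD cycle in miniature: the test below was written first and
# initially failed (order_total did not exist yet); the implementation
# is the minimum code needed to make it pass, then gets refactored.

def order_total(prices):
    # Minimal implementation written only to satisfy the test.
    return sum(prices)

def test_order_total():
    # Step 1: a failing test that defines the desired behavior.
    assert order_total([10.0, 2.5]) == 12.5
    assert order_total([]) == 0

test_order_total()
```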
We’ve had a good bit of success using test-driven development (TDD) on rewritten systems. It’s the ultimate flattery when a client asks you to rewrite a system you originally built for them a decade or so ago so that it uses a more modern technology; they come back to Logical Advantage because, thanks to a successful partnership in the past, they know they can trust us.
It’s our job to guarantee that the new and rewritten system produces the same correct results as the old system.
Here are two examples of rewritten systems on which we’ve performed TDD:
For both of the above systems, we generated many thousands of tests from the previously saved results in each system’s database. We also coded the new algorithms to pass the tests before adding any new features.
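To make the approach concrete, here is a sketch of what generating tests from previously saved results can look like. The table, column names, and `new_engine` calculation are all hypothetical stand-ins, not our clients’ actual schemas; the idea is simply that each row the legacy system persisted becomes one test case the rewritten code must pass:

```python
# Generate one unit test per saved legacy result: the rewritten engine
# must reproduce exactly what the old system computed and stored.
import sqlite3
import unittest

def new_engine(quantity, unit_price):
    # Hypothetical stand-in for the rewritten calculation under test.
    return quantity * unit_price

# In-memory stand-in for the legacy results database.
legacy = sqlite3.connect(":memory:")
legacy.execute(
    "CREATE TABLE saved_results (quantity REAL, unit_price REAL, total REAL)")
legacy.executemany(
    "INSERT INTO saved_results VALUES (?, ?, ?)",
    [(2, 5.0, 10.0), (3, 1.5, 4.5), (10, 0.25, 2.5)])

class LegacyParityTests(unittest.TestCase):
    pass

# One generated test method per saved row.
for i, (qty, price, expected) in enumerate(
        legacy.execute("SELECT quantity, unit_price, total FROM saved_results")):
    def make_test(qty=qty, price=price, expected=expected):
        def test(self):
            self.assertAlmostEqual(new_engine(qty, price), expected)
        return test
    setattr(LegacyParityTests, f"test_saved_row_{i}", make_test())
```

With real data, the same loop scales to the thousands of tests mentioned above, since every historical record is a ready-made input/expected-output pair.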
It is so nice to add new features to an application and to know that we aren’t breaking any of the system’s computations or dependencies; plus, maintaining and testing each system is now much easier. With TDD, we can be confident that the client won’t see any bugs in previously working computations.
That said, manual integration testing is still required. Just because the engine calculates the correct result doesn’t mean that it will continue to be correct in the database or even that it will be displayed correctly in the user interface. Even with thousands of scripted unit tests, we had to manually test about a hundred orders to make sure we’d worked all of the issues out.
Our manual tests consisted of database scripts that compared the final results saved in both systems. It’s difficult for the eye to catch small differences across hundreds of properties, so we scripted the comparison between the legacy database and the new one. As humans, we are prone to overlook certain issues in testing, so even our manual testing involves automation to prove that the answers are the same.
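A comparison script of the kind described above can be sketched like this. The table layout, column names, and tolerance are hypothetical, and SQLite stands in for the real databases; the point is the property-by-property diff between what the legacy and new systems saved:

```python
# Compare the final saved results of the legacy and new systems,
# property by property, and report any differences a reviewer's eye
# might miss. Schema and names here are hypothetical.
import sqlite3

def fetch_results(conn, table):
    rows = conn.execute(
        f"SELECT order_id, subtotal, tax, total FROM {table} ORDER BY order_id")
    return {r[0]: {"subtotal": r[1], "tax": r[2], "total": r[3]} for r in rows}

def diff_results(legacy_conn, new_conn, tolerance=0.005):
    old = fetch_results(legacy_conn, "order_results")
    new = fetch_results(new_conn, "order_results")
    mismatches = []
    for order_id in sorted(set(old) | set(new)):
        if order_id not in old or order_id not in new:
            mismatches.append((order_id, "missing row", None, None))
            continue
        for prop, old_val in old[order_id].items():
            new_val = new[order_id][prop]
            if abs(old_val - new_val) > tolerance:
                mismatches.append((order_id, prop, old_val, new_val))
    return mismatches

# Demo: two in-memory databases with one deliberate discrepancy.
legacy_db = sqlite3.connect(":memory:")
new_db = sqlite3.connect(":memory:")
for db in (legacy_db, new_db):
    db.execute("CREATE TABLE order_results "
               "(order_id INTEGER, subtotal REAL, tax REAL, total REAL)")
legacy_db.execute("INSERT INTO order_results VALUES (1, 100.0, 7.0, 107.0)")
new_db.execute("INSERT INTO order_results VALUES (1, 100.0, 7.0, 107.25)")

mismatches = diff_results(legacy_db, new_db)
```

Running the demo flags the one order whose total differs between the two databases, which is exactly the kind of small discrepancy a human scanning hundreds of properties would likely miss.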
Thanks to the use of TDD, we were able to roll out large systems with relatively few issues. And the “issues” were hardly issues at all: they were new features the customer would like, not bugs from things working incorrectly.