By Bill Lamie
President/CEO, PX5
In a world of disagreement, virtually all experienced software developers would agree that there can never be enough software testing! We have all experienced the feeling that the latest software release is defect-free, only to be confronted with yet another issue.
Fortunately, I gained this valuable experience in my early 20s while working at a defense contractor on an operating system for a military application. The delivery requirements for our operating system included error-free passing of a 24-hour stress test, where operators would load the operating system in every conceivable way and try to bring it down. The notoriety of being the one to crash the operating system proved a great motivator for every operator. When we submitted our release for the stress test, I felt confident we wouldn't have any problems, since we had spent considerable time on code reviews, testing, and the like. Sure enough, for the first four hours, everything worked perfectly… then crashed!
We quickly fixed the underlying issue and resubmitted, naively thinking we would have no further problems. Then another crash came eight hours into a new 24-hour test. This cycle repeated several times before we finally passed. The lesson I learned is that there is never enough testing. Software always has defects, which often manifest within different probability windows.
Virtually all software defects result from insufficient testing. Not having the tests to identify the defect allows it to be released into production. A common reason for this is that testing is often an afterthought. Developers frequently write lots of code, focusing solely on achieving the desired output. Only afterward is the regression test suite created. Since the software demonstrably works in typical situations, a false sense of security can arise. There may also be schedule pressure to get to market. These factors negatively impact the effort to create a complete test environment.
So, what can be done to improve testing? Testing must be moved from an afterthought to a primary activity in the software development process. Over the last several years, I've become a fan of Test-Driven Development (TDD), and I've been using it for all of my development efforts. In this methodology, the tests are written before the implementation. For example, when I implemented the POSIX pthread API pthread_create in PX5 RTOS, I started with a simple test for this API. Initially, the test didn't build because I hadn't yet written the code for pthread_create. Once the implementation was written, I could then thoroughly test the API. Extending TDD to require complete code coverage of the implementation is also a worthy goal. For example, we require 100% statement and branch-decision coverage throughout the product codebase, a significant extension of standard testing practice. If we had used this methodology on that early operating system, we might have passed the 24-hour stress test on the first try!
The old saying “an ounce of prevention is worth a pound of cure” is certainly relevant to software development. Whether or not TDD is the right choice for your application, it’s clear that elevating the importance and thoroughness of your testing is a worthwhile ounce of prevention! And even then, it’s still not enough!