Just mention the word testing and most software developers cringe. I know I look for the nearest exit when the topic comes up. That's not to say we developers want to produce buggy code. It's just that the testing phases of software development can be so time-consuming and monotonous.
What developers need are decent automated tools that test software and find problems and performance bottlenecks with as little developer intervention as possible, leaving us the fun parts: eradicating the problems and tuning up the code. Fortunately, quite a number of good testing tools are on the market today, covering the three basic categories of testing: runtime analysis (code coverage, performance, and error detection), simulation, and metrics. Let's take a more detailed look at each of these code-testing techniques and examine some example products.
Bugs on the run
Runtime analyses evaluate your code during actual program execution, checking how often and how well segments of your program perform. Code coverage shows which portions of your program your test data actually executes, and how often, so you can redesign the test data to exercise the code more thoroughly. Most testing tools give you some form of code-coverage information, using either source-code scanning or object-code insertion technologies.
An example of source-code scanning is the Branch Validator from HP (Fort Collins, CO), one of the few tools that specifically tests for code coverage. The tool parses your program's source code, finding key segments and inserting test probes into them. After recompilation, the probes automatically report during test execution when their branch in the code executes. The resulting code-coverage data tell you not only how much of your code executed during testing, but also which portions executed and how often they were exercised. Another code-coverage testing product is Pure Coverage from Pure Software (Sunnyvale, CA). It uses the technique of object-code insertion, implanting probes into key places in the object code rather than in the source code.
Another key area for runtime analysis is performance: how many times each function or subroutine is called during your program's execution and how much time is spent in each. This information is invaluable for tuning, since more than 90 percent of a typical program's execution time is spent in less than 30 percent of its routines. Identifying and tuning those most active portions of the program can yield dramatic improvements in overall performance. Pure Software's Quantify and HP PAK are examples of performance-analysis tools with very similar features. Quantify comes with an API for calling performance functions at runtime, either in your code or from a debugger, which helps you selectively analyze certain portions of your application.
I'll get you, my pretty
Perhaps the most important area of runtime testing is error detection -- finding the bugs that appear only during program execution and are not due to obvious coding errors. Examples of runtime errors are memory leaks and invalid pointer arithmetic on arrays. In C or C++, it is easy to move a pointer past the legal boundary of an array. Depending on your program's logic, execution might not stop when such an error occurs; instead, the program manifests strange behavior later. If you have ever tried to track down array-related bugs by hand, you know how hard it can be to correct them without the help of a tool.
TestCenter from CenterLine Software (Cambridge, MA) is a comprehensive suite of testing tools that includes a C and C++ interpreter that lets you exercise and test your code throughout the development process and discover runtime errors that compilers cannot detect. TestCenter includes memory-leak and code-coverage testing tools. Pure's Purify focuses on memory-related problems, including ones that occur when space is allocated during program start-up, and those appearing dynamically during execution. Purify is best known for its ability to find memory leaks or places in your code where memory pointers are discarded.
Another tool that combines code coverage and runtime analysis is Insight from Parasoft Corp. (Pasadena, CA). Insight has a good set of capabilities and is worth adding to your list.
However much testing and bug fixing you do, your software isn't bulletproof until users test it. Evaluating how your software performs in the hands of real humans is one of the most important testing practices and should happen throughout all phases of development. However, real humans aren't thorough enough -- our fingers are too slow, and we can't possibly test all the fringes of acceptable input. So we build testing tools that act like real humans, except that the tools work a lot faster and more efficiently (isn't that the point?).
For example, for terminal-oriented programs, developers traditionally have written shell scripts that simulate keyboard input and compare character output from the test program to an expected form. Easy enough, but today's event-driven style of programming (versus text menus) and accompanying graphical user interfaces (GUIs) are considerably more complicated and quite a bit more difficult to test, at least automatically. Nonetheless, some rather innovative companies have tackled the problem and offer GUI testing products.
ViSTa from VERITAS (San Jose, CA) and XRunner from Mercury Interactive (Santa Clara, CA) are example GUI testing tools. Both employ test scripts -- recorded user motions and actions that are automatically played back to the application under test. The products differ in how developers edit their scripts (versus tediously re-recording an entire script), particularly when the GUI's functionality changes, such as when you remove an editable output area from the screen. ViSTa uses Tcl (Tool Command Language), a nonproprietary scripting language that is rapidly becoming popular among software developers, as its test-script editing language. XRunner uses a proprietary language; its attraction is the ability to handle cross-platform applications, testing both Motif and Windows versions of your program.
Beyond user-interface testing, particularly with GUIs, another major area for code testing is runtime simulation. Often you will find that portions of your program just never get executed, even under fairly drastic testing conditions. That's usually because good developers try to anticipate and gracefully handle even the rarest of error conditions. But these rare conditions are also difficult or impractical to produce and, hence, test.
For example, it's a good idea for your X Window-based program to check whether the X server has returned a font structure or NULL when allocating a standard font. Because the font usually exists on the test machine, that error branch is never taken. Should you remove the font from the X server to force the error? Perhaps, but manually creating ways to force all possible error conditions for testing is usually not easy or practical. Rather, it may be easier to simulate rare errors with tools. Still, the general problem of simulation remains a difficult one, and current tools are rather rudimentary.
Finally, there's metrics testing. Metrics tools analyze source code to determine how changes in one area might affect other portions of the application. They help you understand the structure and runtime behavior of your program so you can decide where to best apply your testing efforts.
Metrics testing tools start with code-coverage information and focus on measurements of your source code, such as the complexity of a function. Hindsight from ASA (Santa Clara, CA), for example, combines code coverage, metrics, and performance analysis in a single package, while the McCabe Tool Set from McCabe & Associates (Columbia, MD) is a collection of tools focused on code coverage, complexity analysis, reverse engineering, and data tracing. However, today's metrics tools, including Logiscope from Logiscope (formerly Verilog; Dallas), tend to be quality-assurance tools rather than development tools, although they can be very useful in developers' hands. Programmers are unlikely to wade through the complex information that metrics tools generate; you'll need a company mandate to make them do so.
Testing, testing, testing
Okay, I talked about testing in this column and, against your instincts, you were kind enough to read on. Fortunately, the many testing tools -- including some very fine ones I didn't have an opportunity to mention -- make the job tolerable, even though it's never an easy task: just ask any of the major software vendors (Microsoft, for example) that haven't delivered code on time. Remember, testing often takes as long as design and coding combined, and not many programmers and developers look forward to the grind.
About the author
Brian Fromme is president of Fromme Custom Solutions in Fort Collins, CO. He can be reached at email@example.com.
Last updated: 1 January 1995.