
How new testing tools are reducing downtime
and improving software quality and performance

Is expensive downtime killing your bottom line?
Learn how one company saved time and money
through improved testing procedures

By Barry D. Bowen

SunWorld
December 1996

Abstract
Sure, rapid application development means developers can build more applications in less time. This doesn't count for much, however, if the quality isn't up to par. So much emphasis has been placed on fast deployment that software testing tools couldn't keep up. But new attention is being given to development of these tools, especially now that they're migrating to the Web. Companies are rediscovering the importance of software testing and are learning that test processes and strategies must be religiously practiced. (1,900 words)


It's just a simple matter of software -- or so goes the classic programmer's retort. And if you can't buy it or reuse it, then you have to develop it. But software is seldom if ever simple, and applications are inherently complex. At some level flaws are always hiding in the virtual fabric, waiting to be exposed.

The pace of technology is not making things easier. Monolithic mainframe-centric applications continue to give way to client/server architectures, demands for flexibility move applications from two-tier to three-tier models, and now the Net beckons applications onto internal and external Webs.

Quality assurance and testing was once a fairly straightforward proposition, but no longer. Indeed, testing must now be integrated into every phase of the computing life cycle.

New categories of development tools -- rapid application development (RAD) or visual development tools -- empower programmers to do more in less time. They also empower non-programmers to participate in the development process. Time to market is crucial. Delays are costly.

"All of a sudden the problem seems to have gotten far more complex," says Dick Heiman, a senior analyst for application development tools at market research consultancy International Data Corp. (IDC) in Framingham, MA. "The evolution of these tools did a fabulous job of empowering professional programmers -- even non-professionals -- to build very sophisticated applications fairly quickly. The testing tools fell behind the evolutionary curve."

That said, the market is working overtime to catch up. Heiman says the value of GUI and client/server testing tools went from $23 million in 1993 to an estimated $185 million this year. He is projecting a 60 percent compounded annual growth rate through the year 2000, which will put worldwide revenue -- excluding services -- over the billion dollar mark. (See figures at end of story.)

The testing tools landscape
Generally, automated testing software falls into one of several categories: development test tools, GUI and event-driven test tools, load testing tools, and bug tracking/reporting tools.

Error-detection products, such as CenterLine Software's C++ Expert, identify specific kinds of bugs that slip past compilers and debuggers. Problems such as memory leaks, out-of-bounds array accesses, pointer misuse, type mismatches, object defects, and bad parameters are typically caught with this type of testing. Catching a problem here saves a lot of time in later phases of development.

Graphical User Interface (GUI) testing tools attempt to automatically exercise all the elements of application screens to ensure they work properly. Test scripts can usually be defined manually or by capturing user activity and then simulating it. This kind of regression testing often simulates hours of user activity at the keyboard and mouse. Since the testing is based on scripts, it can be saved and repeated. This is crucial in enabling developers to easily validate an application interface after even minor changes have been made in the code.
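
In outline, a recorded GUI test is just data that can be stored and replayed. The following sketch, written in Python purely for illustration (no commercial tool's API is implied; the LoginForm stub and event names are hypothetical), shows how a captured session becomes a repeatable regression check:

# Hypothetical capture/replay sketch: a "recorded" GUI script is just data
# that can be saved and replayed against the application after every change.

# Stub standing in for the application's login screen.
class LoginForm:
    def __init__(self):
        self.fields = {"username": "", "password": ""}
        self.status = ""

    def type_into(self, field, text):
        self.fields[field] = text

    def click_ok(self):
        ok = self.fields["username"] == "admin" and self.fields["password"] == "secret"
        self.status = "Welcome" if ok else "Access denied"

# Events a capture tool might have recorded from a user session.
recorded_script = [
    ("type", "username", "admin"),
    ("type", "password", "secret"),
    ("click", "OK", None),
]

def replay(form, script, expected_status):
    """Replay recorded events and verify the screen ends up in the expected state."""
    for action, target, value in script:
        if action == "type":
            form.type_into(target, value)
        elif action == "click":
            form.click_ok()
    assert form.status == expected_status, f"regression: got {form.status!r}"

replay(LoginForm(), recorded_script, "Welcome")
print("login screen regression test passed")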

GUI testing naturally evolved into client/server testing as the feasibility of testing more features in a distributed environment seemed within reach, says IDC's Heiman. The dividing line between GUI, client/server, and load testing tools is one of degree.

Load testing tools permit complex applications to be run under simulated conditions. This addresses not only quality, but performance and scalability as well. Such stress tests exercise the network, client software, server software, database software, and the hardware server. By simultaneously emulating multiple users, load testing determines whether an application will support its intended audience. A capture program similar to those in GUI testing tools helps automate building scripts. Those scripts can be varied and replayed to simulate not only many users, but varied tasks as well.

Load testing charts the time a user must wait for screen responses, finds hidden bugs and bottlenecks, and gives developers the chance to correct them before an application is deployed. Hardware, software, database, and middleware components are stress tested as a unit, providing more accurate performance numbers. Again, because testing is controlled via scripts, tests are repeatable. If you add an index to a database and rerun the test, you can quantify the specific performance impact of that one change.
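
That before-and-after discipline is simple to demonstrate. The sketch below is an illustration only: it uses SQLite in place of whatever database a real application runs against, timing the same scripted query before and after an index is added so the effect of that single change can be quantified.

# Illustrative only: time the same scripted query before and after adding an
# index, so the impact of that one change can be quantified. SQLite stands in
# here for whatever database the application actually uses.
import sqlite3, time, random

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (rep_id INTEGER, duration INTEGER)")
conn.executemany(
    "INSERT INTO calls VALUES (?, ?)",
    [(random.randint(1, 300), random.randint(1, 600)) for _ in range(200_000)],
)

def timed_query():
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*), AVG(duration) FROM calls WHERE rep_id = 42").fetchone()
    return time.perf_counter() - start

before = timed_query()
conn.execute("CREATE INDEX idx_rep ON calls (rep_id)")
after = timed_query()
print(f"before index: {before*1000:.2f} ms, after index: {after*1000:.2f} ms")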

Load testing can help predict how a system will perform as usage increases. Tools generally permit user loads to be incremented and tracked so that performance degradation can be clearly isolated.

When applications must support a greater number of users, load testing quickly determines the outcome regarding quality and response time. Developers can re-use scripts to alter the user levels, transaction mixes and rates, and the complexity of the application. Load testing is the only way to verify the scalability of components as they work together.
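
Stripped to its essentials, a load driver runs the same scripted transaction from many simulated users at once and steps the user count upward. The following sketch is hypothetical: its transaction() function merely sleeps, standing in for the client, network, and database work a real test script would exercise.

# Bare-bones load-driver sketch: run a scripted transaction from many
# simulated users at once, step the user count upward, and report the average
# response time at each level. transaction() is a placeholder for real work.
import threading, time, random, statistics

def transaction(results):
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))   # placeholder for client/server/database work
    results.append(time.perf_counter() - start)

def run_load(user_count):
    results = []
    threads = [threading.Thread(target=transaction, args=(results,)) for _ in range(user_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return statistics.mean(results)

# Step the simulated load upward; against a real system this is where
# response-time degradation would begin to show.
for users in (10, 50, 100, 200):
    print(f"{users:>4} users: mean response {run_load(users)*1000:.1f} ms")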



A case in point
Even as testing tools are catching up on development technologies, modern IT managers are learning that quality and performance are not ensured simply by selecting good testing tools. Proper testing processes and strategies must be ingrained into the corporate culture.

RAD without quality accomplishes nothing. IT managers need to stop fixating on which testing tools to use and focus on the real issue: how to get the job done well. The testing methodology must be integrated with the development methodology, says Matt Hotle, a research director specializing in quality issues at Gartner Group (Stamford, CT).

Compartmentalizing testing into discrete, linear steps ignores larger issues until the final phases of development or early deployment. In today's client/server world many applications depend upon several computers, various application modules, and the network to function well. Even if all the pieces work well independently, that does not mean they will perform well as a unit.

"We have to do a better job of bringing quality control functions in at the beginning of the life cycle," says Mike Bauer, director of planning services for EDS' (Dallas, TX) client server group. Getting users and testers involved at the earliest stages of development is crucial to producing quality applications more quickly. Processes must be more inclusive and comprehensive, he says.

An hour of downtime in one of the customer care centers of Lucent Technologies (Parsippany, NJ) costs $50,000, so the firm takes application and network quality issues very seriously, says Michael Maslowski, customer support and technical architecture director for the $2 billion firm created by AT&T's restructuring.

Early in Maslowski's client/server experience, development schedules were slipping by three or more months. He attributed this to the complexity of testing the GUI- and event-driven aspects of the application. Manual testing required 20 or 30 users to pound away at an application and try to find problems. Even as problems were identified and solved, the lack of controlled regression tests meant that there was no way to ensure the fix did not break something that previously checked out. The manual process, therefore, remained hit or miss.

Automated testing tools not only freed up a great deal of manpower for Maslowski, they also provided greater control. Once test scripts were defined they could be saved and repeated whenever necessary.

The use of quality assurance testing tools in the development process will not suffice, however. Just because an application system works well when it is deployed, doesn't mean it will continue to function problem-free over time. Minor tweaks and modifications can take a cumulative toll, and systems that run well with 100 users may break with 200.

A case in point was Lucent's Customer Representative Service System (CRSS). Maslowski was handed the responsibility of nursing this ailing client/server help desk system back to health.

CRSS used PowerBuilder-generated client interfaces, 3270 terminal emulation code to access some mainframe functions, a custom-built case-based reasoning module, and a Sybase database. Maslowski needed to figure out why the application could not support 170 of the 300 service representatives who had to use the system.

Many complex applications have a steep knee in the performance curve, Maslowski says. They scale well within certain parameters, but then everything can fall apart. Determining whether the culprit is faulty code, a lack of hardware horsepower, or poorly structured databases is very difficult without a testing lab and automated tools.

AT&T selected Softbridge Technologies' (Edison, NJ) Automated Testing Facility (ATF) and kept the CRSS hobbling along for production users, while simulated users exercised another copy in the lab, 24 hours a day for two to three weeks.

Having restored the CRSS, Maslowski now makes load testing an essential part of quality screening and is using it to monitor production systems.

Automated scripts execute during the work day to measure the response times of production applications. That data is logged and tracked, so that support engineers are able to see a developing problem before it becomes a crisis.

In the testing phase of new or revised applications, Maslowski's team defines the expected break points -- where scalability will begin to fail. Whether or not the data result in engineering changes before deployment, IS has a clear projection of the amount of growth the application can handle and can use that data to avoid unpleasant surprises.
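
A production monitor of the kind Maslowski describes can be reduced to a loop that times a representative transaction at intervals and appends the result to a log. The sketch below is a generic illustration, not Lucent's system; the probe() function and the alert threshold are placeholders for a replayed script against the live application.

# Generic monitoring sketch: periodically time a representative transaction
# against the production system and log the result, so support engineers can
# watch a trend develop before it becomes a crisis. probe() is a placeholder.
import time, random

THRESHOLD_SECONDS = 2.0   # assumed alert level, not taken from the article

def probe():
    """Stand-in for replaying a recorded script against the live application."""
    time.sleep(random.uniform(0.2, 0.6))

def sample_once(logfile="response_times.log"):
    start = time.perf_counter()
    probe()
    elapsed = time.perf_counter() - start
    with open(logfile, "a") as log:
        log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {elapsed:.3f}\n")
    if elapsed > THRESHOLD_SECONDS:
        print(f"warning: response time {elapsed:.2f}s exceeds threshold")

# In production this would run on a schedule throughout the work day.
for _ in range(3):
    sample_once()
    time.sleep(1)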

On the Web
With database-intensive applications migrating to the Web, it is easy to see how load testing can greatly benefit companies. Load testing firms are now beginning to deliver the goods.

Extending load testing to the Web will allow far greater control and predictability before Web-based applications are deployed for the world to use. As with client/server database applications, load testing will permit Web applications to be tuned during development and will let users quantify the performance effects of changes made to production applications.

"The Web is a natural extension of client/server technology, so it is natural to expect load testing to migrate to the Web," says IDC's Heiman. He adds that this is, however, a brand new market niche that is starting out at zero in 1996.

Pure Atria (Sunnyvale, CA) enhanced the technology it acquired when it purchased load testing veteran Performix. PurePerformix/Web now simulates calls from clients using HTTP, and permits real-world Web application simulations -- including those using Java applets -- with a nearly unlimited number of users.

"As the Web takes off, more companies will want to leverage it for deploying mission-critical applications. Load testing is the critical element to this evolution," says Jeff Straathof, business unit manager of Pure Atria's load testing products.

Mercury Interactive (Sunnyvale, CA), maker of GUI and load testing tools, also updated its portfolio with LoadRunner 4.0, which includes Web Test, a facility to capture and simulate HTTP traffic to a Web server. And last month SQA (Cambridge, MA) followed with the "Euclid" release of SQA LoadTest, also able to simulate HTTP traffic.

"By enabling load testing tools to generate HTTP messages and interpret a Web server's response," says IDC Heiman, "the testing tools stay out from between the Web server and the database. Thus they should do a good job of simulating real-world stress on Web-based applications."






About the author
Barry D. Bowen is an industry analyst and writer with the Bowen Group Inc., based in Bellingham, WA. Reach Barry at barry.bowen@sunworld.com.



(c) Copyright Web Publishing Inc., an IDG Communications company


URL: http://www.sunworld.com/swol-12-1996/swol-12-test.html