IT Architect

Testing, testing 1, 2, 3

Learn how to effectively and efficiently test your applications

By Driss Zouak

SunWorld
February  1999

Abstract
Software testing is one of the most crucial, overlooked, and daunting assignments a developer can face. This month, IT architect Driss Zouak devises a useful testing methodology to help you and your testing team produce solid software without too much pain. (5,200 words)


Foreword
In the application development world, considerable emphasis is placed on the design and construction of software. This month, IT Architect tackles the problems associated with testing -- lack of interest, lack of time, lack of process -- by discussing different approaches to this very necessary stage of development. Our sample testing project is one that many technologists can relate to these days; it's an application to sell products over the Web. When projects cut testing phases, they increase the risk to the business deploying the application. Bugs in the field, software fixes done as maintenance instead of development, and having to implement "service packs" all take time and effort. These efforts cost money and have other implications for the people and processes within organizations. Will businesses continue to put up with this? Whether you're developing commercial software products or internal IT applications, this month we provide advice on how to produce solid, complete, fully operable software.

-- Kara Kapczynski

Testing is usually perceived as the cruel drudgery that follows humbly behind the noble challenge of completing "the big code." Myths and rituals surrounding testing abound. Most developers, while they don't want to be a part of testing, will readily agree that it should be done.

Most software developers first experience testing as a trial by fire: when the product they're developing hits a critical transition point or milestone, the senior developers on the team turn to them (the juniors or new hires) and inform them that they are going to be part of the testing team. One senior developer sacrifices himself to lead the crusade, and off the team goes, somewhat blindly, into the wilderness with a bunch of printed-out tests under one arm. Sound familiar?

My first practical experience was like that -- only we didn't have a leader. I was an intern for eight months at a telecommunications company. A month after I joined, the product's development reached an "alpha release" milestone. The development manager picked me along with the new hires for design verification (testing), gave us a stack of printed-out scripts to run through, some "testing" computers to work on, and off we went, stumbling forward.

Whenever we found a bug, we would enter it into a tortuously convoluted bug-tracking system. Whenever there was a new build, we had to restart our tests from the top. The ritual was boring, repetitive, and grueling. Every day we would skip more and more tests with justifications like "Oh, these two are similar enough" or "It wasn't broken before." Unfortunately, we had cut corners and not done our regression testing properly, so, of course, the product failed in the field with the first alpha tester. Worse yet, it failed in ways the test scripts never anticipated, because the scripts had been created by the developers themselves after they finished the previous milestone -- and developers are unlikely to write tests they suspect might break the code they have just written.

While we had some semblance of a testing methodology, it was so devoid of managerial and developer support that it was worse than if we hadn't done any testing at all: it gave us a false sense of confidence in our product. There seems to be a perception that if managers allocate sufficient time and resources for testing, quality will automatically result. The problem is that developers often don't know how to go about testing.

Good testing practices result not so much from following a particular methodology as from earnestly following the methodology you choose. On a recent project for a major computer manufacturer, we had two testing teams -- one devoted to functional testing and one to performance and stability -- and not a lot of time for testing. The two teams used different testing methodologies: one focused on the front end from a black box point of view, the other on the back end from a "whiter" box point of view. Both teams were successful in their tasks because they knew what they needed to accomplish, had a method for getting there, and knew how to communicate and analyze their findings.

In this article I'll provide a basic, straightforward method for doing, documenting, and thinking about testing. The main topics we'll cover are breaking down the myths about testing, the types of tests, where to start, testing objectives, building a test plan, executing the tests, tying up loose ends, and planning a debugging strategy.



Breaking down the myths
One of the most common myths about testing is that size matters; that is, that the size of a project determines the level of testing. In truth, the depth and breadth of testing necessary, and the amount of time devoted to it, depend on the nature of the project itself, not necessarily on its size. Clearly, a larger project needs more testing time to cover all of its functionality, but you should be testing throughout the software's lifecycle, which means you'll go through iterations of determining your test objectives, building your test plan, and executing those tests (all described below). Saving the grand testing party for the end of a large project dramatically increases the probability of finding a "show stopper," a problem with the software that is too big to deal with.

Another common myth is that testing is intended to show that the software works. This is a reasonable idea for developers because software development is a constructive process, so we naturally want to demonstrate that our application (or system) behaves as advertised. Software testing, however, is a destructive act: we need to find every way in which we can confuse, break, maim, and crash the application. It usually requires a completely different type of person than software development does. Testers are the yin to developers' yang; they provide balance, which results in better quality.

Consumer versus nonconsumer
Consumer applications usually have quality assurance (QA) teams, separate from the development team, devoted to testing them. QA teams aim to determine where and how an application is defective and communicate their analysis to the development team for repairs. Their purpose is to reduce technical support calls to the company, thus reducing costs. I believe consumer-application QA has matured further than nonconsumer-application testing because the former directly affects a company's bottom line and reputation in the marketplace.

Nonconsumer applications -- applications that aren't for public consumption (for example, internal IT applications and business-to-business applications) -- usually don't have dedicated QA teams. Typically, the development team and the testing team are one and the same. I suspect this occurs due to a lack of regular testing work, or perhaps it's due to the corporate team structure.

As I mentioned earlier, most developers learn about testing methodologies by being on a team that uses one. The problem is that this is more of a peer-pressure knowledge model: "No one else is doing it, so why should I? It's only going to make more work for the team." This model tends to rely on tradition, and thus doesn't reward any extra effort to establish solid testing methods. In the end, developers usually feel they're just creating more unnecessary work for themselves.

Some companies, like Cambridge Technology Partners, are religious about sharing their knowledge and experience within the company. This allows for a cross between an elder knowledge model (sharing experience) and an innovator knowledge model (sharing ideas). The result is developers who leverage the testing knowledge and experience of peers they may never work with directly. These developers are then recognized by others as having valuable knowledge.

Types of tests
In my opinion there really are only two types of tests: black box and white box. The definitions of these two types of testing vary somewhat from software engineer to software engineer. In general, black box testing is where you test something without knowledge of its inner workings, providing input only through its interfaces and checking behavior by examining the results that come back. For example, you would test a function by inputting parameters and checking the return value. You could also black-box test a Web application simply by interacting with the GUI and checking the values that appear on screen and in the database.
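
For example, here's a minimal sketch (in Java) of a black box test of a hypothetical pricing function. The class name, method signature, and expected values are invented for illustration, and a stand-in implementation is included only so the sketch runs on its own:

// Black box sketch: the test knows only the inputs and the expected output,
// not how computeTotal works internally. All names here are hypothetical.
public class PriceCalculatorBlackBoxTest {

    // Stand-in implementation so the sketch is self-contained; a real black box
    // test would call the production class without looking inside it.
    static class PriceCalculator {
        static double computeTotal(double unitPrice, int quantity) {
            return unitPrice * quantity;
        }
    }

    public static void main(String[] args) {
        // Input: unit price 19.99, quantity 3; expected output: 59.97.
        double result = PriceCalculator.computeTotal(19.99, 3);
        if (Math.abs(result - 59.97) < 0.001) {
            System.out.println("PASS: computeTotal(19.99, 3) returned " + result);
        } else {
            System.out.println("FAIL: expected 59.97, got " + result);
        }
    }
}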

White box testing is different in that you test with full knowledge of the application's inner workings. The aim is to test the inner pieces by exercising every path through the code. An if statement, for example, has two paths: one if the condition is true, another if it's false. If you were testing a Web application's login code, you would test the results of logging in with both valid and invalid user IDs. With some analysis, you can usually reduce the number of paths you have to check, since some paths repeat or are mutually exclusive.
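
As a minimal illustration of path coverage, here's a sketch (with invented names, echoing the login example used later in this article) showing the two inputs needed to exercise both paths of a single if statement:

// White box sketch: there is one if statement inside checkPassword, so two
// test inputs are needed, one per path. The logic and names are hypothetical.
public class LoginPathCoverage {

    static String checkPassword(String userId, String password) {
        if ("Bug".equals(password)) {          // path 1: condition is true
            return "Hello " + userId + ", you're logged in now.";
        } else {                               // path 2: condition is false
            return "Login failed.";
        }
    }

    public static void main(String[] args) {
        System.out.println(checkPassword("Driss", "Bug"));    // drives the true path
        System.out.println(checkPassword("Driss", "wrong"));  // drives the false path
    }
}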

As developers, we need to test our objects and methods from the white box perspective, ensuring that every avenue through the code exhibits the correct behavior. I generally create small test programs that exercise my objects or methods to ensure they're working correctly. Creating such test programs for unit testing is an important and responsible software engineering practice. It's also important to comment the code of these tests, as they may be of value to the testing team later. At a later stage, the testing team will create black box tests concerned with interfaces and return values.
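
Here's a rough sketch of what such a small test driver might look like. The ShoppingBasket class below is a trivial stand-in written only so the driver runs on its own; in practice the driver would exercise the real production object:

// A small unit-test driver a developer might keep alongside an object.
public class ShoppingBasketTestDriver {

    // Trivial stand-in class so the driver is self-contained.
    static class ShoppingBasket {
        private final java.util.List<Double> prices = new java.util.ArrayList<>();
        void addItem(double price) { prices.add(price); }
        int itemCount() { return prices.size(); }
        double total() {
            double sum = 0;
            for (double p : prices) sum += p;
            return sum;
        }
    }

    private static int failures = 0;

    // Tiny check helper that reports pass/fail instead of stopping at the first error.
    static void check(String name, boolean condition) {
        System.out.println((condition ? "PASS: " : "FAIL: ") + name);
        if (!condition) failures++;
    }

    public static void main(String[] args) {
        ShoppingBasket basket = new ShoppingBasket();
        check("new basket is empty", basket.itemCount() == 0);

        basket.addItem(299.99);
        basket.addItem(19.99);
        check("two items counted", basket.itemCount() == 2);
        check("total is sum of item prices", Math.abs(basket.total() - 319.98) < 0.001);

        System.out.println(failures == 0 ? "All checks passed." : failures + " check(s) failed.");
    }
}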

A key practical difference between white box and black box testing is that black box testing can easily hide problems that white box testing will uncover. For example, black box testing can miss problems within a group of objects when the defects cancel each other out and produce seemingly correct short-term behavior. White box testing would reveal such a defect through the misbehavior of one or more pieces of the group.

Where to start
The primary purpose of testing is to break the software: to find out how and when the software will break (so that the end user won't). Most teams perceive testing as "that which comes after development." When development is almost finished, the testing team is assembled and begins testing however it sees fit. Here's a joke to illustrate that thinking:

Two guys, Jerry and Bob, decide one day that they've found the perfect cliff on which to build the new project for their company. They get some special cable, make sure it's properly fastened, add a winch, take a test jump with the cable attached, and then call the boss: "It's ready!"

The boss arrives, sees the installation, and smiles. He asks the two guys if they've solved the client's problem and tested it properly. Proudly they profess they did so that morning. The boss leaves and comes back the next day, stating that the client is ready for them.

The two guys, a bit confused, look around to see no one else. They confirm with the boss that the client is indeed ready, and the boss asks Bob to test the system. Bob attaches the cable to his leg and jumps yelling "Bungi!"

The boss is shocked and turns to Jerry, "Is this how you tested the system?" Jerry replies, "Yeah, why?"

About a minute later, to Jerry's surprise, he sees Bob bounce back up with a panicked look on his face. He grabs Bob and pulls him in.

Bob says, "Boss, the clients were at the bottom. They were swinging things at me!"

The boss replies, "This is how you've been testing our piñata delivery system? Luckily that cable is defective!"

In this case, Jerry and Bob built the system and tested it so that they felt it worked. The problem was the scope of their tests didn't cover the functional requirements of the project.

The moral of this story is that testing should be thought about during design. A development team needs a testing lead who is responsible for making sure all tests and documentation are completed, as well as a development lead who is responsible for making sure the coding is done well and that the testing lead's tasks are factored in when planning the use of shared resources (the developers). The rest of the development team should be responsible for ensuring they produce quality code, backed by white box testing scripts and/or driver programs.

The testing lead should be involved in the design; this will help him understand what all the "moving pieces" are and how they interrelate. Others might argue that it's best for a testing lead to deal with the application as a complete neophyte (from a new user perspective). I believe that some tests benefit from the neophyte perspective and some require in-depth knowledge: both black box and white box testing.

Testing objectives
Now we start down the path to thinking about formal testing. First, we need to identify what our testing objectives are. We'll start with a look at the business and technical requirements as well as some specific cases. What are some of the key system functions? What business transactions are occurring? Does the application need to scale? What does scalable mean in this context? Is it the number of users? If so, how many is the app supposed to handle concurrently? Or does scalability mean the amount of bandwidth (data throughput) it has to be able to handle in a certain period of time? Or is it the size of the database or a particular table? What about reliability?

As you identify your testing objectives, determine their completion criteria. At what point will your team be satisfied that the objective has been met? Next, prioritize the objectives relative to the risk a failure poses to the success of the application. If the failure of one objective would mean users getting kicked off a Web site after 15 minutes, that is probably less important than an objective that says a user must be able to log in or to purchase an item. You may want to distinguish between major and minor objectives, where the major objectives must be met before you consider the application "releasable."

Of course, all of this has to be documented in your testing document. Throughout the remainder of the article, I'll use an example of an e-commerce Web site project to help illustrate the concepts and documentation style. As I mentioned above, when you're defining objectives, you also need to define their priority and their completion criteria. Here's an example:

Objective name: Item purchase
Test objective: Ensure a user can purchase items
Test priority: Medium
Completion criteria: Test conditions are prepared and run to validate that users (new and existing) can purchase one or more items, that the invoice e-mailed to them matches the amount owed in the database, and that those values correspond with the actual price of the items.

Objective name: User login
Test objective: Ensure the user must log in with a valid password to obtain access to the Web site
Test priority: High
Completion criteria: Five (5) different users are tried with all combinations of correct/incorrect user ID and password. The system allows logins only for registered users with a valid user ID and password. If the user types in a URL directly, he is forwarded to the login page.

Test plan
Once you've identified testing objectives, you can start building the test plan. Building the test plan should take place during the design and development phases of the project -- the application must be completely designed and partially built before you can complete the test plan due to dependencies on the implementation.

With the objectives and the completion criteria identified, you next need to determine how to attain the completion criteria. In order to determine this, you must dissect your application from both a technical and a business point of view. For this, focus on tasks such as identifying the smallest complete transactions the application performs, identifying the user scenarios, and identifying the nonuser scenarios (such as automated batch jobs). This list is not exhaustive and is intended to serve as a starting point.

Doing such analysis will allow you to determine your scenarios as well as help you to create functional areas. Functional areas are clusters of code that are black box tested together to form those "smallest transactions." In defining functional areas, you need to specify what objects and methods (or what functions for procedural code) are included. Here's an example using our e-commerce Web site example:

Functional area name: Quick login
Objects/code involved: AuthenticationEngine object (methods: login, verifyUser)
UserProfile object (methods: getUserProfile)

Functional area name: Full login
Objects/code involved: AuthenticationEngine object (methods: login, verifyUser)
UserProfile object (methods: getUserProfile)
ShoppingBasket object (methods: retrieveForUser)
Billing object (methods: findOutstandingBillsForUser)
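
To make the idea concrete, here's a rough sketch of the objects behind the "Quick login" functional area. The class and method names come from the definitions above, but their signatures, behavior, and the canned data are assumptions made purely for illustration:

// Sketch of the pieces a "Quick login" functional-area test would exercise together.
public class QuickLoginFunctionalArea {

    // Assumed shape of the AuthenticationEngine named in the functional area;
    // the signatures and the hard-coded user are invented for this sketch.
    static class AuthenticationEngine {
        boolean verifyUser(String userId, String password) {
            return "Driss".equals(userId) && "Bug".equals(password);
        }
        boolean login(String userId, String password) {
            return verifyUser(userId, password);
        }
    }

    // Assumed shape of the UserProfile object named in the functional area.
    static class UserProfile {
        String getUserProfile(String userId) {
            return "Profile for " + userId;
        }
    }

    public static void main(String[] args) {
        // A test of this functional area drives only these two objects as a unit.
        AuthenticationEngine auth = new AuthenticationEngine();
        UserProfile profiles = new UserProfile();
        if (auth.login("Driss", "Bug")) {
            System.out.println(profiles.getUserProfile("Driss"));
        } else {
            System.out.println("Login failed.");
        }
    }
}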

Once the functional areas are defined, you can begin building tests that will exercise them. The same functional area may show up in several tests, which is expected, as it may be used in different contexts or with different data.

Here's an example of a test:

Test name: Simple login - 1
Purpose: Test that a typical user with a correct user ID and password can log into the system.
Functional areas tested: Quick login
Input: User ID = "Driss", Password = "Bug"
Procedure: Kill all browser instances and then open a new browser.
Go to main home page.
Select the login button.
Enter user ID and password, click Enter.
Output: Screen should display "Hello Driss, you're logged in now."
Browser cookie is set with my name and favorite saying.
Automated script: Test Manually. Don't have an automated test tool yet!
Comments: None.

The next step is to create tests for invalid logins: a correct user ID with an invalid password, an incorrect user ID with a correct password, and an incorrect user ID with an incorrect password. This ensures that all paths are covered.
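
Here's a sketch of how those user ID/password combinations might be driven from a single table of inputs. The verifyUser stand-in below is invented for illustration and simply hard-codes one registered user so the sketch runs on its own:

// Drives all four user ID/password combinations from one table of cases.
public class LoginCombinationTest {

    // Stand-in for the application's real authentication check.
    static boolean verifyUser(String userId, String password) {
        return "Driss".equals(userId) && "Bug".equals(password);
    }

    public static void main(String[] args) {
        // Each row is {user ID, password, expected result}.
        Object[][] cases = {
            {"Driss",  "Bug",   true },   // correct ID, correct password
            {"Driss",  "wrong", false},   // correct ID, incorrect password
            {"nobody", "Bug",   false},   // incorrect ID, correct password
            {"nobody", "wrong", false},   // incorrect ID, incorrect password
        };
        for (Object[] c : cases) {
            boolean actual = verifyUser((String) c[0], (String) c[1]);
            boolean expected = (Boolean) c[2];
            System.out.println((actual == expected ? "PASS" : "FAIL")
                    + ": verifyUser(" + c[0] + ", " + c[1] + ") returned " + actual);
        }
    }
}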

I bet you're thinking: "Boy, this testing stuff is a lot of work. I think my methods to this point are just fine." Some testing methodologies can be quite intimidating at first, especially if you haven't had a simple one to start with (like this one). Dedication to a method is necessary to make it a success, and a testing team needs to have a basic testing framework before it can start improving software.

At the analysis stage, you identified scenarios (user and nonuser). Now you need to define those scenarios in terms of the tests. A testing scenario has a definition, which explains its purpose, and a list of tests, which are used to demonstrate that the scenario can be completed successfully. Here's an example:

Scenario name: New impulse buyer with immediate logout
Scenario definition: This user scenario covers a new user coming to the site, registering, making an immediate purchase of an item (a TV), and logging out.
Tests used: New user registration (User ID = Driss, Address = nothing, etc…)
Simple login 1
Purchase a single item
Logout

Scenario name: Sending batch data to CIS
Scenario definition: This nonuser scenario covers the daily, automated action of the application to e-mail the day's purchases to the CIS system.
Tests used: Bundle data for CIS
Send bundle to CIS
Confirm data in CIS
Delete data from database
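
One way to keep scenarios honest is to treat each one as an ordered list of named test steps and run them in sequence, stopping at the first failure. Here's a minimal sketch, using the "New impulse buyer" scenario above; the step bodies are placeholders only:

// Runs a scenario as an ordered list of named test steps.
public class ScenarioRunner {

    // Placeholder step runner: in a real harness each step name would map to an
    // automated test; here every step simply "passes" so the sketch runs.
    static boolean runStep(String name) {
        System.out.println("Running: " + name);
        return true;
    }

    public static void main(String[] args) {
        // The ordered test list from the "New impulse buyer" scenario.
        String[] newImpulseBuyer = {
            "New user registration",
            "Simple login 1",
            "Purchase a single item",
            "Logout"
        };
        for (String step : newImpulseBuyer) {
            if (!runStep(step)) {
                System.out.println("Scenario failed at: " + step);
                return;
            }
        }
        System.out.println("Scenario completed successfully.");
    }
}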

By defining functional areas and then tests, you often find you missed defining some functional areas. By defining scenarios in terms of tests, you in turn see whether you missed any tests. Next, you associate the scenarios with the objectives, which will reveal any missing scenarios. Here's an example:

Objective name: Login
Scenario name: New user login 1
Comments: Do three times in a row

Scenario name: New user login 2
Comments: Do five times

Scenario name: Manual URL 1
Comments: Try with IE after logging in successfully under Netscape

Test execution
The last task is to actually execute the tests. For every new build, you'll need to execute all your tests, regardless of past successes and failures. This is called regression testing (and it's what I didn't do enough of during that first testing assignment as an intern). Keep in mind that bug fixes may break previously verified functionality.
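
A regression run can be as simple as a harness that executes every registered test against the new build and records the results alongside the build number. Here's a minimal sketch; the test bodies and the build identifier are placeholders:

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal regression harness: every test runs against every build.
public class RegressionRun {

    // Placeholder test bodies; real tests would drive the application itself.
    static boolean runSimpleLogin()              { return true; }
    static boolean runInvalidLoginCombinations() { return true; }
    static boolean runSinglePurchase()           { return true; }

    public static void main(String[] args) {
        // Hypothetical build identifier; in practice it would come from the build system.
        String buildNumber = args.length > 0 ? args[0] : "build-042";

        // Every test runs every time; never skip "the ones that passed last build."
        Map<String, Boolean> results = new LinkedHashMap<>();
        results.put("Simple login - 1", runSimpleLogin());
        results.put("Invalid login combinations", runInvalidLoginCombinations());
        results.put("Purchase a single item", runSinglePurchase());

        System.out.println("Regression results for " + buildNumber + ":");
        for (Map.Entry<String, Boolean> r : results.entrySet()) {
            System.out.println("  " + (r.getValue() ? "PASS  " : "FAIL  ") + r.getKey());
        }
    }
}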

The most important thing when testing is to keep track of the results, the factors that may have caused a test to fail, and whether the failure was repeatable. Was it repeatable on different machines and/or configurations? It's important to know.

Testing leads will need to work with the development team to nail down what is actually failing within the application. Too often I see testing teams log a bug into the bug-tracking system with a message like "There is a bug in the login." This comment is useless. You need to ask: What user ID and password were used? When did the failure occur? On what build number? This is the type of information you'll use in conjunction with the development team to nail down the cause of the bug.
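
The sketch below lists the kind of details worth capturing with every logged bug; the field names and sample values are illustrative only, not a prescription for any particular bug-tracking system:

// Illustrative record of the details a useful bug report carries.
public class BugReport {
    String summary;           // short description of the failure
    String buildNumber;       // the exact build the failure occurred on
    String inputsUsed;        // e.g., which user ID and password were entered
    String stepsToReproduce;  // what the tester did, in order
    boolean reproducible;     // did it fail again on a second attempt?
    String environment;       // machine, OS, browser, and configuration

    public static void main(String[] args) {
        BugReport report = new BugReport();
        report.summary = "Login fails for a registered user";
        report.buildNumber = "build-042"; // hypothetical value
        report.inputsUsed = "User ID = Driss, Password = Bug";
        report.stepsToReproduce = "Open a new browser, go to the home page, select login, enter credentials";
        report.reproducible = true;
        report.environment = "Test machine 3, Netscape";
        System.out.println(report.summary + " (" + report.buildNumber
                + ", reproducible=" + report.reproducible + ")");
    }
}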

It's important to note that bugs can be more than just an indication of programmatic incorrectness; they can indicate architectural design flaws, which can be extremely serious. Make sure to watch for patterns of bugs that may point to such a problem.

Use an automated testing tool for your tests (Mercury Interactive's LoadRunner or RSW Software's e-Test Suite work well). These tools will also allow you to organize and schedule your tests. Most tools come with standard reports that are created by adding specific reporting code to your scripts.

Automated test tools have their own set of myths, but I've found these tools to be very valuable. To get the maximum benefit out of your tool, figure out what you need it to do and report. What software components need testing (database, HTML, GUI)? I also recommend that you try out different tools. Most vendors offer time-limited (usually 30-day) copies on their Web sites, or will express-ship you a trial CD.

When the tests are completed, the testing lead should sign off on the software to indicate that it has been completely tested and meets the testing criteria. If the software didn't meet all the testing objectives (or all the major objectives, if you used a major/minor scheme), it's a judgment call as to whether or not to sign off and authorize release of the software. Even if the testing lead decides not to sign off, the decision may be overruled, but at least it will be clear who took responsibility for the release. Remember, it's the test lead's job to help make crucial decisions, but test leads who never release software aren't helping anyone.

Tie up loose ends
Make sure all of your testing documentation is finished and that the automated test scripts and any helper programs you wrote to help test are safely stored. Most likely, someone will use these as templates going forward. Also, you may find yourself needing them again six months down the road while testing the new release.

One last piece of documentation that I recommend at the end of a project is to create a software engineering document that covers the results of the tests, as well as a listing of the limitations and intents of the system. This serves as a reminder that the system was designed for a specific purpose and was built and tested to satisfy certain criteria. By testing it beyond its specifications you're able to identify the failure points in the system. Consider a bridge, which is designed to support a certain amount of weight and compensate for a certain amount of erosion. For safety purposes, engineers will test a bridge well beyond the specifications. We should do the same to our software.

Debug your thinking
One element that is often completely forgotten, and that can undermine all the effort you put into a testing strategy, is debugging. You not only need to understand how you're going to test the system, you need to know how you're going to debug the application. Often it's taken for granted that developers instinctively know how to do this. Most programmers rely on the debugging software in their development environment, but if you're building an application that involves concurrency, asynchronous communication, or distribution across systems, you're going to have a hard time finding bugs with a standard debugger. Know the limitations of your debugger and the type of debugging you may need to do.

One of the basic debugging strategies is to use a logger. A logger (which usually runs as a separate thread in an application) receives time-stamped messages of what's transpiring in the application. Of course, developers must insert the log messages when coding the application. Developers also need to agree on how verbose the messages must be so as to avoid similar functional areas having grossly different amounts of logging information. These log messages provide a trace of the internal behavior of the application, which can be vital in determining the source of a bug.

A good logger timestamps every message, supports configurable levels of verbosity, adds as little overhead as possible to the running application, and writes its output somewhere durable (such as a log file) so the trace survives a crash.
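
Here's a minimal sketch of such a logger, assuming the structure described above (a queue of time-stamped messages drained by a separate logging thread, with an agreed verbosity level). In a real application the output would go to a file rather than the console, and the area names below are only examples:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Application threads queue messages; a separate thread writes them with timestamps.
public class SimpleLogger implements Runnable {
    public static final int ERROR = 0, INFO = 1, DEBUG = 2;

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final int verbosity;

    public SimpleLogger(int verbosity) { this.verbosity = verbosity; }

    // Called by application threads; cheap, it just queues the message.
    public void log(int level, String area, String message) {
        if (level > verbosity) return;  // respect the agreed verbosity
        String stamp = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS").format(new Date());
        queue.offer(stamp + " [" + area + "] " + message);
    }

    // The logging thread drains the queue and writes the trace.
    public void run() {
        try {
            while (true) {
                System.out.println(queue.take());  // a real logger would write to a file
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleLogger logger = new SimpleLogger(INFO);
        Thread loggingThread = new Thread(logger, "logger");
        loggingThread.setDaemon(true);  // let the JVM exit when main finishes
        loggingThread.start();

        logger.log(INFO, "AuthenticationEngine", "login attempt for user Driss");
        logger.log(DEBUG, "UserProfile", "filtered out at INFO verbosity");
        logger.log(ERROR, "Billing", "could not find outstanding bills for user Driss");

        Thread.sleep(200);  // give the logging thread a moment to drain the queue
    }
}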

Another debugging strategy to use in conjunction with a logger is a profiler, which can let you peer into your code while it's running. Profilers normally show you which threads are suspended, which ones are running, how much memory you're using, etc.

A truly alternative debugging strategy is to do proofs of correctness. This comes from the computer science side of software engineering. Proofs can be extremely helpful, but they require some practice to get used to.

Spending some time planning your debugging strategy will make development easier and testing more productive.

Conclusion
In essence, good testing practices come down to three basic concepts:

Know what you need to accomplish.
Have a method for getting there.
Know how to communicate and analyze your findings.

I use the method I have described above and continue to improve it over time. It has helped me tackle the first two items; the third tends to come with experience. The greatest illusion that hampers forethought and preparation is the belief that we can fix the application once it is in production and/or deployed. In my experience, it's almost always too late by that point.

My thanks to Bob Gagnon, a senior technology architect at Cambridge Technology Partners, for his thoughts and materials related to testing.


About the author
Driss Zouak has been a developer for many years and specializes in the software engineering process. He is currently a senior technologist at Cambridge Technology Partners. Reach Driss at driss.zouak@sunworld.com.
