Seeing is believing: Using network visualization for capacity planning
Network visualization tools must do more than paint pretty pictures of your network. They must help you plan for its future. We give you tips for network modeling and tell you how to avoid implementation mistakes.
In the network-centric world of Solaris, the network flows through the center of an enterprise like a river. Users have easy, powerful network access. But that end-user convenience makes maintaining a steady flow an upstream task. What's out there? How well is it working? And the million-dollar question -- what will happen to it if a new application is deployed? All of these questions and more can be answered through network modeling as a form of capacity planning. But don't be misled by tools whose sole mission is to offer a graphical view of the network. Without the ability to use a model for planning, those tools may offer more glitz than value. (2,400 words, including sidebar, "18 tips for network modeling.")
A picture may be worth a thousand words, but is it worth tens of thousands of dollars when the subject is your network? Only if that picture is used for proactive capacity planning rather than reactive troubleshooting. The network visualization tool whose sole purpose is to paint a pretty picture of your network may not be worth the expense, many users believe. When the effort to model the network is used for capacity planning, however, seeing is believing.
"Some of these [modeling and visualization] tools cost $55,000. We kind of choked at that," says Steve Elias, network engineer for Merrill Lynch, in Princeton, NJ. "Even $20,000 or $30,000 may be too much, depending on the features. Also, a lot of time and effort goes into setting up a proper model for a piece of equipment. That's not what we wanted to do."
What Elias's team at Merrill Lynch wanted to do was what every reasonable network manager wants to do: accurately predict the effect new applications will have on the network before they go live. But sometimes the most common-sense goals are actually the hardest to achieve. In the case of network modeling, the obstacles include gathering information from a decentralized, heterogeneous environment; building your own tool suite because the current market has yet to offer one that does it "all"; and testing the model thoroughly enough that its data can be trusted.
With network-centric environments such as Solaris, understanding, managing, and ultimately predicting network traffic is a particular challenge, says Larry Ciraulo, staff network consultant for Sun Microsystems, in Milpitas, CA. Sun itself has recently taken a more proactive position by deploying network modeling tools for SWAN, the Sun Wide Area Network.
"Up until a few months ago, when we started on a big push for capacity planning, we didn't have a formal capacity planning methodology. We had no way of being sufficiently prepared for new applications. With Solaris and Unix in general, people have so much access to the network that they do a lot of NFS mounting in different servers all over the network. We see a lot of traffic [from applications] that we don't necessarily know about. In that environment, it's more critical to look at what's going on before you deploy," Ciraulo says.
The home base advantage
The first goal in taking the reins of your network is to create a baseline model, a replication of the network as it currently exists. (See sidebar, "18 tips for network modeling.") "Until you know what's normal, you don't know what's not normal," summarizes Ciraulo.
Can you skip the modeling portion altogether and use a network visualization tool that gives an ever-updating picture of the current network? Yes, but you'd be giving up "what-if analysis," the major benefit that picture could offer. "Visualization tools seem to be hot -- stuff like virtual reality or 3D drawings of an abstract graphical picture or screen where the device is [depicted] with size and shape -- this line represents a WAN link, for instance. That stuff is hyped. It's a Mickey Mouse gimmick to me," condemns Ciraulo. "We take it to the next level: network bandwidth, pattern of usage, network infrastructure between apps. Our client/server network is distributed in branches with local sites. When we start adding sites, how does that affect the overall network picture?"
Questions like that call for "what-if" scenarios: situations in which a network administrator alters data to see how the model reacts. If a company were planning to deploy a new Lotus Notes application, for instance, network managers could simulate the traffic between the server and its clients and see the consequences that increase in traffic has on various aspects of the network. By using the baseline as a comparison, trouble can be identified before the application goes live. If a bottleneck is detected, what-if scenarios take much of the guesswork out of finding a solution. A bottleneck might be removed by adjusting a router configuration, shifting the server to a different network segment, or installing more bandwidth. Which of these is the most cost-effective solution? What-if scenarios can give a good indication. In fact, a good model answers four strategic questions, according to Henry Steinhauer, capacity planner at Hewitt Associates, LLC, a financial services firm in Lincolnshire, IL.
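The arithmetic behind a simple what-if pass can be illustrated in a few lines. The sketch below, with entirely hypothetical link names, capacities, and traffic figures, adds a proposed application's projected load to a baseline of per-link utilization and flags any link that would run past a chosen threshold.

```python
# Hypothetical what-if sketch: add a new application's projected traffic to
# baseline link loads and flag any link that would exceed 70% utilization.

BASELINE_KBPS = {            # measured peak load per link (illustrative numbers)
    "branch1-hq": 900.0,
    "branch2-hq": 400.0,
    "hq-datacenter": 6_000.0,
}
CAPACITY_KBPS = {            # raw capacity of each link
    "branch1-hq": 1_544.0,       # T1
    "branch2-hq": 1_544.0,       # T1
    "hq-datacenter": 10_000.0,   # Ethernet
}

# Projected traffic the new application adds to each link it crosses.
NEW_APP_KBPS = {"branch1-hq": 300.0, "hq-datacenter": 1_200.0}

THRESHOLD = 0.70  # flag links projected to run hotter than 70%

for link, capacity in CAPACITY_KBPS.items():
    projected = BASELINE_KBPS[link] + NEW_APP_KBPS.get(link, 0.0)
    utilization = projected / capacity
    status = "BOTTLENECK" if utilization > THRESHOLD else "ok"
    print(f"{link:15s} {utilization:5.0%}  {status}")
```

Real modeling tools account for protocol overhead, queuing delay, and routing, but the underlying comparison against a baseline is the same.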
Build your own perfect tool
Several steps are required before that initial baseline is constructed and subsequent what-if scenarios are run. First, you must determine what data to gather, how you are going to get it, and which tool you will use to build the model.
Ideally, the tool you use will have a "discovery module" that automatically collects much of the necessary data using RMON (remote monitoring), SNMP (Simple Network Management Protocol) queries, installed Sniffers, and other techniques, and uses it to construct an initial model. Not all network discovery tools perform modeling, however, and not all modeling tools have a discovery piece.
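To give a feel for the raw data such a collection pass gathers, here is a minimal sketch of turning the standard MIB-II interface counters into a utilization figure. The snmp_get helper is a hypothetical stand-in for whatever SNMP library or command-line tool you actually use; the OIDs are the standard ifInOctets, ifOutOctets, and ifSpeed columns.

```python
# Minimal sketch of the arithmetic behind SNMP-based collection: poll the
# MIB-II interface counters twice and turn the deltas into utilization.
import time

IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10"   # ifInOctets
IF_OUT_OCTETS = "1.3.6.1.2.1.2.2.1.16"  # ifOutOctets
IF_SPEED = "1.3.6.1.2.1.2.2.1.5"        # ifSpeed (bits per second)

def snmp_get(host: str, oid: str) -> int:
    """Hypothetical helper: issue an SNMP GET and return the value as an int."""
    raise NotImplementedError("replace with your SNMP library or snmpget call")

def link_utilization(host: str, if_index: int, interval: int = 300) -> float:
    """Sample ifIn/ifOutOctets over `interval` seconds; return utilization 0..1."""
    in1 = snmp_get(host, f"{IF_IN_OCTETS}.{if_index}")
    out1 = snmp_get(host, f"{IF_OUT_OCTETS}.{if_index}")
    time.sleep(interval)
    in2 = snmp_get(host, f"{IF_IN_OCTETS}.{if_index}")
    out2 = snmp_get(host, f"{IF_OUT_OCTETS}.{if_index}")
    speed = snmp_get(host, f"{IF_SPEED}.{if_index}")
    # Simple shared-segment formula: bits moved in both directions over the interval.
    bits = ((in2 - in1) + (out2 - out1)) * 8
    return bits / (interval * speed)
```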
NetSys from Network General is an example of a discovery product, while NetArchitect from Zitel Corp. is an example of a modeling product. Zitel officials say that NetArchitect was originally intended as a tool for designing new networks, a case where discovery wouldn't be an issue. Now that NetArchitect is aimed at network capacity planning as well, however, a discovery module is in development, they say. Still, products are available in the market today that do both, such as the NetMaker XA suite from Make Systems Inc., of Mountain View, CA. If the modeling tool does not automatically seek and find data from other devices, you may have to input such statistics manually, a tedious and time-intensive operation.
Lack of a discovery module may not necessarily mean that the tool should be eliminated altogether. In fact, experienced capacity planners say that the first preconception newcomers need to abandon is the quest for the perfect, all-inclusive tool. The truth is, no one tool will do everything you want or need. Many network administrators find themselves kludging together features from several tools or writing custom applications to fill in the gaps.
"Where a product like Zitel's NetArchitect comes into play is that it allows you to model at a number of different topologies. If I have a workstation connected through this router connected to a client, and I don't care about delays between the client and the server, I can adjust for only the information I want to see," says Steinhauer. "Typically a product will have a single strength. It's hard for products to have multiple strengths. You will not find a panacea. There is not one company that has the whole thing so you need to work with companies that work with each other."
Fine tuning data collection
Even if the tool purports to discover your network for you, don't expect data collection to be that simple. First of all, these discovery modules can only see what they are empowered to see. Devices that support RMON but don't have it enabled, or devices the tool doesn't recognize, will not factor into the automated picture.
Also, look for tools that ease the modeling itself. Modeling can be like rocket science -- complex to understand and perform. Ideally, the tool will include a library of models from which yours can be customized. However, such customization is only of value if the library includes the key pieces of your network, says Ciraulo.
For others, the ability to import and export data is among the most critical features. "Integration and an open interface are important. Can I export so that I can bring its data into other tools? I will always want to analyze in ways they can't support -- I don't want them to create it for me. Don't tell me it's proprietary. That's cutting off your nose to spite your face," Steinhauer says.
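As a trivial illustration of the kind of open export Steinhauer asks for, the sketch below (field names and sample rows invented for the example) writes collected link statistics to a plain CSV file that any spreadsheet, statistics package, or home-grown script can read.

```python
# Sketch: dump collected link statistics to CSV so other tools can analyze them.
# The field names and sample rows are illustrative, not from any product.
import csv

samples = [
    {"timestamp": "1997-06-02T09:30", "link": "branch1-hq", "utilization": 0.81},
    {"timestamp": "1997-06-02T16:50", "link": "branch1-hq", "utilization": 0.22},
]

with open("link_stats.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["timestamp", "link", "utilization"])
    writer.writeheader()
    writer.writerows(samples)
```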
Once the model is built, its value increases as it is used and understood. Probably the biggest error made by those new to modeling is attempting to include too much detail, say industry experts. "One of the mistakes that I see a lot is when people try to model it down to too small a detail. If a server has 20 different transactions, they include all of them when only two are taking up 60 percent of the resources. In that case, you'd want to model one and two and lump all the rest in 'other,'" explains Marilyn Kanas, director of software marketing for Zitel Corp.
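Kanas's advice translates into very little code. This sketch, with made-up transaction names and resource shares, keeps only the transactions above a chosen share of the server's resources and lumps the long tail into a single "other" bucket.

```python
# Sketch of the aggregation Kanas describes: model the dominant transactions
# individually and roll the rest into "other".  Numbers are illustrative.
resource_share = {           # fraction of server resources per transaction type
    "order-entry": 0.38, "price-lookup": 0.22, "report-gen": 0.09,
    "login": 0.06, "audit": 0.05,  # ...and a long tail of small ones in real life
}

KEEP_THRESHOLD = 0.20        # model anything above 20% individually

modeled = {name: share for name, share in resource_share.items()
           if share >= KEEP_THRESHOLD}
modeled["other"] = sum(share for name, share in resource_share.items()
                       if share < KEEP_THRESHOLD)
print(modeled)   # order-entry and price-lookup kept; the rest rolled into "other"
```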
Another failing is to rush into using the model before verifying the accuracy of the baseline. Data for the baseline should reflect peak times (such as 9:30 a.m., when the entire company checks its e-mail) and off times (such as 4:50 p.m.). It should also reflect seasonal traffic and be gathered over a thoughtfully considered period, such as 24 hours a day for two weeks during the slow season and another two weeks during the busy season.
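One way to make sure the baseline captures both extremes is to summarize the raw capture into busy-hour and off-hour figures. The sketch below uses invented sample data and an assumed 9-to-5 business day; a real capture would cover the full two-week windows described above.

```python
# Sketch: summarize a round-the-clock capture into busy-hour and off-hour
# figures so the baseline reflects both.  Sample data is illustrative.
from statistics import mean

# (hour_of_day, utilization) samples from the collection tool
samples = [(9, 0.78), (9, 0.82), (12, 0.55), (17, 0.20), (3, 0.05), (10, 0.74)]

BUSY_HOURS = range(9, 17)    # assume 09:00-17:00 is the business day

busy = [u for hour, u in samples if hour in BUSY_HOURS]
off = [u for hour, u in samples if hour not in BUSY_HOURS]

print(f"busy-hour mean {mean(busy):.0%}, peak {max(busy):.0%}")
print(f"off-hour  mean {mean(off):.0%}, peak {max(off):.0%}")
```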
Even so, how do you know how accurate your model is? There are many ways; among them is meticulous testing. That means generating a baseline you feel confident about and representing a new rollout on it (using estimated performance statistics from the application developer). The results the model predicted should then be compared with those of a live test, and any discrepancies explained. Another good bit of advice is to get help while you're learning: many of the largest system integrators have capacity planning experience, and many vendors offer these services as well.
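The comparison itself can be as simple as the following sketch, which lines up the model's predicted utilization against live-test measurements (all numbers invented) and flags any link where the discrepancy is large enough to need explaining.

```python
# Sketch of the verification step: compare predicted vs. measured utilization
# and flag links whose error exceeds a chosen tolerance.  Numbers are illustrative.
predicted = {"branch1-hq": 0.78, "branch2-hq": 0.31, "hq-datacenter": 0.66}
measured = {"branch1-hq": 0.74, "branch2-hq": 0.45, "hq-datacenter": 0.64}

TOLERANCE = 0.10   # accept up to 10 percentage points of error

for link, p in predicted.items():
    m = measured[link]
    flag = "investigate" if abs(p - m) > TOLERANCE else "ok"
    print(f"{link:15s} predicted {p:.0%}  measured {m:.0%}  {flag}")
```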
"With modeling, you do all this math and work, then do you really feel comfortable with the results?" questions Cheryl Haines, director of product marketing for Network General.
With time and experience, capacity planners can answer with a confident yes.
About the author
Julie Bort is the author of Building an Extranet (John Wiley & Sons) and a freelance writer in Dillon, CO. She wrote SunWorld's May 1997 cover story, "The wiser, gentler data warehouse." Reach Julie at firstname.lastname@example.org.