Measuring Web server performance

Tips for making your Web server as fast and efficient as possible

By Ed Tittel

September  1997

In this follow-up to SunWorld's September 1996 feature story, "Benchmarking the Web," we examine approaches to measuring Web server performance, including the pluses and minuses of benchmarking. We also offer suggestions for making your Web server run more effectively. (2,900 words with sidebar, "SPECweb96 revisited one year later")

Ultimately, performance must be defined as the capability of a system to do what needs to be done, as quickly and efficiently as possible. When it comes to Web servers, specifying exactly what performance means can be somewhat dicey. But it's an interesting exercise to attempt, simply because an ever-increasing number of companies and organizations depend on Web servers as vital elements in their efforts to communicate with the world at large. In an environment where 1,000-plus percent growth has been the norm for the past five years, delivering one's information goods through this medium is no longer a luxury but a necessity for most organizations.

Measuring Web server performance really means measuring the ability of a particular server to respond to client requests for its services. This sounds simple, but given that network latency -- the delay inherent in moving requests from clients to servers and their concomitant responses from servers to clients -- can be quite long on a network of global proportions like the Internet, adequate or accurate measures of performance can be difficult to produce.
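
To make the measurement problem concrete, here's a minimal client-side sketch (in Python; the URL is hypothetical) of the kind of number such a measurement produces. The elapsed time lumps together network latency in both directions with the server's own processing time, which is exactly why a single figure observed from one client on the open Internet can be hard to interpret.

    import time
    import urllib.request

    # Hypothetical URL; substitute a document on the server being measured.
    URL = "http://www.example.com/index.html"

    start = time.time()
    with urllib.request.urlopen(URL) as response:
        body = response.read()
    elapsed = time.time() - start

    # Elapsed time includes round-trip network latency plus server
    # processing time; bytes received gives a crude throughput figure.
    print("%d bytes in %.3f seconds (%.1f KB/sec)"
          % (len(body), elapsed, len(body) / 1024.0 / elapsed))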


Approaches to measuring performance
When it comes to measuring server performance, there are two basic approaches one can take:

  1. By examining the amount of data delivered by a particular server over a particular period of time, it's easy to calculate what volumes it is handling and how it works under that specific set of real-world circumstances. This technique, alas, suffers from over-specificity in that it applies only to the server being measured, over the time it's being monitored. It's also extremely difficult to compare the efficiency of the Web server that XYZ Corp. uses to dish up press releases and product spec sheets to the world's widget fanciers with that of ABC Inc.'s Web server, which provides access to its database of pharmaceutical research information only to paying customers with authorized accounts and passwords. Measurement of real performance is accurate and timely, but not at all general.

  2. By executing some kind of standard benchmark, it's possible to see how a Quack9000 eight-CPU server stacks up against an MBI six-way RISC machine. Within reason, both machines will run the same workload and execute the same sequence of operations over time. Because the only thing that's supposed to differ between them is the hardware (and possibly the operating system and Web server software as well), benchmarks make it easy to say which of the two machines is faster or more efficient. The problem here is that no matter how well-researched and informed the benchmark might be, it has to diverge to some degree from the real-world workload that any random user community would inflict on that server, either to grab XYZ's latest Widget Watch newsletter or to peruse the metabolic uptake of some new antihistamine in ABC's drug testing results database. (A minimal sketch of such a fixed-workload driver follows this list.)
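
The essence of the benchmark approach is that the workload is fixed, so any difference in the numbers can be attributed to the systems under test. The sketch below (Python; the server name, URL list, and request count are all invented for the example) replays an identical set of requests against whatever server it is pointed at and reports aggregate throughput. A real benchmark such as SPECweb96 adds a carefully researched workload mix, many simulated clients, and strict run rules, but the idea is the same.

    import time
    import urllib.request

    # Hypothetical fixed workload: the same URLs, in the same order,
    # are requested against every server under test.
    SERVER = "http://server-under-test.example.com"
    URLS = ["/index.html", "/products.html", "/images/logo.gif"]
    PASSES = 100

    requests = 0
    bytes_received = 0
    start = time.time()
    for _ in range(PASSES):
        for path in URLS:
            with urllib.request.urlopen(SERVER + path) as response:
                bytes_received += len(response.read())
            requests += 1
    elapsed = time.time() - start

    print("%d requests, %d bytes in %.1f sec (%.1f requests/sec)"
          % (requests, bytes_received, elapsed, requests / elapsed))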

In actual practice, most smart IS professionals tend to pay attention to both approaches: They'll gather what anecdotal evidence they can from other IS professionals, trade shows, and even vendors about specific implementations (and the more like one's own environment they are, the better); but they'll also pay scrupulous attention to whatever benchmarks are available, to help narrow the field when choosing between apparently equal systems.

The benefits -- and downsides -- of benchmarking
Today, benchmark options for Web servers are somewhat limited: Other than the Standard Performance Evaluation Corporation's (SPEC's) SPECweb96 benchmark, most of the options available were developed by platform vendors like Silicon Graphics, Hewlett-Packard, or Sun Microsystems, or by Web server providers like Netscape or O'Reilly & Associates. No matter how objective such benchmarks may be, their origin makes them suspect to anyone who does not partake of the wares -- hard or soft though they may be -- of the particular vendor that built the benchmark in question.

Unfortunately, while SPECweb96 provides a level playing field for vendors to compare results across multiple platforms or server software versions, it has already fallen prey to the forces of history. Although this benchmark is designed to remain consistent, the workload that it uses for measurement purposes no longer matches current real-world workloads as closely as it might. That's because the model of Web document requests that drove SPECweb96 assumed that all client requests were for static Web pages, while the Web has become increasingly dynamic over the past year. In fact, the evidence points to the Web becoming much more dynamic still, especially with Dynamic HTML, Java, ActiveX, and numerous other Web technologies aimed at upping the interactivity of Web discourse becoming more widespread. (For more information about SPEC's activities in this area, see the sidebar, "SPECweb revisited one year later.")

Until the benchmarking wizards at SPEC (or elsewhere) can catch up with the kinds of workloads that real Web sites everywhere must contend with today, the best kind of information that such benchmarks can provide is tangential. Within the limitations of what they can measure, such benchmarks permit savvy IS professionals to compare how one configuration stacks up against another. But the value of this comparison is severely hampered by its diminishing relevance to real-world situations. We'd recommend using such benchmark comparisons in the final stages of the selection process, only when all other factors appear equal between two distinct configurations.

The magic -- and mayhem -- of measurement
The other side of the performance assessment coin is to measure what's going on in a particular set of circumstances. Here, conventional wisdom dictates that IS professionals follow a five-step plan to try to deal with increasing demand for information services of any kind:

  1. State the operational requirements: Set down what information the Web server is to deliver, to what audience, over some specific period of time. It's important that you understand what's required before you can assess what changes or enhancements might be needed.

  2. Monitor, collect, and evaluate performance metrics: Using Web server logging facilities and OS-level performance monitoring tools, obtain information about what the system is doing and how it behaves under the loads it experiences. (A minimal log-summarizing sketch appears below this list.)

  3. Analyze the data, identify causes, and isolate bottlenecks: By definition, a bottleneck is any factor that limits system performance. Performance monitoring data can only suggest possible bottlenecks; hard-boiled analysis and real detective work are necessary to identify causes. But once a bottleneck is identified, it can be addressed -- if only by replacing an older, slower system with a newer, faster one.

  4. Set measurable objectives and implement changes: Once bottlenecks are identified, outright cures or workarounds are within reach. It's important to state explicitly what kinds of effects system changes should provoke or cause. It's even more important to state them in a way that can be measured objectively.

  5. Forecast the effects of changes: It's important to state what results a change should produce and to compare actual against anticipated results. This is the only metric that can measure success.
This is the kind of recipe that never ends. Ideally, step 4 should feed back into step 2, to make sure that the effects of changes can be appropriately measured. Likewise, step 5 should feed back into step 3, if only because eliminating any bottleneck only causes the next most limiting system factor to make itself felt.
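
As a small illustration of step 2, the sketch below (Python; the log file name and the Common Log Format layout are assumptions about your particular server) boils an access log down to the kind of raw numbers that the analysis in step 3 starts from: total requests, total bytes served, and the most heavily requested documents.

    from collections import Counter

    # Assumed Common Log Format, for example:
    # host - - [10/Sep/1997:12:00:00 -0500] "GET /index.html HTTP/1.0" 200 2326
    LOGFILE = "access_log"   # hypothetical path to the server's log

    requests = 0
    bytes_served = 0
    hits = Counter()

    with open(LOGFILE) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 10:
                continue                # skip malformed lines
            path, size = fields[6], fields[9]
            requests += 1
            hits[path] += 1
            if size.isdigit():          # size is "-" when no body was sent
                bytes_served += int(size)

    print("%d requests, %d bytes served" % (requests, bytes_served))
    for path, count in hits.most_common(10):
        print("%6d  %s" % (count, path))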

The limitations of measurement are based on effort and implementation. For one thing, no real-world situation can be measured until it's been fully implemented. Only then will the real Web server be illuminated by actual user behavior and demand. Likewise, no implementation comes without effort: it takes planning, training, and elbow grease to turn a plan for a Web server into a working runtime environment. Most IS professionals prefer to have some idea of whether or not a proposed solution is workable in advance, rather than being forced to rely on the "try it and see" method.

The best and worst of both worlds
Of course, the critical limitations of each approach explain why both benchmarking and measurement remain equally important. Benchmarking's critical limitation -- that it models real-world behavior with only some degree of success -- dictates that benchmarking results must be offset by research into similar applications already in use. This means looking at what one's competitors and colleagues are doing, but it also means listening to whatever scuttlebutt is available on the street (and in the trade press). As a last resort, outright testing may be needed (but this usually happens only when the planned investment is quite large).

Likewise, measurement requires real systems to be deployed in real situations. But this must be mitigated by analyzing available benchmarks and by using the seat of one's pants to guesstimate the differences between whatever reality is modeled by a benchmark and the reality that any particular Web server is likely to encounter from its user community.

Other people's measurements and conventional wisdom about what works and what doesn't will always play a role in the process of selecting (or assessing) particular combinations of hardware and software.

What's known about contributing factors?
Given the dialectic between benchmarking and measurement, a surprisingly useful body of knowledge about Web server performance is available, if one is willing to read the research carefully -- and sometimes between the lines. The following set of aphorisms sums up the best of what we've been able to glean from ongoing research into what makes Web servers as fast and effective as possible. (A link to the original work, "Benchmarking the Web," that led to this list is included in the Resources section below.)

All of these recommendations will do some administrators some good; no one situation will be able to employ (let alone benefit from) all of them. Try the ones that cost the least first and move on from there. Somewhere in this list is at least one tip that can add to your server's ability to do its job.

The final test of performance
The real touchstone for performance is whether or not your users can get what they need from your Web server. Some of the performance enhancements we recommend are more expensive, time-consuming, or resource-intensive than others. We count on your discretion -- but also on your need to satisfy user demand -- when it comes to choosing which approaches will work best for you. Some of our suggestions (for example, disabling reverse DNS lookups) make sense, no matter what your circumstances might be.
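
For instance, on an Apache server (our example here; most other Web servers have an equivalent setting) reverse DNS lookups can be turned off with a single directive, so the access log records raw IP addresses instead of holding up each hit while a DNS query completes:

    # httpd.conf -- log client IP addresses rather than resolving hostnames
    HostnameLookups Off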

Others (for example, switching from CGIs to Web server-specific APIs) may involve tradeoffs that you don't want to make. Only you can decide what to try, but only subsequent measurement can determine if the changes you make produce the desired results. That's why our closing recommendation is: Don't forget to check your work!


About the author
Ed Tittel is a principal at LANWrights, Inc., an Austin, TX-based consultancy. He is the author of numerous magazine articles and more than 25 computer-related books, most notably "HTML for Dummies," 2nd Ed., "The 60-Minute Guide to Java," 2nd Ed., and "Web Programming Secrets."


Sidebar

SPECweb Revisited One Year Later

A development effort that originally received mention in 1996 on the SPEC Web site as SPECweb97 is now known internally at SPEC as SPECweb98. Indeed, SPEC's current plans are to deliver a new set of Web benchmarks sometime in the first quarter of 1998. While some observers might be tempted to call this "a slip," the extra time actually reflects significant and welcome enhancements that will appear in SPEC's next set of Web benchmarks.

Gyan Bhal, the chair of the SPECweb committee, explains his group's efforts as an attempt to expand the original set of Web benchmarks "to keep pace with the rapid pace of change in Web technologies and the sites that use them."

The original SPECweb96 benchmark relied completely on static pages for its model of activity. This, according to Bhal, "no longer represents a typical workload for Web activity" (and all the experts we consulted share his opinion). This change in workload required the SPECweb committee members to design a new, more dynamic set of activities for the benchmark.

To answer the need to match more dynamic Web server behaviors, the committee is building a new set of benchmarks that uses dynamic HTTP GETs. These GETs use CGI scripts to generate content on the fly. At present, the CGI scripts are implemented in Perl. However, vendors will have the opportunity to use their own APIs as well. Given recent research on the process overhead involved in running CGIs, this latter allowance may help Web server vendors to produce faster benchmark results and to model dynamic Web activity more accurately.
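
To make the distinction concrete, a dynamic GET hands the request off to a program that builds the page on the fly, which is where the extra process overhead comes from. SPEC's scripts are written in Perl; the sketch below is merely an illustrative CGI program in Python that shows the general shape of such a handler (the query parameter and the generated page are invented for the example).

    #!/usr/bin/env python3
    # Illustrative CGI program: the server launches a new process for each
    # request, the script reads the query string, and the response is
    # generated on the fly rather than read from a static file.
    import os
    import urllib.parse

    query = urllib.parse.parse_qs(os.environ.get("QUERY_STRING", ""))
    name = query.get("name", ["world"])[0]   # hypothetical parameter

    print("Content-Type: text/html")
    print()
    print("<html><body>")
    print("<h1>Hello, %s</h1>" % name)
    print("<p>This page was generated dynamically by a CGI process.</p>")
    print("</body></html>")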

Currently, Sun SPARCstations of many flavors, HP workstations, and IBM RS/6000 machines are the primary focus for development. But Bhal is quick to point out that "as always, SPEC's intention is to create a platform-neutral set of benchmarks." To that end, the SPECweb committee is using completely standard Perl 5.0 for its programming efforts and is "determined to create the most portable Web benchmark possible."

A number of other items under consideration for implementation in the upcoming SPECweb98 benchmark are also quite interesting.

Not only is this an ambitious agenda, it's one that appears particularly well designed to provide a more realistic model of the workload that Web servers need to handle most effectively. "The more real world our benchmark can be," adds Bhal, "the better it will do its job of providing realistic measurements of -- and comparisons among -- individual Web servers. That's the primary goal that drives the SPECweb effort."

Based on our own research into the state of the Web, the only element that's missing from the group's agenda is one that incorporates some support for vendor-specific Web server APIs. One unanimous conclusion among all Web performance researchers has been that CGI or other Web extensions that run in processes separate from the Web server add significantly to the processing overhead involved in making the Web more interactive. But given that both Netscape and Microsoft, among others, have distinct, proprietary, and incompatible implementations, we have no trouble understanding that SPECweb may not have the energy or resources to tackle this kind of problem. And that, of course, is why the SPECweb98 benchmark will also permit vendors to substitute their own APIs in place of the Perl scripts that SPEC will provide.

But when it comes to Web server-specific APIs, we feel compelled to issue the following warning: When comparing standard SPECweb98 benchmarks against vendor-constructed "equivalent implementations" that use proprietary APIs, remember that those implementations do not transfer across Web server platforms. Consider this: once you cross over into any vendor's proprietary Web development environment, you've made a commitment to a way of doing things that may be harder to break than it was to create in the first place!

For more information about SPECweb and the Standard Performance Evaluation Corporation (SPEC), please visit the SPEC Web site.
