Optimizing your HTTP server software
Get that Web server daemon screamin'
A few minor changes to your server configuration files can make a world of difference in your server's performance. Take a moment to fine-tune your server process before you waste another CPU cycle! (2,300 words)
A Bit Of History
Before getting into the gory details, it helps to know how the current
crop of web servers evolved.
In the beginning, there were two web servers available: the CERN server and the NCSA server. They supported the same HTTP protocol but differed in their security and configuration options. Although the CERN server predates the NCSA server, the NCSA server's broader range of features made it the more popular choice. Last year, CERN got out of the server business and turned over the CERN server to the World Wide Web Consortium. Version 3.0 of the CERN server, released in August of last year, contains most of the features that had been in the NCSA server for some time prior.
As the NCSA server evolved, more and more features were added to support a broad range of webmaster requests. Some of those webmasters elected to create their own plug-compatible replacement for the NCSA server. Because their initial replacement was just a large set of patches for the NCSA server, they dubbed their project the "Apache server" (a "patchy" server, get it?). The Apache project relies upon the contributions of a large number of developers who contribute their time and effort to create one of the most advanced servers available in the public domain. The Apache server has since been completely rewritten from scratch using a modular architecture that makes expansion and customization easy.
Both the NCSA and Apache servers are in wide use on the Web and offer a set of features that let you tune the server for optimal performance; the remainder of this column will examine these features and how to use them.
The httpd.conf File
All of the parameters that control your server configuration are contained in a single file named httpd.conf. This file is usually located in a directory named conf within the installation directory for the server. Any webmaster should be at least aware of this file; you had to edit it to start your server in the first place!
In the following sections, we'll be discussing various directives and options that can be placed into this file to control the behavior of your server. Unless otherwise stated, all the directives I discuss can be found in this file.
Running Standalone
The single most important decision you can make to ensure good server
performance is also the easiest. One of the first directives in the
httpd.conf file is:

ServerType type

This directive tells the server whether it will be running as a standalone application (with type set to standalone) or as a daemon invoked by inetd (with type set to inetd).
A standalone server is started once. It begins listening on a specific port (usually port 80) for HTTP requests. When a request is received, it is serviced, the results are sent back to the client, and the server resumes listening. The server only terminates when killed by the webmaster or when the system shuts down.
Inetd is a Unix utility program that listens to a range of ports on behalf of a large number of client applications. When a request is received, the application is started to service the request. In the case of httpd, when inetd gets a request on port 80, it starts httpd. Httpd services the request, returns the result, and terminates. When the next request arrives, a new copy of httpd is started.
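For reference, an inetd-managed server is typically wired up with a single line in /etc/inetd.conf. The sketch below shows the general shape of such an entry; the paths and the nobody user are assumptions, and your installation will almost certainly differ:

```
# Hypothetical /etc/inetd.conf entry for an inetd-managed httpd.
# Fields: service  socket-type  protocol  wait-flag  user  server  arguments
http  stream  tcp  nowait  nobody  /usr/local/etc/httpd/httpd  httpd
```

If you find a line like this on your system, you are paying the startup cost described above on every single request.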
You can see where this is leading. On even lightly loaded servers, an inetd-based server will spend a lot of time starting and initializing httpd for each request. This overhead slows the server down and reduces overall server response time. Standalone servers only initialize themselves once, and then linger to service many requests. In the long run, their overhead is significantly reduced.
In general, your server should always run as a standalone server. If your server is running as an inetd-based server, go convert to standalone mode immediately.
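Put in httpd.conf terms, the standalone setup is just a couple of lines. This is a minimal sketch assuming the usual port; the rest of your configuration file stays as it is:

```
# Run as a long-lived daemon rather than under inetd
ServerType standalone
Port 80
```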
Server Pools
The initial version of the NCSA server used a simple servicing model:
for each request received, the server forked a copy of itself to handle
the request. This forked copy serviced a single request and then went
away.
In the same way that standalone servers eliminate the overhead of starting the server for each request, server pools eliminate the overhead of forking a new process when an additional request reaches the server. Instead of creating a new process for each request, the server creates a pool of processes when it starts up. When a request arrives, it is passed to the next available process in the pool. If all the processes are busy, a new process is created.
Server pools first came into general use when Netscape Communications released their commercial web server. The technology was quickly built into the NCSA server and inherited by the Apache server when it spun off from the NCSA server. Although both NCSA and Apache support server pools, they do so in distinctly different ways.
The NCSA Server Pool Model
The NCSA server uses a simple model for its server pool. Upon startup,
the server creates a pool with the minimum number of processes in it.
As requests come in, additional processes are created in the pool to
service multiple simultaneous requests. After an upper limit on the
pool size is reached, additional processes are still created, but they
terminate after servicing just one request. Thus, the size of the pool
never grows beyond the specified upper bound.
You control the pool size with these two directives:
StartServers n
MaxServers n

As you might guess, you set the initial pool size with the StartServers directive and limit the pool size with the MaxServers directive.
By default, the NCSA server will create five server processes when it starts up. It will grow the pool to ten servers if needed. After ten servers, additional servers will be created and terminated as needed.
Setting these numbers is not difficult. Setting StartServers too low is not a big problem since the server will almost immediately create extra processes if needed. Setting the value too high may waste system resources, creating processes that consume process slots and file descriptors but never service any requests.
Setting MaxServers too high poses the same problem. If your server occasionally hits a burst of activity, you could grow your server pool up to the maximum and leave a bunch of idle server processes lying about.
The real danger lies in setting MaxServers too low. If your system consistently requires more processes than MaxServers allows, your server will begin forking and terminating an individual process for each request that occurs beyond the pool limit. This effectively converts your server back into the old-style forking server and needlessly consumes extra system resources.
For moderately loaded servers, the default values are acceptable for these directives. For heavily loaded servers, you should consider bumping MaxServers to 20. The potential waste of resources by idle processes is far less taxing on your system than the overhead of forking additional servers when the load gets high.
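Putting that advice into httpd.conf form, a heavily loaded NCSA server might use something like the following. The numbers are the suggestions above, not universal values; tune them against your own traffic:

```
# NCSA server pool: start with 5 processes, let the pool grow to 20
StartServers 5
MaxServers 20
```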
The Apache Server Pool Model
The Apache server implements a far more elegant server pool whose size
is based upon the current system load instead of fixed upper and lower
limits. This pool is controlled by these directives:
StartServers n
MinSpareServers n
MaxSpareServers n

Like the NCSA server, the Apache server will initially start the number of servers specified by the StartServers directive. From this point on, the server checks the status of the processes in the server pool every few seconds. If the number of idle servers falls below the value set by MinSpareServers, extra servers are created, one per second. If the number of idle servers exceeds the limit set by MaxSpareServers, the extra servers are killed off.
By default, the Apache server starts five servers, ensures that there are always at least five idle servers, and never lets the number of idle servers exceed ten.
This model dynamically creates extra servers as demand grows and removes them as things settle back down. The key parameter shifts from the NCSA's pool size to the extra capacity implied by the minimum number of standby processes. Thus, if your server load ensures that you always have five simultaneous requests, the Apache server will have ten processes running, five for the current requests and five more to handle a sudden burst in activity. When things slow down to where you are only servicing one request at a time, the number of servers drops to six, one for the request and five for the potential additional traffic.
With this model, very little tuning is needed. The only value you may want to reduce is MaxSpareServers. If your system resources are in scarce supply, you can reduce MaxSpareServers to be closer to MinSpareServers. This keeps fewer idle processes around after a burst in server activity.
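For a resource-constrained machine, that tuning might look like this sketch; the values are illustrative, chosen only to keep the idle band narrow:

```
# Apache server pool: start 5, keep at least 5 idle, reap beyond 6 idle
StartServers     5
MinSpareServers  5
MaxSpareServers  6
```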
Killing Old Processes With The Apache Server
The Apache server offers one additional directive worth noting:
MaxRequestsPerChild n

This directive sets the maximum number of requests a child process can service. Once this limit is reached, the process dies and a new process is created by the server. The default value, 0, means that the child process will not have a request limit imposed.
For anyone who has ever dealt with a memory leak or corrupted server process, this directive is a blessing. A single child process that leaks memory with every request can soon consume a significant portion of your swap space, possibly bringing your system to a grinding halt. By having processes terminate as they age, you limit the damage these buggy processes can induce. Of course, in a perfect world the child process has no bugs. In the real world, subtle errors crop up all the time. The easiest way to eliminate those errors is to periodically restart the server, flushing out the problems and starting with a clean slate.
For my server, I run with this directive set to 200. That way, my server processes are constantly recycled, but I don't pay the cost of forking a new process too often.
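In httpd.conf form, that setting is a single line:

```
# Recycle each child after 200 requests to contain slow memory leaks
MaxRequestsPerChild 200
```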
Measuring Success
Unfortunately, it is somewhat difficult to see how well your system is
running as you change these parameters. The easiest and crudest tool
available is ps, which lets you see how many child processes
exist and how much CPU time they have consumed.
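For example, a quick count of httpd children can be pulled from ps with a one-liner like the following. This assumes a BSD-style ps (use ps -ef on a pure System V machine); the [h] trick keeps the awk process itself out of the match:

```shell
# Count running httpd processes; prints "0 httpd processes" if none
ps aux | awk '/[h]ttpd/ { n++ } END { print n+0, "httpd processes" }'
```

Run it a few times under load: a steady count near your configured maximum suggests the pool is sized about right.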
If you have lots of processes with little time accrued, reduce the MaxServers setting (for the NCSA server) or the MinSpareServers and MaxSpareServers settings (for the Apache server). Conversely, if you have a small number of processes with lots of time used, increase these parameters.
The Apache server comes with a utility that analyzes the current state of the server pool and generates real-time statistics, but I've had little luck in getting it to work correctly. You should know that the Apache server uses a scoreboard file to track the server pool status; it is located in /tmp and is named htstatus.??????.
More on all this in a future column, if I can get it to work!
About the author
Chuck Musciano has been running Melmac for close to two years, serving up HTML tips and tricks to hundreds of thousands of visitors each month. He's been a beta-tester and contributor to the NCSA httpd project and speaks regularly on the Internet, World Wide Web, and related topics. His book, HTML: The Definitive Guide, is currently available from O'Reilly and Associates.
URL: http://www.sunworld.com/swol-02-1996/swol-02-webmaster.html