Performance Q & A by Adrian Cockcroft

How much RAM is enough?

Determining the right amount of RAM for your Unix computer requires a little detective work

May 1996

Taking a Web server as an example, the author discusses how programs and Unix processes use memory. He concludes by showing how to size the minimum RAM required to provide a specific level of Web service, and describes the effects of Sun's traditional April 1 prank on www.sun.com. (1,700 words)

Q: I'm an ISP, using a Netra with the Netscape 1.12 server. My monitoring shows that I peak at about 10 to 15 http operations per second, and I've been advised to add RAM. How much RAM does my Web server require? What factors are important? And how do machines with only 32 megabytes benchmark at 50 to 100 operations per second, while my Netra hits the wall at a fraction of that rate?

--Opless in Oshkosh

RAM use falls into three major categories.

  1. Memory fixed in place for a specific purpose.

  2. Memory used by processes.

  3. Memory used to cache files by the file systems.

Fixed memory
The kernel takes up fixed memory. In general it cannot be paged out, and it grows as you add more things for the kernel to manage (more processes, more devices, and more RAM itself). Pages that are waiting to do I/O are also fixed temporarily, and the shared memory segments used by databases are often fixed as well. Processes running as root can use mlock(2) to lock parts of their address space. The kernel also manages the pool of "free memory" that is ready for re-use.
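
If you have the system accounting tools installed, sar gives a rough view of how much memory the kernel's allocator is holding; this is a quick check, not a precise accounting:

% sar -k 5 4

The sml_mem, lg_mem, and ovsz_alloc columns report the bytes held in the kernel memory allocator's small, large, and oversize pools.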

Process memory
Each process has an address space made up of many segments. Each segment can be private to the process or shared with other processes. Part of each segment may be present in memory; those pages form the process's "resident set." Some segments are mapped to files, some to devices, and some (usually the private ones) to swap space. For each process, then, there are the following measures:

  1. SIZE (or SZ) -- the total size of the address space.

  2. RSS -- the resident set size, the amount of RAM the process currently occupies.

  3. The split of the resident set into private and shared pages.

Solaris 2 doesn't keep count of the relative amounts of private and shared memory, although by following the kernel data structures it is possible to build a kernel module that figures it out. The SIZE and RSS values are printed by the BSD form of ps. Kernel processes, such as fsflush, report zero for both.

% /usr/ucb/ps uax

USER       PID %CPU %MEM   SZ  RSS TT       S    START  TIME COMMAND
adrianc    374  5.7  3.2 9012 2528 console  S   Apr 08 12:52 /usr/openwin/bin/X
root       604  3.7  1.1 1052  872 pts/2    O 13:08:01  0:00 /usr/ucb/ps uax
adrianc    498  1.7  6.3 12360 4916 pts/2   S 17:34:43  2:15 /export/framemaker
adrianc    464  0.9  1.3 3468 1000 ??       S 15:50:53  0:04 /usr/openwin/bin/s
root         3  0.4  0.0    0    0 ?        S   Apr 08  3:21 fsflush

Note that the SZ column for the X server (pid 374 above) includes more than 1 megabyte of mapped framebuffer segments, but this memory is not made up of pages of RAM, so it is not included in the RSS. The Creator3D framebuffer found on Ultra systems has a very large mapping that makes the process size appear unusually large (more than 500 megabytes). Use /usr/proc/bin/pmap (Solaris 2.5 and later) to see the segment list.
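
For example, to look at the X server's segment list (374 is simply the pid from the listing above; substitute the pid on your own system):

% /usr/proc/bin/pmap 374

Each line of the output gives a segment's start address, size, permissions, and the file or device it maps, so an oversized framebuffer mapping stands out immediately.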


File memory
UFS and NFS cache recently read and written data in memory. When a process writes a file and exits, the process memory is freed, but the file remains in memory in case it is needed again. All files are treated the same way, whether they hold data, code, or shared libraries. A new feature in Solaris 2.5 is that a page in use by eight or more processes becomes much harder to page out, which tends to keep common shared libraries in memory. Solaris offers no measures that report on file memory use; the pages are attached to inode cache entries, so kernel modules can walk the data structures to figure it out. Unreferenced files will stay in memory until the kernel scans memory to replenish the free list and finds that their pages have not been used for a long time.
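
You can at least watch this reclaiming happen. The sr column of vmstat is the page scanner's scan rate: while it stays at zero there is no memory shortage, and a sustained nonzero rate means the kernel is hunting for idle pages, including cached files, to reclaim:

% vmstat 5

The free column of the same output shows the current size of the free list in kilobytes.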

How a Web server uses memory
While there are many varieties of Web server, Netscape 1.12 is the one most often sold with the Netra. Netscape forks many copies of its httpd server process. Each copy handles a number of requests, one at a time, then exits. The number of requests per process, and the minimum and maximum number of processes, are configurable. When all the processes are busy and another request arrives, Netscape grows the pool from its minimum up to its maximum count, and it records the number of processes in the error log.

% grep growing /opt/ns-home/logs/error


Each Netscape process has an RSS of a little more than 1 megabyte. By walking through the kernel data structures, we found that the private resident memory was about 440K; everything else is shared or non-resident. The memory used by Netscape is therefore about 1 megabyte for the first process, plus 440K for each additional copy. Compared to the typical transfer size of under 8K of data, it is clear that the per-process memory matters far more than the size of the data being served.
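
As a quick estimate based on those figures, the Netscape default maximum of 32 processes works out to a little over 14 megabytes:

% echo "1024 + 440 * (32 - 1)" | bc
14664

That is 14,664K of RAM for the whole pool.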

To service a transfer, the process reads the request, reads the indicated file, and writes the data to the network. TCP buffers some of the outgoing data: when the remaining data to be transmitted is less than 8K (tcp_xmit_hiwat), TCP will take the data and send it, and the process finishes and can handle the next request. Larger transfers keep the process occupied longer.
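
You can inspect tcp_xmit_hiwat with ndd, and change it as root (the 16384 below is purely an illustration, not a tuning recommendation):

% ndd /dev/tcp tcp_xmit_hiwat
8192
# ndd -set /dev/tcp tcp_xmit_hiwat 16384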

So, how many server processes do you need?
This depends upon the round-trip time for the connections. When running on a LAN, or in a benchmark situation, each operation is so quick that every process can handle ten or more requests per second. When serving the Internet directly, each operation includes several round-trip times of a second or more, so several processes are needed to sustain a rate of one operation per second. My experience indicates that about 100 processes will handle peaks of about 30 to 40 operations per second on www.sun.com. Using the Netscape default maximum of 32, you could be limiting your throughput to 10 to 15 operations per second. If Netscape's error log shows you are always at the maximum, and you have enough spare RAM, increase the maximum and you should see higher throughput.
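
To check how close you are running to the limit, count the server processes directly (assuming, as in the measurements above, that the daemons show up as ns-httpd in the process list):

% /usr/ucb/ps uax | grep ns-httpd | grep -v grep | wc -l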

If transfer sizes increase, operations take longer. We saw this recently on www.sun.com when SunSoft released the Java Workshop for free downloading: our average transfer size went from 8K to about 20K overnight! Multi-megabyte transfers over http (I know, it would be better to use ftp) tie up a process for a long time, effectively shrinking the pool of free processes available to handle short transfers.
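
You can track your own average transfer size from the access log. This sketch assumes the common log format, with the byte count in the tenth field, and assumes the access log sits alongside the error log used earlier:

% awk '{ bytes += $10; n++ } END { print bytes / n " bytes per transfer" }' /opt/ns-home/logs/access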

Where does the RAM go?
The latest releases of Solaris 2 provide a kernel statistics structure called system_pages. One way to see the values is shown below, measured on an 80-megabyte SPARCstation 5 with a 4K page size.

% netstat -k | grep pp_kernel
pp_kernel 2198 pagesfree 10782 pageslocked 2252 pagesio 89 pagestotal 19722 

This shows that the kernel uses 2,198 pages (just over 8 megabytes), there are 10,782 unused pages (about 42 megabytes), 2,252 pages are locked in memory (about 9 megabytes), 89 pages are locked for I/O (356K), and there are 19,722 pages in total. The system has 80 megabytes of RAM, but the boot PROM and the initial load of the kernel consume the difference, so only about 77 megabytes show up in pagestotal.
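
Since the counts come out in pages, a one-liner can do the conversion to kilobytes for you (this assumes the 4K page size of this machine; check yours with the pagesize command):

% netstat -k | awk '/pagestotal/ { printf "kernel %dK free %dK locked %dK total %dK\n", $2*4, $4*4, $6*4, $10*4 }'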

The private size of common programs and shared libraries can be determined by walking the kernel tables. The sizes below (sorted by private size) were measured on an Ultra 1, which has an 8K page size. Older systems with 4K pages may use a little less RAM, since sizes are not rounded up to 8K. These measurements are not definitive: they were taken on a system with plenty of RAM, and the values would shrink in a memory shortage.

     Size Resident   Shared  Private  Process                              

    3048k    1160k     648k     512k  /usr/lib/autofs/automountd
    2840k    1128k     644k     484k  /usr/sbin/cron
    2240k     952k     660k     292k  /usr/sbin/inetd -s
    2824k     976k     684k     292k  /usr/sbin/nscd
    2136k     932k     648k     284k  /usr/lib/sendmail -bd -q1h
    1968k     708k     544k     164k  /usr/sbin/rpcbind
    1632k     656k     528k     128k  /etc/init -
    1936k     740k     616k     124k  /usr/sbin/syslogd
    1480k     576k     456k     120k  vi
     232k     116k       4k     112k  sh
    1112k     532k     420k     112k  -csh
    1744k     680k     572k     108k  /usr/lib/nfs/statd
    1624k     648k     560k      88k  /usr/sbin/keyserv
    1456k     652k     564k      88k  rpc.rstatd
    1472k     664k     580k      84k  /usr/lib/saf/ttymon
    1448k     688k     612k      76k  in.rlogind
    1528k     728k     660k      68k  -ksh
    1360k     600k     536k      64k  /usr/lib/saf/listen tcp
    1648k     604k     556k      48k  /usr/lib/nfs/lockd
    2288k     708k     660k      48k  /usr/lib/nfs/mountd
    1776k     624k     576k      48k  /usr/sbin/kerbd
    1392k     628k     580k      48k  /usr/lib/saf/sac -t 300
    1504k     604k     556k      48k  /usr/lib/nfs/nfsd -a 16
     816k     356k     308k      48k  /usr/lib/utmpd
       0k       0k       0k       0k  fsflush
       0k       0k       0k       0k  pageout
       0k       0k       0k       0k  sched

Totaling this list (and you may not run all of them) I get about 3.5 megabytes of private memory used by system processes and daemons. Remember to account for extra copies of any shells. Shared libraries are common to all of these and increase the total by another megabyte or so, plus any window system libraries.
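
With no private/shared breakdown available from ps, the best you can do directly is sum the RSS column, remembering that this overstates the total because shared pages are counted once for every process that maps them:

% /usr/ucb/ps uax | awk 'NR > 1 { rss += $6 } END { print rss "K summed RSS (shared pages counted repeatedly)" }'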

The remaining RAM is used as file system cache and to hold applications. On this machine the dominant application is the set of Netscape ns-httpd daemons, each consuming about 440K of private memory.

How many Netscape daemons should I run if I have 64 megabytes?
Use your own kernel size measurements, since they vary from machine to machine. I assume here that the system runs only the Netscape ns-httpd processes; you need more memory if you also run cgi-bin scripts or a search engine.
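
Here is a sketch of the arithmetic, with illustrative allowances rather than firm numbers: from 64 megabytes, set aside roughly 10 megabytes for the kernel and locked pages, 5 megabytes for the system daemons and their shared libraries, and perhaps 16 megabytes for a useful file system cache. At 440K per additional ns-httpd, the remaining 33 megabytes supports a process count on the order of:

% echo "(64 - 10 - 5 - 16) * 1024 / 440" | bc
76

About 75 processes is well above the default maximum of 32, though still short of the 100-process figure quoted earlier. Measure your own kernel and daemon sizes before settling on a number.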

Too much coffee can make your Web server busy
To wrap up, I'll comment on the effects that the special April Fools' Day home page inflicted upon www.sun.com: the measured throughput doubled the historical average. I've updated the Java-based graph to show April 1 as well as today's data for comparison.
