Sysadmin by Hal Stern

Automatic for the people

Use the automounter to chart a course through rough NFS seas

SunWorld
May  1996

Abstract
Join us in a guided tour of the automounter. We'll discuss the different flavors of configuration files, called maps, and give copious examples of good and bad map construction. We'll dive into a bit of detail on the automounter's implementation and the invocation features that let you fine-tune your map usage. A list of typical pitfalls and problems (and solutions) awaits you at the end. (3,900 words)



Network File System (NFS) configuration information is quite possibly the digital equivalent of a tribble (that's as in Star Trek, not the Sun Vice President who is distinctly analog). Client configuration data -- lists of server names, paths to filesystems on those servers, and the local directories on which they are mounted -- grows nearly constantly as you add new packages, libraries, and users. In your efforts to keep a consistent view of the world on all systems, you can spend quite a bit of time conjugating /etc/vfstab entries that fully describe the set of remote filesystem resources used by each machine.

It would be fairly easy to make up a prototype /etc/vfstab for NFS clients and blast it out to each desktop. Unfortunately, such an approach makes the NFS client configuration complex, and it carries a serious downside risk: increasing the number of NFS mounts increases the probability that a desktop machine is incapacitated when an NFS server crashes or becomes unreachable. Even if each of your NFS servers goes down only about once a month, with 30 or 40 file servers accessed by every desktop you're looking at (on average) roughly one meltdown a day under the "mount everything" approach (35 servers at one failure a month works out to better than one failure per day across the group). What you need is a tool that minimizes the amount of client-side configuration work necessary while mounting the minimal number of filesystems needed at any time.

Automation for the desktop people comes in the form of the automounter. A standard part of Solaris and most other Unix operating systems, the automounter uses centralized configuration information to reduce the client-side administrative load, and it maintains a working set of mount points instead of completing a mount to every NFS server that could possibly be needed by the desktop. As such, the automounter obeys the two rules of clean system administration: it enforces consistency and it does so simply. The automounter takes the simplicity theme one step further by unmounting filesystems that are not in use.

This month, join us in a guided tour of the automounter, starting with an overview of how it manages the client NFS mount environment. We'll discuss the different flavors of configuration files, called maps, and give copious examples of good and bad map construction. We'll dive into a bit of detail on the automounter's implementation and the invocation features that let you fine-tune your map usage. Finally, as has become habit in this space, we'll provide a list of typical pitfalls and problems, with suggested solutions. We're also going to assume you're running Solaris 2.3 or later, since the automounter went through its last major architectural change in that release, but we'll point out differences for older automounters where appropriate.



The name game

NFS clients build on the original Unix concept of a shared filesystem name space by adding access to remote filesystems. All filesystems, local or remote, live under the root directory (/) in a single tree structure; there aren't drive letters or volume names or mini disk packs to worry about. The /etc/vfstab file contains the list of every filesystem mounted to construct this name space, including the root filesystem and other local disks. For a local disk, the /etc/vfstab entry contains a disk device name and the path name on which it is to be mounted; for an NFS mount the /etc/vfstab line contains a server and remote path name pair:

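#device              device               mount   FS    fsck  mount    mount
#to mount            to fsck              point   type  pass  at boot  options
#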
/dev/dsk/c0t3d0s4    /dev/rdsk/c0t3d0s4   /usr    ufs   1     no       -
bigboy:/export/home  -                    /home   nfs   -     no       -

Obviously, /etc/vfstab can become a major time sink when you need to maintain several hundred machines that access hundreds or thousands of network-accessible filesystems. NFS lets you export part of a filesystem to a client, so even if you have a few dozen servers, you may have to worry about several hundred client mount points. Remember rule #2: simplicity wins when you only show clients the minimal data set. The corollary to rule #2 is that your most vocal client will demand access to a filesystem subset that is not mounted when your work queue is the longest.

The automounter attempts to solve these problems by standardizing and streamlining the client-side mount process. The automounter consists of two parts: a user-level daemon that masquerades as a bona fide NFS server, and a kernel-resident module called autofs that actually completes the mount operations. Prior to Solaris 2.3, there was no kernel component, and the user daemon did all of the work, resulting in some pathname ugliness that made automounted filesystems a bit less pleasant to use. We'll revisit the automounter internals after taking a detour to look at the mechanics of mounting a filesystem and how the automounter inserts itself into that process.

The key trick employed by the automounter is impersonating an NFS server. The user-level automounter process, automountd (called simply automount before Solaris 2.3), looks like a full-featured NFS server that is running on the local machine. Local NFS client code sends NFS RPC requests to this pseudo-server, where they are analyzed to determine what filesystem is needed to satisfy them. There isn't any real NFS server code in the automounter process; it knows just enough about the NFS protocol to intercept the first reference to a filesystem. Once the automounter has mounted the necessary filesystem on the local machine, all future references go directly to the NFS server and bypass the automounter.

If an automounted filesystem is not used for five minutes, the automounter tries to unmount it. The unmount request isn't always successful: even when there is no ongoing activity, the client may still hold a file open on the mounted filesystem or have a current working directory there. But in most cases, the unmount removes an idle filesystem, typically after the client is done using the library or tools on that volume. Pruning idle mounts reduces the probability that a client gets hung when one of those servers goes down. Using the automounter encourages mount point proliferation, but you want as few of them active as possible at any one time. Consider the following scenario: a user accesses a copy of Frame for an hour on Monday, mounting the Frame distribution on her local machine. On Thursday, the publication tools NFS server, containing Frame, crashes, taking your user's desktop into a zombie existence filled with "NFS server not responding" messages. Your user is livid because she hasn't touched that server in three days. With the automounter, it's likely the Frame distribution would have been unmounted sometime late on Monday, making the crash of the NFS server immaterial to the user's desktop.

To see how the automounter knows when to get involved, we need to delve deeper into dynamic mount information maintained by the kernel.

Sermon on the mount

NFS uses an opaque pointer to a file called a file handle (see "A file by any other name" September 1995 SunWorld Online), so one of the first references to a file system has to be an NFS request that discovers a file handle or asks for statistics on a known handle such as the root of the mount point. Put another way, the first NFS request made on a newly mounted filesystem is going to be a getattr, lookup, or readdir. The automounter intercepts these requests, completes the mount, and then passes the request on to the remote server. All future requests reference file handles and information derived from the remote server, so they no longer need to bother with the automounter. If you think the hand-off between client NFS code, user-level automounter daemon, mount completion, and request forwarding sounds painfully slow, it is. One of the main reasons the automounter's performance is tolerable is that it only gets involved in first-time requests and mounts, not every NFS request.

When any filesystem is mounted, an entry is created in /etc/mnttab to indicate the mapping between a physical resource, such as a disk or a remote server specification, and the local filesystem mount point:

bigboy:/export/tools		/tools	nfs	rw,intr,nosuid,dev=2200025	827184583
auto_home			/home	autofs	ignore,indirect,rw,intr,nosuid,dev=2180002	826900214

The last entry in the example is one put into /etc/mnttab by the automounter. It advertises the parts of the name space it's managing by making entries in /etc/mnttab that resemble real-life NFS mounts, with a few minor differences: the resource field holds the map name (here auto_home) rather than a server:pathname pair, the filesystem type is autofs rather than nfs, and the options include the ignore flag plus an indication of whether the map is direct or indirect.

Let's walk through a typical NFS request, and then one in which the automounter does its thing. From the first sample /etc/mnttab line, we know that /tools is already mounted from server bigboy via NFS. So an attempt to open /tools/frame/bin/maker begins with the local client looking up frame in /tools. Seeing that /tools is a mount point, the kernel bundles up the lookup operation in an NFS request that is sent to server bigboy. If the directory exists, a file handle is returned, and the lookup process continues with the next pathname component. For more details, see "A file by any other name," September 1995 SunWorld Online.

Now look at the second /etc/mnttab entry, for the automounted /home directory. This is an indirect map, meaning that the mount points occur in the /home directory as opposed to on it. When the first reference is made to any directory in /home, such as /home/stern/pubs, the kernel again starts with a lookup of the first component. In this case, it's finding stern in /home. Since /home is a mount point, the request is sent to the server specification in /etc/mnttab, namely, the autofs filesystem. Again, in earlier versions of Solaris, the request would be sent directly to the automounter daemon, using the port number listed in the /etc/mnttab file.

Whether it's via autofs or a direct call, the user-level automounter daemon leaps into action at this point, deciphering the NFS lookup RPC. To find the filesystem to be mounted on /home/stern, the automounter consults the map file for /home, matches the appropriate configuration line, and completes the mount of sunrise:/export/stern on /home/stern. The NFS lookup RPC is passed on to sunrise, and the new mount is reflected in /etc/mnttab so that future requests and references go directly to the NFS server. Note that other first-time references to subdirectories of /home will again tickle the automounter; if the user on that machine wants to look at /home/sue/archives, then the automounter will intercept the lookup of sue in /home and complete another NFS mount. If the user leaves /home/sue inactive for five minutes, the automounter unmounts it, removing the entry from /etc/mnttab.
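Once that first reference completes, /etc/mnttab on the client picks up an ordinary NFS entry for the new mount, so you can watch the automounter's working set grow and shrink just by looking at the mount table. The options and timestamp below are illustrative, not gospel:

sunrise:/export/stern		/home/stern	nfs	rw,intr,dev=2200026	827184601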

It's easy to see how the automounter enforces a "working set" methodology on NFS mounts -- when a filesystem is needed, it gets mounted, and when it's quiescent, it's unmounted. It sounds simple only because that description overlooks the difficult system administration task involved in getting the automounter running well: creating the map entries that describe your filesystem name space.

Network cartography

Maps come in two basic flavors: direct and indirect. A direct map describes a set of unrelated mounts such as /usr/share/man and /var/mail. The automounter manages the directories on which the individual mounts are placed, making a direct map the simple equivalent of a series of /etc/vfstab lines. A direct map has the following format:

/var/mail       mailhub:/var/mail
/usr/share/man  -ro shareserver:/usr/share/man
/usr/local/bin  -ro shareserver:/usr/local/bin

Note that each line starts with an absolute pathname, and that any necessary NFS mount options are included mid-line, as they would be in /etc/vfstab. Direct maps are useful for the handful of mounts that you want on every desktop, but have no regularity in their naming or location.
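One operational note: on Solaris 2.3 and later, changes to a direct map generally aren't noticed until the automount command is rerun to reprocess the maps (it normally runs once at boot from the autofs start-up script). A minimal sketch, assuming you've just edited the direct map on this client:

automount -v

The -v flag makes the command report which autofs mount points it added or removed.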

An indirect map is more flexible and the more common automounter application. The entries in an indirect map get mounted under a single directory, such as the stern and sue filesystems mounted under /home in the examples above. When dealing with an indirect map, the automounter manages a directory of mount points instead of a set of directories that are mount points. An indirect map for /home looks like this:

stern		sunrise:/export/stern
sue		divi:/export/sue
frank		monmouth:/export/home/frank
darren		satire:/local/darren

The first entry on each line is known as the key. Note that there is no regularity required for the server:path components. The automounter will implement this map as a /home directory, with four subdirectories named after the four keys. If a user attempts to access a subdirectory not in the map, say, /home/butch, the standard "No such file or directory" error will appear.
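One behavioral note about indirect maps on automounters of this vintage (directory browsing came along in later Solaris releases): listing the parent directory shows only the entries that are currently mounted, so an apparently sparse /home doesn't mean the map is broken. A hypothetical session, using the map above:

% ls /home
stern
% cd /home/sue
% ls /home
stern  sue

Referencing /home/sue by name triggered the mount, and only then does sue show up in the directory listing.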

How does the automounter find these maps? By default, the automounter looks for a special map called /etc/auto_master that contains a directory of direct and indirect maps. The master map doesn't follow the naming conventions for either direct or indirect maps; instead, it lists the automounter-managed mount point and the map name, followed by any NFS mount options:

/home		/etc/auto_home
/tools		/etc/auto_tools	    -ro
/-		/etc/auto_direct

Since there's no common parent directory for the mount points in a direct map, it uses the placeholder /- in the master file. Looking at the master file, you may think you've traded one set of headaches for another -- instead of managing /etc/vfstab on each client, now you have to deal with several automounter maps. Don't panic. Each of the automounter maps can be turned into an NIS map or an NIS+ table, allowing centralized administration of all common mount information. To indicate a map comes from NIS or NIS+, drop the /etc prefix on the map name. For example, the following auto_master fragment picks up all three automounter maps from NIS:

/home	auto_home
/tools	auto_tools
/-	auto_direct

One warning about creating indirect maps: You can't merge entries from more than one map under a single parent directory. If you want all of your users from all home directory servers mounted under /home, you need to create a single auto_home map that describes all of the servers and mount points. You can't have one per server, because each indirect map is connected to a unique automounter-managed directory.
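As a reminder of the NIS mechanics (this assumes your NIS master's Makefile already knows how to build the automounter maps, which may require adding targets for site-specific maps like auto_tools): push a change by rebuilding on the NIS master, then verify from any client that the automounter will see what you expect:

cd /var/yp && make

ypcat -k auto_home

The -k flag to ypcat prints the keys along with the values, which is exactly the two-column layout of an indirect map.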

A wild hierarchy

There are a few shortcuts and tricks that let you create concise and powerful automounter maps. The first is using wildcard keys and default entries; the second is using a hierarchical automounter map to construct a mount tree from several unrelated servers. Default entries are ideal when you have a high degree of regularity in your directory naming but need to include a few irregular entries as well. Consider the case where most of your users are on server homebase, under /export/username, but you have a few users from another group that use suitcase as their server. Here is a shortened auto_home map for this scenario:

fred	suitcase:/export/fred
nancy	suitcase:/export/nancy
*	homebase:/export/&

Using a * for a key means "match any entry," and the & that appears in the server specification inserts the value of the key used. If you reference /home/thurlow through this map, the wildcard key takes on the value thurlow, and the mount is attempted from homebase:/export/thurlow. When using default lines, you can add new users without having to modify a single automounter map, as long as the new users fit into the existing default naming scheme.

What do you do when you want to create a common directory with a direct map entry but you need to assemble pieces from several servers? You can't merge multiple map files together under one mount point, but you can specify multiple components for a single directory tree using a feature known as a hierarchical mount. The hierarchical map entry is a variation of a direct map entry that continues across several lines:

/usr/local	\
	/	toolbox:/export/local
	/bin	toolbox:/export/local/sun4/bin
	/man	docubox:/work/man
	/lib	buildserv:/projects/lib 

The root of this subtree is mounted from toolbox:/export/local. Any directories in that filesystem "show through" after the other three components are mounted on top of it. The bin subdirectory is picked up from a sun4-specific spot, while the man and lib subdirectories come from other servers. The net effect of the hierarchical map is that /usr/local has a complete set of subdirectories, assembled at mount time from disparate spots on the network.
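A point worth knowing about hierarchical entries: the automounter treats the whole entry as a unit, so the first reference to anything under /usr/local should bring in all four pieces together, and the idle timer applies to the hierarchy as a whole. Using the (made-up) servers from the example map, you can watch this happen by touching any path under the tree and then checking the mount table:

ls /usr/local/bin

grep /usr/local /etc/mnttab

The grep should show the toolbox, docubox, and buildserv mounts all appearing at once.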

Now for another twist: the sun4-specific directory is only useful when you're running on a sun4 desktop. What do you do when you want to use the same automounter map on multiple machines with varying architecture-specific subdirectories? The easiest solution with the automounter is to include a variable name in the server specification:

/usr/local	\
	/	toolbox:/export/local
	/bin	toolbox:/export/local/$ARCH/bin
	/man	docubox:/work/man
	/lib	buildserv:/projects/lib 

The value of ARCH is substituted into the mount request. By default, the automounter defines the variables OS and ARCH, and you can define additional variables of your own on the command line.
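For example (a hedged sketch: -D is the variable-definition switch on Sun's automounter, while SITE, warehouse, and the map line are all made up for illustration), you could add a definition to the daemon's invocation in the autofs start-up script and then use it in a map:

automountd -D SITE=east

data	warehouse:/export/$SITE/data

A client started with SITE=east would mount warehouse:/export/east/data the first time data is referenced under that map's directory.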

Tales from topographic oceans: Sorting out the options and problems

After that brief digital cartographic lesson, it's time to examine the broader problems of filesystem topology: how everything is laid out on a single surface. The automounter's user-level daemon is started at boot time by /etc/init.d/autofs, or S74autofs if you look in /etc/rc2.d. This puts the automounter near the end of the start-up process, after the NFS client-side services and before the printer subsystem. Without any options, the daemon looks for the auto_master map in /etc or in NIS/NIS+. There are a number of command-line options; the most interesting are -t, which changes how long a filesystem may sit idle before the automounter tries to unmount it (the five-minute default described earlier), -v, which makes the automounter report its mounts and unmounts verbosely, and -D name=value, which defines map variables like the ARCH example in the previous section.
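Because the daemon is managed by that init script, the cleanest way to restart it after changing the options it is started with is to use the script itself. A minimal sketch (do this when automounted filesystems are quiet, since stopping autofs with mounts in use can cause grief):

/etc/init.d/autofs stop
/etc/init.d/autofs start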

As always, there are a number of pitfalls of which to be wary. Before Solaris 2.3, the automounter lacked the in-kernel autofs component. While autofs completes the mounts on the desired mount points, the older automounter had to forge the name space entries from user-land. The automounter did this by creating a symbolic link from the desired mount point to a staging area, typically /tmp_mnt. All of the common symbolic link problems plagued this approach -- users could "fall over" the edge of the linked entries, tools that relied on the value of the current working directory returned something prefixed with /tmp_mnt instead of the pathname that would hit the automounter, and users felt a non-negligible performance impact from the frequent symbolic link resolution. The kernel resident autofs keeps things neat.
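To make the /tmp_mnt problem concrete, here is the sort of session users of the pre-autofs automounter learned to recognize (the home directory is the one from the earlier examples):

% cd /home/stern
% pwd
/tmp_mnt/home/stern

Under autofs, pwd returns /home/stern, as you'd expect.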

Here are some other warnings and bits of war history:

The automounter represents one path to reduced client desktop administration. System managers hesitated to use it when it was first introduced nearly eight years ago, due to quality issues, Sun-specific behavior, and performance problems. Today, the automounter runs on nearly every Unix platform, and its value in reducing the cost of desktop management is palpable. Next month we'll look at some more complex automounter problems, including dealing with hosts that sit on multiple networks, bindings to replicated servers, and new symptoms of old NFS problems. Until then, start thinking about the automounter as a way to put your NFS clients on automatic pilot.


