A third in fourths: An NFS potpourri
Another pass through automounter and NFS
We're not quite ready to let go of the automounter yet. Even two months' worth of automounter secrets doesn't completely cover it. So in this third installment, we examine sticky automounter situations and repair tactics and then discuss NFS authentication and Sun's WebNFS. We also share some methods for auto-sharing and automounting removable media such as CD-ROMs. (3,300 words)
We've gently eased into the summer months (or
winter doldrums for our Southern readers) with a two-month discussion
of the automounter and its configuration. By now we hope that you can
build maps, install and customize them to your heart's content, and
perhaps you've justified the need to change the way your site manages its NFS mounts.
In order to enable you with powers to perform the more spiritual acts of system management -- those that actually elicit kind words from impressed users -- we'll take this third pass through the automounter and NFS. We'll start by getting stopped, looking at the various ways the automounter can get stuck, and what you can do to repair the situations. From automounters that don't work we'll move on to those that work too well, giving root-enabled users access to home directories and mail spools of unsuspecting users. We'll introduce NFS authentication to get you started on the path to a solution and take a peek at WebNFS, Sun's filesystem for the Internet. Our final section deals with auto-sharing and automounting removable media like CD-ROMs. Armed with a four-corner offensive arsenal of expert tricks, you should be prepared for anything the NFS-ready masses throw your way.
Tracing the mystery: Debugging mount attempts
Any system with hidden moving parts is bound to break. Create some
complex automounter maps and you're likely to find unexpected behavior
or get unintended filesystem mounts showing up in less than desirable
places. The automounter has two debugging flags: -T and -v, enabling request
tracing and verbose mount requests, respectively. When the -v flag is supplied
on the automountd command line, each request is sent to syslog. With a single
-T on the command line, you'll get significant information about the inner
workings of the automounter:
MOUNT REQUEST: name=/var/mail map=auto.direct opts=ro,intr,nosuid path=/var/mail
PUSH /etc/auto.direct
POP /etc/auto.direct
mapname = auto.direct, key = /var/mail
(nfs,nfs) / -rw,intr,nosuid,actimeo=0 sunrise:/var/mail
nfsmount: sunrise:/var/mail /var/mail rw,intr,nosuid,actimeo=0
ping sunrise OK
mount sunrise:/var/mail /var/mail (rw,intr,nosuid,actimeo=0)
mount sunrise:/var/mail OK
MOUNT REPLY : status=0
Supply a double trace flag (-T -T) and you get detail targeted at those with source code.
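If you do go the command-line route, the invocation is just the daemon plus the debugging flags. A minimal sketch, assuming a Solaris 2.5-style autofs installation (the daemon's path varies between releases, and the process id comes from the ps output):

# ps -ef | grep automountd
# kill <pid-of-automountd>
# /usr/lib/autofs/automountd -v -T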
Killing the automounter and restarting it with a plethora of flags isn't conducive to rapid problem solving or user satisfaction. If you have a variety of volumes mounted, it may be hard to kill the automounter gracefully and have it start up again without side effects. Fortunately, there's a back door into setting the debug options. When the automounter sees a lookup request for a file name starting with an equal sign (=), it parses the name to see if it matches one of the following settings:
=v   Toggle -v on/off
=0   Turn off all tracing
=1   Set -T (simple trace)
=2   Set -T -T (advanced trace)
This trick only works for indirect mount points, where the
automounter would match the file name component to a map key. For
example, turn on verbose mount request logging and simple tracing by
performing two "ls" operations on the appropriate files in /net (a default indirect map):
# ls /net/=v
# ls /net/=1
Only the superuser can toggle the debugging flags. If you're using /home as an indirect automounter mount point, you could just as easily have done:
# ls /home/=v /home/=1
What kind of problems are you likely to uncover? One of the more typical misbehaviors stems from the * wildcard key being substituted into the server specification, producing an illegal server name or an invalid server path name. For example, suppose you use a default entry in the auto.home map like:
* &:/export/home
If a user accidentally uses /home/fred instead of /home/bigboy/fred, an attempt is made to mount fred:/export/home. You may have a machine with that name, producing unexpected mount results, or the automounter may fail trying to resolve the host name.
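A quick sketch of the substitution, with bigboy as the hypothetical home-directory server:

ls /home/bigboy   -->   mount bigboy:/export/home    (what you intended)
ls /home/fred     -->   mount fred:/export/home      (fred is a user name, not a host)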
If you see messages complaining "Can't remount," check for overlapping map entries. You may be mounting one filesystem on top of part of another, making it impossible to unmount the underlying directory after it has reached quiescence. Check hierarchical maps and those with variables or wildcards carefully to make sure the name spaces of each map remain independent. This error is also common with subdirectory mounts (see SunWorld Online's May SysAdmin column) where the actual mount point changes depending upon the order in which the subdirectories are accessed.
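As a hypothetical illustration, a direct map fragment like the following overlaps itself; the second entry mounts inside the first, so the automounter can never cleanly unmount /src while /src/tools is in use:

/src          bigboy:/export/src
/src/tools    sunrise:/export/tools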
Auto pileups: Unsticking the automounter
By far the most common problem is when the automounter gets stuck
waiting for a server that has crashed, or when the daemon is unable to
resolve a mount point due to name server, network loading, or map
syntax problems. The automounter emulates an NFS server, so when it
gets stuck or stops processing requests, the user-visible symptoms are
the same as for a crashed NFS server: "server not responding" messages
in the console window and a general lockup of processes accessing the
failed server. Instead of seeing a server name, however, you'll see
the process id of the automounter daemon:
pid203@/home not responding, still trying
The automounter is single-threaded, so it only processes one mount request at a time. Subsequent requests for the automounter's services are simply ignored until the current mount operation completes. If the mount in progress stalls, other processes talking to the automounter have their requests dropped, just as an overloaded server drops incoming NFS RPC packets.
From the point of view of the process that tickled the automounter, an NFS server (the one running on the local machine) has crashed, so the standard NFS warning is displayed. It may be having trouble talking to a server that crashed, or it may be encountering NIS/DNS problems that lead to high latency in host name resolution, routing problems that prevent mount requests from reaching the server, or other variations of lossy networks.
You can immediately tell what map is causing the problem. But if you want to see the key in the map that is leading to the hang, you'll need to turn on debugging before you coax the automounter into a compromising position. It's possible that the automounter has gotten wedged because it can't get enough CPU to process the map files and send off an NFS mount RPC. Remember that the automounter daemon is a user-level process, although it spends most of its life making system calls. Be sure your system isn't suffering from some runaway real-time processes or sources of high-level interrupts, like serial line input, that could prevent a user-level process from completing a series of system calls. In most cases, however, the automounter pileups are caused by an unresponsive server or name service problems that prevent the daemon from converting map entries into valid server IP address and path name specifications.
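A quick, hedged sanity check for the starved-daemon case, using stock Solaris commands:

# ps -ecl | grep RT
# ps -ef | grep automountd

The first command lists processes running in the real-time scheduling class; the second confirms that the daemon is still alive and shows how much CPU time it has managed to accumulate.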
Early versions of the automounter could be lulled to sleep faster
than some infant children. In response to bugs, and the need for
automounter functionality on non-Sun platforms, Jan-Simon Pendry wrote the publicly
available amd
. A code archive is available for ftp from
ftp.acl.lanl.gov:/pub/amd,
and an
archive of amd
notes, questions and bug fixes is available at the University
of Maryland. One of the nicer features of amd is its keep-alive mount points:
amd pings the currently mounted server from a replicated set until it finds
that it no longer gets a response. When the server ostensibly has died, amd
renames the old mount point so that future references cause a new mount
(of another server) to be completed. amd
tries its best
to never block a user-level process.
Does this make amd
better than the Solaris
automounter? The keep-alive feature helps when you need to start a new
series of processes that rely on a dead mount point. The new requests
cause new mounts to be completed on the new staging area, while the old
mount point (and processes waiting on it) spin quietly in the
background. Neither amd
nor the automounter help you if
you need to recover processes waiting on a crashed server. Those
processes go into disk wait and can't be cleaned up easily. To get an
inventory of processes using a particular mount point, use the
lsof
utility to locate processes with open files on the automounted directory
hierarchy (see SunWorld Online's September 1995 SysAdmin column).
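A minimal sketch of that inventory, assuming /home/bigboy is the wedged mount point; fuser, which ships with Solaris, gives a terser answer if you don't have lsof handy:

# lsof /home/bigboy
# fuser -c /home/bigboy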
Averting the evil eyes: Using stronger authentication
Sometimes the automounter works too well. A user has root access on
her own machine, so she creates another login (for you) on her machine
with a trivial password. A quick "su
" to root, and then
an "su fred
", and voila! -- she's now browsing your home
directory, reading your mail and generally enjoying the privileges of
membership in your e-life. How do you prevent users from accessing
others' files when they can basically do what they want as root? Gain
a stronger measure of control using Secure NFS, Sun's layering of
strong authentication on top of the NFS V2 and V3 protocols.
Secure NFS uses Secure RPC, the same mechanism used by NIS+ to authenticate client requests. Secure RPC uses a verifier attached to the credentials for each RPC request. The verifier includes the user's credentials and a timestamp to prevent request replays. The verifier is encrypted with a session key unique to each user-host pair. How do the NFS server and user's client determine the session key? A large random value is chosen as the key and is passed from the client to the server using public key encryption. The server and user keys are maintained in an NIS or NIS+ table. The public key is left in plaintext, and the private key is encrypted using the user's (or root's in the case of a server) login password.
To complete the key exchange, the user needs to decrypt her private
key using her login password. If the correct login password isn't
provided, the secret key can't be decrypted, and the user cannot be
authenticated to the server. Nosy users that create duplicate, fake
logins can't gain access to the spoofed users' NFS-mounted files
because they can't decrypt the user's private key. See the man pages
for publickey
, newkey
, and chkey
for more information on generating the public key pairs, building the
NIS and NIS+ tables, and setting up server keys.
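A rough sketch of the sequence, with fred and sunrise as hypothetical user and host names. As root on the NIS master, create key pairs for the user and for the server's root; the user then runs chkey to encrypt her private key under her login password, and keylogin whenever a session needs her key but no password was typed at login:

# newkey -u fred
# newkey -h sunrise
% chkey
% keylogin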
Turn on Secure NFS using the secure
mount option in
/etc/vfstab or in the automounter map and by sharing (or
exporting) the filesystem with the secure option as well. On the
client side, the secure flag tells the kernel RPC code that it needs to
supply encrypted verifiers and credentials with each request. On the
server side, the option ensures that credentials are verified before
allowing access to the file. User credentials that have no verifier or
can't be decrypted properly are treated as null credentials, giving the
user the same permissions as nobody
.
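A minimal sketch of both halves, with hypothetical host and path names (sunrise exports the home directories):

# in /etc/dfs/dfstab on sunrise:
share -F nfs -o rw,secure /export/home

# in /etc/vfstab on a client:
sunrise:/export/home  -  /home/sunrise  nfs  -  yes  rw,intr,secure

# or, equivalently, as a wildcard entry in the auto.home map:
*    -rw,intr,secure    &:/export/home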
Here are some other warnings and advisories to help you ward off evil prowling eyes on your network:
Stronger authentication is also available through Kerberos, selected with the
kerb mount and share option. Solaris includes a Kerberos client component, but
there's no key and ticket server provided in the default distribution.
Kerberos-authenticated NFS is useful if you're already using Kerberos for other
services like DCE; otherwise you may find Secure NFS easier to install and
maintain. For more information on Kerberos, visit its home at MIT, or see it
described formally in RFC 1510 and a FAQ.
If you rely on hosts.equiv or .rhosts files to allow transparent
(password-free) login, you'll need to have users execute keylogin to provide
public key information to the key service daemon on the remote host.
Secure NFS authenticates requests, but it doesn't encrypt the file data in
them. NFS traffic remains visible to snoop and friends, so data can be exposed
indirectly while the targeted user is accessing his or her files using NFS.
Secure NFS is only part of a security strategy, not a panacea. It
slows down the most obvious su
-to-someone-important sneak
through, but it doesn't prevent more onerous systematic attacks. NFS
was designed to run on a local area network, with some measure of
internal security -- that is, you have some level of trust built up for
the hosts and users on that network. When you make NFS filesystems
available to unknown hosts or users, you need to think carefully about
the value of the data you're exposing or opening up for damage.
Why then would anyone use NFS over the Internet? The TCP/IP protocol for Web page exchange, HTTP, is fairly inefficient at moving multiple files, opening a new TCP connection for each transfer. It's also hard to browse through directories of files with a protocol designed for pure file transfer. There are times when you'd like to bring together the transparent access capabilities of HTTP with the file transfer and browsing efficiency of a distributed filesystem, which is what Sun has done with WebNFS.
Take a quick glance at the WebNFS information on Sun's Web server.
WebNFS provides a general purpose file transfer mechanism for the Internet. It differs from the generic NFS protocol in two subtle ways. First, the mount protocol is eliminated, so only NFS requests pass between client and server. To get an initial file handle (needed for all lookup, directory reading and file operations), the client sends a request with a zero-length file handle, which the WebNFS server interprets as a request for the handle to be filled in. Normally, the NFS mount protocol returns the file handle of the filesystem's top-level exported (shared) directory to the client. The second difference is that clients can ask for multiple pathname components to be resolved in a single call, instead of one at a time. This multiple component lookup reduces the number of NFS RPC calls needed to start moving data.
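A conceptual sketch of the difference for a client fetching /export/docs/index.html (the names are made up, and the lines below are protocol operations, not commands):

Conventional NFS:
    MOUNT  "/export"                 -> fh0    (separate mount protocol)
    LOOKUP fh0, "docs"               -> fh1
    LOOKUP fh1, "index.html"         -> fh2
    READ   fh2 ...

WebNFS:
    LOOKUP public-fh, "export/docs/index.html"  -> fh2
    READ   fh2 ...

Here public-fh stands for the zero-length file handle described above.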
WebNFS isn't restricted to making data available to the public.
Running NFS through a firewall -- internal or external -- is difficult
because the mount protocol uses UDP to reach the portmapper. You need
to let through UDP packets to the portmapper, UDP traffic to the
rpc.mountd
daemon, and UDP or TCP requests to the NFS
server port. Creating a firewall configuration to pass the required
packets through may leave you with a significantly weaker perimeter.
WebNFS eliminates the mount protocol and can restrict traffic to NFS
over TCP, letting you connect clients and servers through a firewall
with only minimal port exposures.
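The resulting firewall policy can be summarized in a few pseudo-rules (not the syntax of any particular filtering product):

allow  tcp  client-net -> nfs-server  port 2049    # NFS over TCP
deny   any  any        -> nfs-server  port 111     # portmapper no longer needed
# rpc.mountd registers on a variable port via the portmapper, which is
# exactly why conventional NFS is so awkward to firewall; with WebNFS it
# never needs to be reachable from outside.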
Is WebNFS a good idea? Again, it's not a cure-all for data sharing
over insecure networks, but it's an improvement over existing
protocols. The October 11, 1995 edition of the
Sharing without caring: Automating access to removable media
So far we've looked at ways of improving transparent, secure access to
filesystems, without worrying about whether the filesystems are there
or not. What happens when you want to have a client automount a
CD-ROM? You need to make the volume available for sharing after it's
been inserted and mounted. The Solaris volume manager, vold, provides the
foundation for making CD-ROMs and NFS work together.
The volume manager uses /etc/vold.conf to determine what
devices it manages and where to mount them. We'll look at the volume
manager and its configuration in more detail in a coming month.
Sitting underneath vold
is the real workhorse -- the
removable media mounting tool rmmount
. It uses its own
configuration file, /etc/rmmount.conf, to guide it through
media changes such as inserts and ejects. Add the following line to
/etc/rmmount.conf to have all CD-ROM devices automatically
shared as soon as they are mounted:
share cdrom*
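The share line passes everything after the device name along as share options, so you can tighten access as well; a hedged example restricting the discs to read-only access for a hypothetical netgroup:

share cdrom* -o ro=engineering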
Have the clients mount the server CD-ROMs using the server:/cdrom path specification, and you'll be able to automatically access each server's CD-ROM as it is loaded. A typical automounter map, auto_cdrom, for a network of CD-ROM drives looks like this:
* &:/cdrom
The auto_master entry for this indirect map is:
/cds auto_cdrom
You can't drop the network of CD-ROMs onto /cdrom on the clients, or you'll collide with the local volume manager. The maps above mount server-side CD-ROMs on /cds/fred, /cds/bigboy, and so on.
As usual, there are things to keep in mind:
If the automounter mounts the CD-ROM device directly, rather than reaching it
over NFS, add the "fstype=hsfs" flag to the automounter map (see the sketch
after this list). The automounter assumes it's mounting a Unix filesystem (UFS)
unless you tell it otherwise.
CD-ROM drives are slower than disks, so consider stretching the NFS timeout on
these mounts with an option such as "timeo=14".
Having gone through most variations on the NFS and automounter theme, you should be in a position to make nearly any kind of filesystem available just about anywhere. Doing the difficult under adverse conditions is de rigueur for system management. It also frees time for more important things -- like the USENIX Security Symposium. When you have your users sliding from volume to volume without your involvement, you can make your own transparent access to events much more entertaining.