
Easing administrative duties with CacheFS and AutoClient

How to manage client workstations remotely while delivering good performance

By Brian Wong

SunWorld
January  1996

Abstract
Want to centrally administer your site's workstations without the performance hit inherent in diskless clients? The Cache File System may be what you need. CacheFS improves access to slow or heavily used file systems by making use of an on-disk cache; it is particularly useful for reducing network and NFS server loads. On top of CacheFS, Solstice AutoClient tries to combine the easier management of diskless workstations with the performance of standalone configurations. Here's an in-depth look at how CacheFS and AutoClient work. (1,800 words)


Administrators like the idea of centrally administering their client workstations. Unfortunately, the physical resources of the client workstation (such as the disk, framebuffer, and the like) are managed by the local operating system and are unique to each client. Centralizing the management of physical client resources on a server means somehow transferring those resources away from the client while still retaining localized usage.

In the past, administrators employed the diskless client to achieve precisely this division of labor. The client's administrative files (along with all of the client's other files) resided on an NFS server, and the client made use of them by mounting them via traditional NFS methods. Unfortunately, the completely diskless model also placed the client's swap space on the server. While it's certainly easier to administer diskless clients than traditional standalone clients, the location of so many critical resources across the network results in substantial NFS server load and network congestion. Thus, the diskless model proved unacceptable in most circumstances.

The performance advantage of inexpensive local disks has made the diskless model obsolete, but the problem of administration remains. Another approach to this architectural problem involves placing the heavily used resources such as swap space and frequently accessed system files on the local disk, yet permitting centralized (remote) administration. Solstice AutoClient uses the Cache File System (CacheFS) in Solaris to implement this improved model.

Design
AutoClient mimics the basic diskless model by storing the authoritative copies of the operating system and all of its administrative resources on a centralized file server. In fact, the client actually boots as a diskless client, downloading the kernel in the traditional fashion. These attributes facilitate centralized control. CacheFS, used to store consistent local copies of the NFS file system image on an inexpensive local disk, shifts part of the server's typical burden to the local machine. It arranges to obtain required data via NFS from the server, stores it in a local disk cache, and keeps it consistent with the server image. By caching the files on its local disk, the client achieves performance comparable to that of a standalone workstation.

The AutoClient software, when installed on the desktop machine, recognizes that the client must populate a CacheFS cache during the system initialization phase and causes the crucial root and /usr file systems to be cached. Once populated, the client becomes essentially independent of its boot server, yet the diskless nature of the boot process and the centralized location of the authoritative administrative files mean that the client workstation is literally a field-replaceable unit. In the event of a failure, a new system can replace the failed unit -- the administrator need only update the client's network Media Access Control (MAC) address registered on the server to match the replacement system.



CacheFS
CacheFS is a buffering mechanism placed in front of a traditional file system, introducing the notion of a front file system and a back file system. The back file system serves as the authoritative source of the data, while the front file system acts as a specially managed cache. Typical back file systems include NFS and HSFS (the High Sierra CD-ROM file system), but any other type of file system can also be cached, since the back file system is completely unaware that it is being "fronted." The Unix File System (UFS) serves as the front file system, since it is the only file system type that manages local disks.

The CacheFS cache is a regular directory located in the front file system, specially initialized by the cfsadmin(1m) command. Once the cache is set up, it is activated by mounting the cache in front of the source file system, such as:

# mount -F cachefs -o backfstype=nfs,cachedir=/cache/project1 server:/proj1 /project

which mounts server:/proj1 on the local /project directory and then sets up a cache in front of it.
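
The cache directory itself must exist before the mount above; it is created and initialized with cfsadmin. A minimal sketch, using the same illustrative path:

# cfsadmin -c /cache/project1
# cfsadmin -l /cache/project1

The -c option creates the cache directory with default resource parameters, and -l lists the file systems currently stored in that cache.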

Once the file system is mounted, CacheFS uses Solaris's mount table to intercept access to the back file system (NFS in this example). CacheFS compares each access to the state of the local cache and serves data from the cache when possible. If not, it issues a request to the back file system, provides the data to the original requester, and copies the data into its local cache for future reference. Writes to the CacheFS are handled by writing to both the front and back file systems. (Alternatively, the administrator can request that the data be flushed from the cache upon write.) CacheFS maintains consistency by comparing the back file's modification date to the cache. If the back file system has been modified -- by a user on the server or via another NFS mount, for example -- the data is flushed from the cache and a fresh copy is obtained from the back file system.
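
The write policy is selected at mount time. In the cachefs_mount(1m) options, non-shared mode writes to both the front and back file systems, while write-around sends writes to the back file system only and invalidates the cached copy. A sketch of the latter, reusing the illustrative paths from above:

# mount -F cachefs -o backfstype=nfs,cachedir=/cache/project1,write-around \
      server:/proj1 /project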

CacheFS warrants a couple of special considerations. The first is its behavior on writes. When the client writes to the file system, it must write to both the front and back file systems to ensure consistency. Of course, this requires more work -- resulting in lower performance -- than writing to the back (NFS) file system alone. And when someone else modifies a file in the back file system, all of the client's associated cache blocks are invalidated. As a result, CacheFS is most useful for read-mostly file systems. Fortunately, this matches the profile of the root and /usr partitions: there is very little modified data in the root partition, and /usr can be mounted read-only. On the other hand, mounting the e-mail spool partition (/var/mail) through CacheFS isn't very effective, since the contents of mail spools change frequently.
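
A cached, read-only /usr illustrates the read-mostly case well. A hypothetical mount (the server path and cache directory are illustrative):

# mount -F cachefs -o backfstype=nfs,cachedir=/cache/usr,ro server:/export/usr /usr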

The other thing to remember when using CacheFS is that the local disk cache is only used when buffered data is actually referenced from the file system. As with other I/O subsystems, clients with a lot of memory sometimes appear to use the CacheFS less efficiently than smaller clients. This is because clients with more memory are able to cache more of the file system in memory, resulting in lower reference rates to the CacheFS. The lower reference rate is a good sign, but it is usually accompanied by a lower cache hit rate (reported by cachefsstat(1m)).
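
The hit rate for a given mount point can be checked with cachefsstat(1m); for example:

# cachefsstat /project
# cachefsstat -z /project

The first form reports cache hit statistics for the mount point; the -z form zeroes the counters so a fresh measurement interval can begin.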

Configuration guidelines
The size of the front file system's cache obviously depends on how much of the back file system needs to be cached in order to avoid network traffic. In practice, the size of the cache can be surprisingly small. When using Solstice AutoClient, only about 30 megabytes of disk space is required in the cache to completely avoid references to the root and /usr file systems during an AutoClient boot. In contrast, the complete Solaris 2.4 distribution occupies 320 or so megabytes.
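
If the cache must be prevented from consuming the entire front file system, cfsadmin accepts resource parameters at creation time. A sketch using the maxblocks and maxfiles parameters (values illustrative; both are expressed as percentages of the front file system's resources):

# cfsadmin -c -o maxblocks=60,maxfiles=60 /cache/project1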

The small cache requirement makes it practical to use the smallest available disk drives -- 100 megabytes is plenty, unless the client needs more than 70 megabytes of local swap space. Now that the smallest disk available in new systems is typically more than 500 megabytes, essentially any client is a realistic candidate. The /usr file system, because it is not customized, can be shared among all of the clients, saving both disk storage space and disk accesses during cache population. Solaris 2 root partitions typically require about 25 megabytes per client, and each client's /opt partition varies considerably in size.

Many users suspect AutoClient requires large, high-performance servers to be practical, but this is far from true. During development, the AutoClient team used a SPARCstation IPX to serve 24 SPARCstation 2 and SPARCstation 10 clients, with satisfactory results.

There are a number of reasons for the surprisingly light server requirements. First, the nature of the CacheFS is such that the most work-intensive parts of the "diskless" boot process (actually transferring the binaries) are absorbed by the local disk cache, leaving mostly simple attribute operations. Next, the diskless boot process is not especially intensive when compared to the capabilities of modern NFS servers. A completely diskless boot (that is, a boot not using CacheFS or AutoClient) generates a load of approximately 40 NFSops per second. Even relatively old platforms such as the SPARCstation 10 Model 41 are capable of delivering more than 660 NFSops per second (even with Solaris 2.2!), permitting simultaneous boot of 16 clients without saturating the server. Larger or newer platforms such as the SPARCcenter 2000E or Ultra-1/170 can handle a few thousand NFSops per second -- enough to simultaneously boot an entire wing of a large building after a power failure. AutoClient boots require even less effort, since once the cache is populated after the first AutoClient boot, the server has relatively little work to do, even during a boot. Unless your installation suffers from extremely frequent power failures, configuring a server as an AutoClient boot system is trivial -- all that's required is sufficient disk space and plenty of network bandwidth. The typical considerations of disk utilization, processor power, and memory size simply aren't at issue.

The primary problem with AutoClient systems is that the boot server becomes a single point of failure. Even though the client systems eventually copy the operating system onto their local disk caches, the CacheFS subsystem must refer to the server occasionally to ensure that the caches remain consistent. Should the server or network fail, the client will eventually come to a halt as attribute operations cease to be serviced. The solution to this problem is a highly available NFS server, which permits essentially uninterrupted operation.

Summary
The CacheFS technology provides a transparent mechanism for accelerating NFS data access while simultaneously reducing the NFS-related load on both networks and NFS servers. In addition, CacheFS makes further administrative efficiencies possible by serving as the basis for the Solstice AutoClient feature, which combines the centralized administrative qualities of the diskless client model with the performance of locally administered disk-based systems.




About the author
Brian Wong (brian.wong@sunworld.com) is a staff engineer in the SMCC Server Products Group.

