Originally published in the February 1995 issue of Advanced Systems.

Review

Are you being served?

We pit three NFS servers against each other and the LADDIS benchmark.

By David Burnette & Cedric Higgins

NFS servers. Does the name conjure up enormous droning cabinets with loads of spinning disks and a rat's nest of Ethernets? If it does, the image is only half right. The NS7000/200 from Auspex Systems we tested for this issue fits this description to a tee. On the flip side is the petite, though not quite as handsome, FAServer 1400 from Network Appliance Corp (see the sidebar Advanced Systems Test Strip: NFS servers). You probably couldn't choose two more incongruous machines to pit against one another. Yet in our tests, the smaller FAServer comfortably achieves more than a third the NFS performance of the Auspex behemoth, which, when you consider the wildly different architectures, is a little surprising. Do not judge a book by its cover (see the sidebar NFS servers: Toaster or Ginsu knife?).

We also requisitioned a collection of off-the-shelf components from Sun to see how a fairly standard general-purpose server competes against purpose-built marvels from companies dedicated to NFS performance. You may be surprised how Sun's offering fared. (Because this SPARCstation 20 visited our lab only briefly, we didn't test it as rigorously as the Auspex and Network Appliance machines, and therefore did not score it. See " 'Sweet spot' workstations," September 1994, for a review of a more lightly configured SPARCstation 20.)

No review of NFS servers would be complete without LADDIS numbers, and our testing with LADDIS yielded performance results surprisingly close to those published by the vendors, a rarity in this world of benchmark-specific compiler flags. Perhaps since LADDIS is so darn ornery, cooking the numbers is more difficult than with SPECint or SPECfp. (Ordinarily, the Advanced Systems Test Center's SPECint and SPECfp results are a little lower than the vendors'.)

Before we explain in depth what LADDIS is, we must first describe the NFS servers (and the 100 gigabytes' worth of disk capacity) that hummed in our lab for the duration of this review.

David
For a machine to be an NFS server it must run Unix, right? Wrong. Network Appliance took a 50-MHz i486 motherboard, a RAID-4 subsystem, and some network cards; it wrote an operating system to control the lot, and produced an impressive black box that even the most die-hard Unix bigot would call an NFS server (the company calls its server a filer, a bit of marketese we cannot abide).

Network Appliance uses a custom implementation of RAID-4 with a StorageWorks disk array from Digital Equipment Corp. Out of seven 2-gigabyte disks, six contain data and the seventh is dedicated to parity information for the data disks. If a block of data or even an entire disk goes bad, lost data can be reconstructed from parity information; if the parity disk goes bad, the parity information can be regenerated from the data disks once the parity drive is replaced. Files are striped in 4-kilobyte chunks across the six data disks. An additional StorageWorks array can be added, for a total of 13 data disks and one for parity.
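
To make the parity arithmetic concrete, here is a minimal sketch (ours, not Network Appliance's code) of how a RAID-4 array can rebuild a lost block: the parity block is the bytewise XOR of the corresponding blocks on the data disks, so any one missing block can be regenerated by XORing the survivors with the parity.

    # Minimal RAID-4 parity sketch (illustrative only, not the FAServer's code).
    # Parity is the bytewise XOR of the corresponding blocks on all data disks.

    def compute_parity(blocks):
        """XOR corresponding bytes of each block to form the parity block."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    def rebuild_block(surviving_blocks, parity_block):
        """Reconstruct a single lost block from the survivors plus parity."""
        return compute_parity(list(surviving_blocks) + [parity_block])

    # Six 4-kilobyte data blocks, one per data disk in the FAServer's array.
    stripe = [bytes([d]) * 4096 for d in range(6)]
    parity = compute_parity(stripe)

    lost = stripe.pop(2)                       # simulate losing the third disk
    assert rebuild_block(stripe, parity) == lost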

Network Appliance developed a special filesystem for its server called WAFL (Write Anywhere File Layout) that is designed to work well with NFS and the RAID-4 subsystem. As its name would suggest, WAFL has greater freedom to write data to disk than the Berkeley UFS filesystem, which is constrained to write into a fixed series of free blocks.

WAFL, on the other hand, needn't wait for the next block in the free list to appear beneath a disk head. Instead, it writes to the free block nearest a disk head. Increasing the speed of writes is important because in systems with large caches most reads tend to be satisfied out of the cache, whereas writes must go to disk (or "stable storage," as the NFS protocol puts it). The faster a disk subsystem's write operations, the greater its overall throughput will be, assuming reads most often come from cache. Hence the importance of WAFL to the FAServer's NFS performance.
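
The allocation-policy difference is easy to caricature in a few lines. The following is a hypothetical sketch of the idea only, not WAFL or UFS code: the UFS-style allocator consumes the next entry in its free list no matter where it lies, while the WAFL-style allocator grabs whichever free block is currently closest to the disk head.

    # Hypothetical sketch of the allocation policies described above;
    # real UFS and WAFL allocators are far more involved than this.

    def next_free_ufs(free_list):
        """UFS-style: take the next block on the fixed free list, wherever it is."""
        return free_list.pop(0)

    def next_free_wafl(free_blocks, head_position):
        """WAFL-style: take whichever free block is nearest the disk head now."""
        nearest = min(free_blocks, key=lambda blk: abs(blk - head_position))
        free_blocks.remove(nearest)
        return nearest

    free = [10, 57, 512, 513, 900]
    print(next_free_ufs(list(free)))           # 10, regardless of head position
    print(next_free_wafl(list(free), 511))     # 512, the block under the head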

The FAServer 1400 we tested contained 256 megabytes of system RAM, most of which is used as cache (the OS requires a measly 600 kilobytes or so -- remember when Unix was that small?). Our FAServer had three Ethernet ports; four is the maximum. To further boost write performance, Network Appliance did what both Auspex and Sun do: It added an NVRAM write cache. Since this cache is "nonvolatile," it satisfies the NFS requirement that writes go to stable storage. Network Appliance sticks 4 megabytes of NVRAM in its servers, which allows it to organize writes into stripes for transfer to disk. This, coupled with WAFL, creates a formidable network appliance dedicated to serving NFS requests.

Goliath
Auspex tackled the NFS performance problem with a combination of functionally optimized hardware and software. Gathered together in the main cabinet of the NS7000/200 are four SPARC processors, each dedicated to a specific task. A 55-MHz HyperSPARC host processor forms the core of what is in essence a SPARCstation 10 running a modified version of SunOS 4.1.3. Thirty-two megabytes of system RAM (up to 256 megabytes are supported), a SCSI-2 interface, and a couple of SBus slots, all on an enhanced VME card, round out the host. Though the company has added some custom commands and altered a few SunOS daemons, SPARC binaries run unchanged, and as a consequence the Auspex box can serve as an application server in addition to an NFS server. The host processor is what administrators interact with. Worker bees in Sun hives will feel at home tending Auspex queens.

On a second VME card are two more SPARC chips, which power the network and file processors; they handle network and filesystem traffic, respectively. NFS requests coming in from Ethernet or FDDI are directed to the appropriate file processor, bypassing the overlord host processor. A further 128 megabytes of RAM serve as cache for filesystem data (up to 256 megabytes are supported). Each such VME card supports up to six Ethernets or two FDDI nets (we tested the six-Ethernet variant). A third VME card contains the storage processor, a 68030 riding herd on six fast SCSI channels. One megabyte of NVRAM serves as an NFS write cache. Running on each of these specialized processors is Auspex's FMK (Functionally Multiprocessing Kernel). This mouthful coordinates the operation of, and traffic between, the network, file, and storage processors. Auspex systems currently support RAID-0 (striping) and RAID-1 (mirroring); look for higher RAID offerings in the future. Connecting all these cards into one big happy VME family is a 55 megabyte-per-second backplane; access is from the rear.

Behind the front door of the main cabinet is one column of seven slots for SCSI devices, each in its own sheet-metal sled. The sleds have handles for easy removal, and locking mechanisms for peace of mind. Additional enclosures with 14 SCSI slots each can be attached for more storage. The system we tested comprised three cabinets with a total of 28 1.75-gigabyte disk drives. We were awash in spindles. CD-ROM and 8mm tape drives were thrown in for good measure.

This concert of computing power makes the NS7000/200 the mainframe of the NFS servers we tested (the NS7000/500 is an even bigger monster). Whereas the FAServer huddled in a rackmount case next to its RAID subsystem, the Auspex's trio of thigh-high cabinets menaced the surrounding area with their obsidian aura.

Off the shelf
The Sun system we tested was a SPARCstation 20 hooked up to a big box of 18 1-gigabyte disks. The box in our case was a SPARC Storage Array connected to the CPU by a 25 megabyte-per-second fiber-optic umbilical. Sun's SOC (serial optical cable) fiber-channel interface is a pleasant alternative to bulky SCSI trunk lines. The storage array's cabinet is similar to a SPARCserver 1000 housing, making Sun's offering a diminutive competitor to the Cerberus-like Auspex.

Stuffed into the system case were two 60-MHz SuperSPARC+ CPUs, two quad-Ethernet SBus cards (making a total of nine net connections), 256 megabytes of system RAM, a 1-gigabyte OS disk, and 4 megabytes of NVRAM in the form of two 2-megabyte NVSIMMs. Coordinating this densely packed assemblage was Solaris 2.4. (Our initial impression of Sun's latest attempt at an OS is a thumbs-up.)

Using Sun's Online:DiskSuite, we striped three partitions across the 18 disks in the SPARC Storage Array. ODS also supports mirroring for reliability. As the Sun system was a late entrant into our NFS server derby, we did not have as much time as we would have liked to use ODS. Consequently, we did not have an opportunity to test mirrored partitions within the array (see the sidebar A Lego approach to building an NFS server).

Note that all three of these NFS servers employ large amounts of filesystem cache (at least 128 megabytes), some amount of NVRAM write cache (at least 1 megabyte), a good deal of disk storage (at least 14 gigabytes), and several Ethernets (at least three). With the architectural groundwork laid, let's move on to some performance metrics.

Lies, damn lies, and benchmarks
Developed by SPEC, the consortium that brought forth SPECint and SPECfp, LADDIS is an NFS server benchmark. (The name comes from the first letters of the companies involved in its creation: Legato Systems, Auspex Systems, Data General, Digital Equipment, Interphase, and Sun Microsystems.) Running LADDIS is like going to the DMV. You spend hours waiting around and all you receive in the end is a piece of paper, but instead of a hideous picture of yourself, LADDIS presents you with a graph.

Before LADDIS, there was nhfsstone, which suffers from deficiencies that SPEC's offering overcomes. Nhfsstone relies on NFS client kernel implementations; LADDIS generates all its NFS requests internally and does not rely on the client's implementation of NFS. Nhfsstone uses one NFS client to simulate a load on the server; official run rules for LADDIS mandate that two clients be used per network. Last but not least, nhfsstone lacks a standardized approach to running the benchmark and reporting its results; SPEC's run rules ensure that vendors run LADDIS and report results in a more-or-less consistent fashion. (In other words, don't trust anyone quoting nhfsstone numbers.)

For our LADDIS tests (a SPEC representative said Advanced Systems is the first nonvendor to publish its own LADDIS numbers), we herded together 12 client workstations and jacked them into six subnets -- SPEC mandates two per net. The Auspex box had six Ethernet ports, and though the Sun box bristled with nine, we could test but six. The runt of the litter, Network Appliance's FAServer, had three network ports.

The operation of LADDIS is fairly simple. One system on the network is designated the prime client and controls the deployment of load-generating processes to the clients. The prime client can be a load generator, too. These load-generating processes fire off a series of NFS requests to the server under test. About half of LADDIS's NFS requests are file name and attribute requests, a third are I/O operations (reads and writes), and the remaining sixth is spread among other operations (removes, creates, fiddling with directories). This mix comes from studies conducted at Sun and attempts to simulate a software-development environment. (For a more detailed discussion of LADDIS send an e-mail message to laddis@advanced.com.)
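
As a rough illustration of that mix, a load generator can draw each request from a weighted distribution. The weights below are the approximate proportions described above, not SPEC's official operation table.

    # Rough sketch of a weighted NFS operation mix, using the approximate
    # proportions from the text (not the official LADDIS weight table).
    import random

    OP_MIX = [
        ("name and attribute ops (lookup, getattr)", 0.50),
        ("I/O ops (read, write)",                    0.33),
        ("other ops (create, remove, readdir)",      0.17),
    ]

    def next_operation(rng=random):
        """Pick the next NFS operation class according to the weighted mix."""
        r = rng.random()
        cumulative = 0.0
        for op, weight in OP_MIX:
            cumulative += weight
            if r < cumulative:
                return op
        return OP_MIX[-1][0]     # guard against floating-point round-off

    sample = [next_operation() for _ in range(10000)]
    print({op: sample.count(op) for op, _ in OP_MIX})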

A control file on the prime client instructs it to slowly increase the NFS load on the server under test. The performance of the server under these increasing loads determines the LADDIS graph, which plots NFS throughput versus NFS response time (see the charts LADDIS results for NFS servers and CDSI LADDIS results). You'll notice that all the servers truck along smoothly below the 20-millisecond level before shooting skyward at some point. The horizontal line at 50 milliseconds is the SPEC-mandated cut-off point. The so-called single figure of merit is the highest throughput a server can attain below this line. In all our tests, the Auspex NS7000/200 served as the prime client. Before you cry foul, remember that its host processor is functionally distinct from its underlying NFS subsystems. Even at high NFS loads, the host processor is idle, able to do work.
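
Put another way, given the measured points that make up the curve, the single figure of merit is simply the highest throughput whose average response time stays at or under the 50-millisecond cut-off. A small sketch, using made-up sample points rather than our measured data:

    # Sketch of the "single figure of merit": the highest NFS throughput
    # achieved at or below the SPEC-mandated 50-millisecond response time.
    # The sample curve below is illustrative, not one of our measured results.

    CUTOFF_MS = 50.0

    def figure_of_merit(curve):
        """curve is a list of (ops_per_second, avg_response_time_ms) points."""
        qualifying = [ops for ops, latency in curve if latency <= CUTOFF_MS]
        return max(qualifying) if qualifying else 0

    sample_curve = [(100, 8.1), (300, 9.4), (600, 12.7), (900, 19.8),
                    (1200, 34.2), (1400, 49.0), (1500, 88.5)]
    print(figure_of_merit(sample_curve))       # 1400 -- the last point under 50 ms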

It is good for a server to achieve as level and as low a curve as possible, as well as to extend far to the right. This means that not only is the server satisfying client NFS requests within a short period of time -- on the order of 10 or 20 milliseconds -- but that it can meet this criterion under high NFS loads to boot. All three are good performers in the first regard. In the second, the Network Appliance FAServer 1400 trails along at a little under half the throughput of the Auspex. The SPARCstation 20 gallops up near the Auspex but cannot close the distance before entering the stratosphere. Outpacing the others, the Auspex stakes a claim to the far-right territory.

The single figures of merit we obtained for these servers were 618, 1204, and 1481 NFS operations per second for Network Appliance, Sun, and Auspex, respectively. (For more benchmark results, send an e-mail message to nfs95@advanced.com.)

The LADDIS performance results are a bit surprising given the architectures of the three systems. The Auspex and Sun machines boast formidable levels of engineering, but the FAServer is essentially a PC running an NFS kernel. That it garners nearly half the performance of its mightier rivals is a pleasant surprise. It's scary to think what Network Appliance could do with a more powerful processor and additional Ethernet cards (we tested six nets on the Auspex and Sun, but a mere three on the FAServer).

When a disk goes bad
Having LADDIS around was a convenient way to generate heavy NFS loads on the servers and then do bad things to their disk subsystems, like simulating a disk failure (kids, don't try this at home).

The Auspex system supports RAID-0 and 1, though RAID-0 can hardly be considered "redundant" since it just stripes partitions across disk drives. RAID-1 offers true redundancy by mirroring partitions on dual sets of drives. For our fail-over tests, we set up three mirrored partitions of about 2 gigabytes each and ran LADDIS against them. When LADDIS was generating NFS loads of about 600 operations per second, we pulled out a disk from one of the racks in an expansion cabinet. An Auspex utility called ax_perfmon displays six screens of system data, several of which show disk activity. This utility indicated that after we pulled the disk, all disk operations ceased. Console messages informed us that various errors were detected on the SCSI bus. About a minute later, however, the storage processor brought the disks back on line, minus a stripe -- the one from which we pulled the disk. The LADDIS run continued and did not fail (a testament in its own right).

We mirrored three partitions, each composed of two identical striped partitions spanning several disks. So, even with one striped partition out of commission, the system continued to satisfy NFS requests after a brief pause. Our measurements indicate that NFS response time more than doubled during this phase of the failure (as a result of the "failure," the write cache was disabled). If we had not mirrored the partitions and were relying on striped partitions alone (RAID-0), we would have been royally hosed: If one disk out of a striped set goes bad, the entire stripe and all its data are lost. That frailty is why it's stupid to rely on RAID-0 by itself, though folks who value performance and capacity over reliability may beg to differ. It boils down to how valuable your data is and how inconvenient downtime is to you.

With the damage done, we decided to replace the failed drive by sliding it back into its slot and telling the Auspex system, via its custom command-line utilities, that the drive was back. The system automatically began copying data back onto the new partition from the remaining stripe. During this phase, NFS performance declined 24 percent (the write cache was re-enabled). We give Auspex high marks for handling a disk blowout gracefully and with so little administrator overhead.

With RAID-4 under its belt, the Network Appliance FAServer does not have to resort to mirroring to survive a disk failure. As with the Auspex, we started a LADDIS run against the FAServer and then pulled one of its data disks after it was jogging along in the hundreds of operations per second. Our tests showed that NFS performance declined by 4 percent.

We replaced the failed drive by sliding it back into its slot in the RAID subsystem and rebooting the FAServer. (There is also a RAID add command for use in the hot-swap models.) A minute later (yes, it reboots that fast), the system began to reconstruct the lost data from parity information. NFS performance in this mode was also degraded, declining by 13 percent. The reconstruction process lasted about 280 minutes under an NFS load. All the while, our LADDIS run continued.

As we mentioned previously, lack of time prevented us from testing mirrored partitions on the Sun system. We are itching to fiddle with Sun's Online:DiskSuite and when we do, we'll let you know how it fails.

Documentation and support
Our interactions with the Auspex and Network Appliance (and Sun) support staffs were all good. The engineers reeked of competence, and we can only assume they help ordinary customers with equal aplomb.

An Auspex server comes with 90 days of free support, available 24 hours a day, seven days a week.

Network Appliance includes a one-year warranty on parts and labor with next-business-day on-site support. Telephone support is available through a toll-free number weekdays from 8 am to 6 pm PST. Extended service and support programs are available (see the sidebar NFS servers in the real world).

The vendors' nonhuman help is good, too. Auspex documentation is a three-volume set of three-ring binders; Network Appliance's is a small spiral-bound booklet. Each set is well written and came to our aid in times of minor crisis. Despite its size, the Network Appliance manual is completely adequate: There simply isn't much you have to do to a FAServer once it's configured (which takes less than five minutes). The Auspex system, on the other hand, has many knobs and levers to manipulate (setting up mirrored partitions, performance monitoring, fiddling with striped filesystems), and its manuals explain them well. The Sun system, being a Sun system, was familiar to us and gave us no grief.

Toaster or titan?
There is no easy answer to deciding which NFS server is best. Cobbling one together from off-the-shelf components offers the appeal of simplicity and familiarity. However, when the SPARCstation 20 we tested was under heavy NFS loads, its response times within an xterm session were abysmal: It required about 15 seconds to respond to a date command. Want to run FrameMaker? Good luck. If you dedicate such a system to handling heavy NFS loads, forget about also using it as an application server. Additional NFS processing power can be achieved with more disks and CPUs. As tested, our SPARCstation 20 with an 18-gigabyte RAID costs $79,185.

Even at the far end of the NFS spectrum, the Auspex NS7000/200's SPARCstation 10 host processor is available to satisfy application requests. In some environments, this could be a dandy asset indeed. The Auspex system scales well, also. In our tests, adding a bank of 10 more disks increased the system's maximum NFS performance by about 400 operations per second. This wonderful scalability and fault-resilience comes at a price, however: The NS7000/200 we tested lists for $188,120.

Network Appliance's answer to the scalability question is to add more servers. Company reps spoke of customers who chain several FAServers together on an FDDI ring and connect them via an FDDI switch to a bank of Ethernets. Need more NFS storage? Add another FAServer to the FDDI ring. Network Appliance boxes are like mushrooms -- you put them in a dark room where they live happily, visited occasionally by sysadmin gardeners. Some folks may balk at the non-Unix OS, but we found the FAServer's administration simple and its ease of use extraordinary. It is well named as an NFS appliance, with an appealing list price: $50,635.

The above pricing information is list, and prices are flexible. Other considerations are important, too (see the sidebar Five cases where NFS doesn't fly):

Data availability. Mirroring is easy on both the Auspex and Sun systems, as is recovery, but it halves your available disk space. The tariff for Network Appliance's RAID-4 protection is at most a seventh of total disk space.

Expandability. Centralizing NFS storage in one large Auspex or Sun system offers administrative advantages at the expense of system obesity. Network Appliance's approach requires the installation of distinct functional units to increase NFS capacity. The downside is that the administrative burden increases as the FAServers breed islands of data.

Performance. Consider the number of users you want each NFS server to support. If you have hundreds of users sharing the same data, the monolithic-server approach may be for you. If sprightly FAServers can satisfy your users, their ease of use and administration may be a more welcome addition to an admin staff's workload.

About the author
David Burnette (david.burnette@advanced.com) is a technical editor at Advanced Systems. Cedric Higgins (cedric.higgins@advanced.com) is manager of the magazine's ASTC. For a more detailed discussion of LADDIS, send an e-mail message to laddis@advanced.com. To learn more about the ASTC's LADDIS results, send an e-mail message to nfs95@advanced.com.



Advanced Systems Test Strip: NFS servers

FAServer 1400
Network Appliance Corp.
295 N. Bernardo Ave.
Mountain View, CA 94043
415-428-5100
415-428-5151 fax
info@netapp.com

Pricing $51,635.
Summary Elegantly simple architecture, easy to administer, and deceptively abundant throughput, all at a nice price. It is the turnkey solution to the NFS server problem.

Features (25%)
Documentation (10%)
Administration (20%)
Performance (30%)
Compatibility (15%)

Overall rating 8.8

NS7000/200
Auspex Systems
5200 Great America Pkwy.
Santa Clara, CA 95054
408-986-2000
408-986-2020 fax
http://www.auspex.com

Pricing $188,120
Summary As the mainframe of NFS servers, the NS7000/200 offers throughput galore, a familiar SunOS environment, and good expandability, but all at a high price compared to generic servers.

Features (25%)
Documentation (10%)
Administration (20%)
Performance (30%)
Compatibility (15%)

Overall rating 8.6


How test strips work: Categories are judged compared to other products in their class. We judge different products on different categories as needed. Features evaluates capacity, expandability, reliability, and availability features. Documentation looks at the quality and completeness of paper and on-line documentation. Administration gives credit for tools that aid system administration. Performance summarizes tests of various comparative performance metrics. Compatibility includes compliance with relevant standards and ease of porting from other Unix systems. (Other products reviewed in this issue are rated on a similar system, with categories adjusted to the nature and use of each product class.) Weightings are based on reader surveys and expert knowledge. Total of extensions is divided by 50 and truncated to one decimal place to yield an overall rating on a scale of one to ten. Adjust the weightings to customize the Test Strip to your own needs.
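
The arithmetic behind the overall rating looks like the sketch below. The per-category scores here are hypothetical, and the 1-to-5 scale is our assumption, implied by the divide-by-50 step when the percentage weights sum to 100.

    # Sketch of the Test Strip scoring arithmetic described above.
    # Per-category scores are hypothetical; the 1-to-5 scale is an assumption.

    WEIGHTS = {                 # percentage weights from the Test Strip
        "Features":       25,
        "Documentation":  10,
        "Administration": 20,
        "Performance":    30,
        "Compatibility":  15,
    }

    def overall_rating(scores):
        """Weighted sum of category scores, divided by 50, truncated to one decimal."""
        total = sum(WEIGHTS[category] * scores[category] for category in WEIGHTS)
        return (total * 10 // 50) / 10         # truncate, not round, to one decimal

    hypothetical = {"Features": 4, "Documentation": 5, "Administration": 5,
                    "Performance": 4, "Compatibility": 4}
    print(overall_rating(hypothetical))        # 8.6 on the one-to-ten scale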


NFS servers: Toaster or Ginsu knife?

NFS servers fall into two broad classes: the appliance, which does nothing but handle NFS requests, and the general-purpose server (the Ginsu knife), which can provide print services, PC protocols, and even a midsized database in conjunction with NFS. Your best model will depend upon how well you can separate services, how much administrative overhead each approach introduces, and how long you expect to keep the hardware. For the purposes of this discussion, an appliance refers to a dedicated NFS server such as an Auspex or a Network Appliance FAServer. While there are other soft issues, such as your ability to forge a relationship with the vendor, here are some questions to help formulate an opinion in the toaster versus Ginsu knife debate:


A Lego approach to building an NFS server

We enter the micro-era of the do-it-yourself NFS server. As we concluded our tests on the Auspex, Network Appliance, and Sun NFS servers, we received a call from nearby Central Design Systems, a value-added reseller. They had heard of our review and boasted of a cheap, fast, and safe NFS server assembled from off-the-shelf components. With some skepticism, a little curiosity, and our LADDIS tape in hand, we drove 45 minutes south into the heart of Silicon Valley.

CDSI specializes in products geared for system administrators (look for a review of the company's LicenseTrack software in a future issue). The company's Auspex and Network Appliance killer consisted of a SPARCstation 20 with two 60-MHz CPUs, 256 megabytes of RAM, 4 megabytes of NVRAM, a fast-SCSI card, and a Baydel 10-gigabyte RAID-3 subsystem.

In our LADDIS testing back at the ASTC, we found I/O plays a huge part in an NFS server's overall performance. RAID-3 stripes data across multiple drives. However, rather than striping in 4-kilobyte chunks as in Network Appliance's RAID-4 implementation, RAID-3 stripes at the byte level -- the first byte of data is written on the first drive, the second is written on the next, and so on. A dedicated parity drive lends fault tolerance. RAID-3 is well suited for sequential reads and writes (as in CAD/CAM or graphics environments), and is not, we found, ideal for the random load LADDIS generates.
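
The distinction is easiest to see by mapping file offsets to drives. Here is a hypothetical sketch; the six-data-drive count matches the FAServer, while the Baydel subsystem's actual geometry is not specified, so treat the numbers as illustrative only.

    # Hypothetical sketch of the striping difference: RAID-3 interleaves at the
    # byte level, while RAID-4 (as in the FAServer) stripes in 4-kilobyte chunks.
    # Both keep parity on a separate, dedicated drive.

    DATA_DRIVES = 6

    def raid3_drive(byte_offset):
        """RAID-3: consecutive bytes land on consecutive data drives."""
        return byte_offset % DATA_DRIVES

    def raid4_drive(byte_offset, chunk=4096):
        """RAID-4: whole 4-kilobyte chunks land on each data drive in turn."""
        return (byte_offset // chunk) % DATA_DRIVES

    # The first 8 kilobytes of a file touch every drive under RAID-3 ...
    print({raid3_drive(b) for b in range(8192)})       # {0, 1, 2, 3, 4, 5}
    # ... but only two drives under RAID-4.
    print({raid4_drive(b) for b in range(8192)})       # {0, 1}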

CDSI claimed its offering would achieve half the NFS performance of Network Appliance's FAServer 1400. Our LADDIS tests proved CDSI true to its word. We then added another Baydel subsystem to the configuration, daisy-chaining them together on one fast-SCSI bus, and ran LADDIS again. The performance nearly doubled, showing that increasing the number of spindles is one way to boost a server's NFS performance.

We took a quick peek at the company's other NFS product and were struck with a sense of deja vu. CDSI bundles a SPARCstation 20 and an 18-gigabyte SPARC Storage Array -- the same collection of Sun gear we pitted against Auspex and Network Appliance at the ASTC.

Packages available from CDSI start with a 10-gigabyte Baydel OES/R RAID-3 subsystem at a list price of $65,495 or $94,495 for a 20-gigabyte Baydel RAID-3 subsystem. Both include a SPARCstation 20/612, with 256 megabytes of RAM and 4 megabytes of NVRAM. CDSI also sells the SPARCstation 20 with the SPARC Storage Array configuration we reviewed for $78,995. Contact CDSI at 408-383-9399 (408-383-9395 fax) or by e-mail at sales@cdsi.com. -- Cedric Higgins


NFS servers in the real world

SGS-Thomson Microelectronics designs and manufactures semiconductors. At our site, a 60-gigabyte Auspex NFS fileserver provides many functions for our more than 400 users, 250 workstations, and 18 general-purpose servers: file serving, license serving, backups, restores, routing, and NIS. We chose the Auspex because it allowed us to consolidate many vital functions in one box.

Our 20-gigabyte Network Appliance, on the other hand, serves large engineering files to CAD engineers. It serves files only, which suits the engineers and system administrators just fine because they needed high NFS performance at a low price.

Auspex offers exemplary support. Network Appliance, on the other hand, needs to work harder in this area. -- Gary Smith (smithg@charon.stm.com) is a senior Unix engineer responsible for systems and network management and a member of the Advanced Systems Reader Advisory Board.


Five cases where NFS doesn't fly

NFS is an excellent general-purpose mechanism for sharing files, but it isn't suitable for everything. The very things that make NFS popular and useful mean that there are a few things it doesn't do very well. For example, don't use NFS in environments that...

Are sensitive to network latency. This is true when using some types of wide-area networks and especially true for real-time work. There are no mechanisms in the NFS protocols that enable the client to request a specific response-time guarantee from the server, even when the network media (such as ATM) may be able to guarantee available bandwidth.

Require mandatory file locking. NFS's lock protocol does not provide mandatory file locking. (Only advisory locking is provided, because many host operating systems do not offer mandatory file locking.) This arises most commonly when trying to implement client/server databases without such support in the database itself. For example, don't run multiple instances of single-system PC databases such as dBASE IV via shared NFS partitions.

Require very fine-grained access control. The current NFS 2 protocol defines Unix-like access mode bits, which don't have sufficient resolution for some applications. The NFS 3 protocol removes this limitation.

Require shared access to physical devices, rather than to disk files. Because NFS is specifically designed to be independent of its host operating system, it does not support the use of arbitrary remote devices, which are inherently host-dependent.

Depend upon filesystem semantics that are peculiar to a specific operating system. NFS offers only the notion of a contiguous stream of bytes; complex multifile organizations are not implemented by the protocol. For example, most implementations of indexed sequential files depend upon specific host-system support to specify data layout and/or organization. Record Manager files offered by OpenVMS fall into this category. -- Brian Wong (brian.wong@advanced.com) is a staff engineer at SMCC and author of the book Configuration and Capacity Planning for Solaris Servers.



[Copyright 1995 Web Publishing Inc.]
