What WebNFS means to you
It's fast, easy on servers, and available to developers. Does WebNFS matter?
After more than a decade of use on some 10 million computers around the world, the Network File System (NFS) is getting a new lease on life as WebNFS. Engineers at Sun have been working on this implementation, which is based on the NFS 3.0 standard released in 1995. The new technology may mean a whole new structure for the core protocols used by the general Internet populace. WebNFS reduces some of the inefficiencies in data transfer and scales well for the Internet. We look at this new technology, how it is implemented, and how it can help you. (1,800 words)
NFS escaped from Sun Microsystems labs in the early 1980s, and soon was accepted as the method for sharing files on Local Area Networks (LANs). This was due to Sun's sway as the leading engineering workstation manufacturer, and the slick way Sun used NFS to make workstations function sans internal disk. In the Unix world, NFS was a towering hit.
(Perhaps this explains why Unix users are so blasé about the Network Computer. Threaten to take away PC users' hard disks and they act as if you want to slice off a vital anatomical piece. Thanks to NFS, many Unix users have lived potent, happy lives without these pieces of anatomy.)
Sun's creation of a shared filesystem over a network became the core of its Open Network Computing (ONC) environment. Competing vendors decided it was easier to use a common standard than to create their own.
NFS was soon christened by the TCP/IP and Internet world as Request for Comments (RFC) 1094. Everyone had approved DNA from which to make their own NFS, and some purchased source code from Sun.
In the late 1980s, Sun's operation near Boston made a brilliant move in creating PC-NFS, which allows a DOS computer to access any NFS filesystem. Many system administrators took advantage of PC-NFS by putting cheap DOS and Windows applications on local PC hard disks, and placing user file storage on the large disk farms found on network servers. Users could share their files with Unix and DOS users alike, and forget about backups.
Soon other vendors, including FTP Software and Beame & Whiteside, joined the market for PC-based NFS and TCP/IP applications, and this in turn spurred the growth of Unix-based NFS servers as well. NFS went through one revision early on, and version 2.0 has lasted nearly 10 years. Designed for the LAN, NFS worked well using the basic Internet protocol known as the User Datagram Protocol (UDP) and was executed at the operating system level as Remote Procedure Calls (RPCs).
Although NFS over UDP works well for LANs, it is limited in some respects on Wide Area Networks (WANs). Performance starts falling through the floor because of the lack of properties available with Transmission Control Protocol (TCP), such as acknowledgments of packets received and dynamic packet size control. In the 1990s, there were many suggestions and a few implementations of NFS using TCP instead of UDP for improving service over WANs. Silicon Graphics released an NFS server that worked with UDP as well as TCP before Sun got into the game.
Not your father's NFS
NFS 2.0 suffers many shortcomings today. For example, Unix-based servers are moving to 64-bit implementations, and the 8-kilobyte data packet size bottlenecks transfers. Sun, Digital, IBM, Hewlett-Packard, and Data General wrestled with these and other problems, and together they unveiled NFS 3.0 in 1995 as RFC 1813. Implementations are scarce even today.
As SunWorld Online readers know well, the World Wide Web has become the people's choice for information distribution and sharing across the Internet. The bright glow of the Web outshined similar technologies. Unfortunately, the protocol for the Web -- HTTP -- leaves much to be desired in the performance department.
HTTP is a one-way, sessionless protocol that transfers multiple data formats inefficiently. Entire pages and all their contents must be transferred at the same time to the requesting browser. On the other hand, NFS works with only portions of files at a time, usually only the sections that are in use. It is possible to update sections of a file with NFS, a task virtually impossible with HTTP.
WebNFS: Everything old is new again
The progenitor has decided a new addition to the family
is required. Partway between the NFS concept and the way the Web
usually functions, WebNFS is the proposed new direction for NFS.
The NFS Uniform Resource Locator (URL) follows a form similar to HTTP URLs, as shown in the examples below:
nfs://server:port/path
nfs://mymachine.javaworld.com:2049/home/rawn/webnfs.txt
nfs://mymachine.javaworld.com/pub/edit.doc
nfs replaces the http or ftp scheme and needs to be implemented directly in the browser. The default port number is 2049, the NFS port for TCP connections. The directory structure shown above is actually a relative path from a base that the NFS server understands. Sun has made agreements with Spyglass and other vendors to add WebNFS to their browsers, and with Auspex to include WebNFS in its servers. Sun's latest Netra server includes WebNFS support, and SunSoft says it will include WebNFS in Solaris in late 1996.
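To make the URL structure concrete, here is a minimal sketch of how a browser might split an nfs URL into host, port, and path, assuming Java's java.net.URI class is acceptable for the parsing; the class and constant names are ours, not part of any WebNFS library.

import java.net.URI;

// A rough sketch, not a WebNFS API: split an nfs URL into host, port, and path.
public class NfsUrlExample {
    static final int DEFAULT_NFS_PORT = 2049;  // the NFS port for TCP connections

    public static void main(String[] args) {
        URI url = URI.create("nfs://mymachine.javaworld.com/pub/edit.doc");
        String host = url.getHost();
        int port = (url.getPort() == -1) ? DEFAULT_NFS_PORT : url.getPort();
        String path = url.getPath();  // a relative path from a base the NFS server understands
        System.out.println("host=" + host + " port=" + port + " path=" + path);
    }
}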
Unlike previous versions of NFS, which work by mounting an entire filesystem at a time, WebNFS can communicate with individual files on the server. This feature is known as Multi-Component Lookup (MCL) and allows the client to look up a document based on a full given path to a file rather than having to look up individual components of that path until deriving the actual file location. For example, to look up a file like /bof/bar/snaf.txt in NFS, you have to look up the individual components (bof and bar) and find their offsets in sequential order before you can find snaf.txt. With WebNFS you simply pass the entire path to the server itself and have the server return the file handle directly; this improves performance by saving several steps of data transfers. You can still do the old style operation for NFS 2.0 clients supported by the server.
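The difference is easier to see in a sketch. The code below contrasts the two lookup styles; NfsClient, lookup(), lookupPath(), and FileHandle are invented names for illustration only, not an actual NFS or WebNFS API.

// Hypothetical client interface, for illustration only.
interface NfsClient {
    FileHandle rootHandle();                             // public filehandle on the server
    FileHandle lookup(FileHandle dir, String name);      // NFS 2.0: one LOOKUP round trip per component
    FileHandle lookupPath(FileHandle dir, String path);  // WebNFS: one round trip for the whole path (MCL)
}

class FileHandle { /* opaque handle returned by the server */ }

class LookupExample {
    // Old style: walk /bof/bar/snaf.txt one component at a time.
    static FileHandle oldStyle(NfsClient nfs) {
        FileHandle h = nfs.rootHandle();
        h = nfs.lookup(h, "bof");          // round trip 1
        h = nfs.lookup(h, "bar");          // round trip 2
        return nfs.lookup(h, "snaf.txt");  // round trip 3
    }

    // WebNFS style: hand the server the whole path and get the file handle back directly.
    static FileHandle webNfsStyle(NfsClient nfs) {
        return nfs.lookupPath(nfs.rootHandle(), "/bof/bar/snaf.txt");  // one round trip
    }
}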
NFS 3.0 and WebNFS also support resolving files that are symbolically linked to a file in another location. Symbolically linked files are used often in Unix and until recently had no corresponding concept in DOS or Windows. To allow for other systems that do not support this feature, it was left out of the specification itself.
Performance: Is WebNFS any better?
To understand WebNFS's viability, we need to compare WebNFS, HTTP, and FTP on performance, security, scalability, and additional features.
WebNFS follows the improvements in NFS 3.0: data transfers larger than the 8-kilobyte limit imposed in NFS 2.0; a 64-bit data word size for files and filesystems larger than 4 gigabytes; and unstable data writes, along with a COMMIT command, which improve performance and allow some memory caching.
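As a rough sketch of that last point, an NFS 3.0 client can stream a series of unstable writes, which the server may buffer in memory, and then issue a single COMMIT to force the data onto stable storage. The interface below is invented for illustration and is not the actual ONC RPC or WebNFS API.

// Hypothetical NFS 3.0 client interface, for illustration only.
interface Nfs3Client {
    // stable = false requests an "unstable" write the server may buffer in memory
    void write(FileHandle file, long offset, byte[] data, boolean stable);
    // COMMIT forces previously unstable writes out to disk
    void commit(FileHandle file, long offset, long count);
}

class FileHandle { /* opaque handle, as in the lookup sketch */ }

class WriteExample {
    static void saveBuffered(Nfs3Client nfs, FileHandle file, byte[][] chunks) {
        long offset = 0;
        for (byte[] chunk : chunks) {
            nfs.write(file, offset, chunk, false);  // unstable write: fast, may be cached
            offset += chunk.length;
        }
        nfs.commit(file, 0, offset);  // one COMMIT flushes everything to stable storage
    }
}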
In contrast to FTP, the WebNFS protocol does not incur the overhead of establishing a connection, retrieving a file or performing some other FTP operation, and closing the connection. Each time you click on a hyperlink in a Web page that uses FTP, it performs these steps. This means multiple TCP connections, since the FTP login and command portion of the protocol is kept on a different channel than the actual data transfer portion. The WebNFS client opens only one connection to the server for all operations and caches server information until the user or the browser disconnects, either directly or by idling out. FTP also sends all the data for a file in one long transfer. If the FTP connection breaks, you have to start from the beginning again, which can be time-consuming for large files. NFS maintains its location in the file for any transfer and can read ahead in the file, attempting to predict what the user may need from it. Finally, each FTP session starts a separate server process rather than drawing on a pool of server processes, as HTTP and NFS do.
Compared to a Web server and HTTP, NFS performs better in some respects. HTTP has the same start-up overhead described for FTP. Although some Web servers offer features such as a TCP session keep-alive monitor, this has only limited benefit, because the user or browser has to specify how much data it will be sending in the future to determine how long the TCP connection will remain open. NFS also goes HTTP one better by working with byte offsets into files rather than entire files at a time. If you have downloaded large files to view in your Web browser, you know how much time it takes to download the information, render it, and then display the rendered layout. NFS works with the equivalent of paragraphs of information at a time, making this transfer process more efficient without incurring continual large data transfers.
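A small sketch of that idea: instead of fetching the whole document, the client asks for just a slice of bytes at a given offset. The read() call here is an invented stand-in for the NFS READ operation, not a real API.

// Hypothetical read-at-offset interface, for illustration only.
interface NfsReadClient {
    byte[] read(FileHandle file, long offset, int count);  // READ: count bytes starting at offset
}

class FileHandle { /* opaque handle, as in the earlier sketches */ }

class RangeReadExample {
    // Fetch only the portion of the document currently in view,
    // e.g. one 8-kilobyte "paragraph" of data at a time.
    static byte[] fetchVisiblePortion(NfsReadClient nfs, FileHandle doc, long viewOffset) {
        return nfs.read(doc, viewOffset, 8 * 1024);
    }
}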
According to Sun's measurements, using the WebNFS protocol instead of HTTP also makes sense for supporting large numbers of users at a time: WebNFS scales roughly three to one over HTTP, and access times are cut in half.
Security
HTTP has no security, which is why Netscape's Secure Sockets Layer and
the Secure HTTP protocols were created. NFS has always had user-level
security. Now it also allows for the Kerberos security system,
and promises to support SSL as well as the Simple Key-Management for Internet Protocols
(SKIP), a new method for creating Virtual Private Networks over the
Internet. You can strictly control who has access to the information on
your server.
In most cases you will not need this security. Even with most Web servers, you can secure a document either through the built-in basic password authentication system or through a CGI script combined with SSL. The workgroup and intranet environment may be a very different story in the future, because the content there tends to be more sensitive within the company.
Will we use WebNFS tomorrow?
Not immediately. Sun plans to include WebNFS as a standard component of future Java libraries, and made it a part of ONC+ 2.0 in July. However, there are competing proposals before the international World Wide Web Consortium (W3C) for what is termed HTTP-ng (HTTP: the next generation). Although the W3C is working towards global harmony on the Web, it seems that the vendors have more power in the real world, and their products may become de facto standards, as Netscape Navigator has. A new protocol is knocking on the door to the Web, and it is bringing its big corporate attitude with it.
About the author
Rawn Shah is vice president of RTD Systems & Networking Inc., a Tucson, Arizona-based network consultancy and integrator.
Reach Rawn at rawn.shah@sunworld.com.