
Readers speak out:
June Letters to the Editor


Send letters to sweditors@sunworld.com


To Dave Kosiur:

I recently read the Encryption Primer you authored on the SunWorld site, http://www.sunworld.com/swol-03-1997/swol-03-encrypt.html. Under the section discussing key recovery ("Big Brother is watching you"), you included a reference to Hewlett-Packard's International Cryptography Framework (ICF) which is misleading.

Cryptographic hardware elements using the Framework architecture have no dependence on key recovery or key escrow to achieve exportability from or importability into supporting countries. Given the context, the statement that "Hewlett-Packard has proposed its International Cryptography Framework (ICF) as a solution to key escrow" incorrectly implies that the framework is a key escrow solution.

The Framework utilizes patented architectural elements to address the issue of exportability. The Framework architecture enforces a separation between policy and cryptographic method, rendering the crypto hardware inoperable until appropriate policy tokens are present.

This feature frees Framework enabled hardware from export/import controls, and consequently from key recovery or key escrow requirements.

The Framework architecture does have the flexibility to support key recovery (among other cryptography management policies) if key recovery is a business or local government requirement.

The important distinction is that key recovery is not an inherent component of the Framework architecture and Hewlett-Packard is not promoting the Framework as a key recovery solution.

An overview of the architecture is available at http://www.hp.com/go/security.

Thank you,
James Kniskern
International Cryptography Framework Product Manager


You're quite right. I hadn't meant to confuse HP's ICF with schemes for key escrow. As you said in your letter, the issue ICF helps address is exportability. I linked key escrow requirements with the issue of exportability in the wrong way in my article.


The article has been updated for accuracy. --editor


Future Wizard predicts networking 2007

To Rawn Shah:

I've been reading your column, http://www.sunworld.com/swol-04-1997/swol-04-connectivity.html, and would greatly appreciate any comments on what you think will happen in networks over the next five to 10 years.

John Sullivan
Southampton University

OK John, let me put on my far-seeing wizard's hat and think about it...

Five to 10 years is a long time for development in the computer industry, but not in networking. Until this decade the networking industry was fairly straightforward, with little change. In the local area, Ethernet and Token Ring have lived long and prospered; in the wide area, T-1s and T-3s have been around for well over 20 years. With the exception of a few additions like Frame Relay and wireless networking, networks looked much the same through most of the 1980s and early '90s; real change didn't arrive until about 1993.

In the past five years a whole new group of technologies has become common, and it looks like that trend will continue. We're in a transition period, which makes me hesitant to predict the next five to 10 years, but here are some good guesses:

New technologies that will survive: ATM, Fast and Gigabit Ethernet, and more wireless communications.

In five years plastic-based fiber optics will become a reality; this will reduce the cost of network cabling to below that of copper today, while providing OC-1 to OC-3 (52 to 155 Mbps) speeds. As this happens, ATM-like technology will become more commonplace in the local area, while glass optical fiber (what we have now) will be the mainstay of long-haul/wide area networks.

As I speak, AT&T/Lucent has in its labs fiber optic lasers which can transmit up to 25 Gbps. By 2003 we'll probably see this and even higher speeds in use, and by 2007 we may even be seeing 100 to 150 Gbps coming into style.

Phone and data service will run over the same technology: ATM, though they may have separate networks globally. And we can expect to see a third network emerging: the digital television net.

Modems will still be around, but as museum pieces. Digital communications/modems of the DSL sort may become more commonplace. I predict that ADSL (6 Mbps) becomes common by 2000 and VDSL (52 Mbps) by 2005. 2005 will also be the year that all broadcast television signals are supplanted by digital transmissions. VDSL will provide the speed.

And, just to toss in a wildcard, expect a new digital technology to surface sometime in that time frame... Yes, I'm being vague.

Hope my vision helps,


Pursuing Unix and Windows NT coexistence

To Dave Herdman:

FYI, the best way to share files between Unix and NT is a Network Appliance file server. It can serve the same file system via both NFS and CIFS.

The best X display for NT is Graphon's Go Global. It's very fast, even over a phone line. This is what Sun will use for X emulation on NCs (rewritten in Java).



Thanks for the FYI.

I tend to avoid terms like "best" and "only" as they have fairly short currency in any technical area. On the other hand I appreciate all input as we clearly work in one of the fastest changing areas of the computer industry. I would be interested to know exactly why you think your suggested method is the best way of file sharing between NT & UNIX in terms of functionality, performance and standards compliance.

I am afraid I have been in this business too long to allow anyone to make such a statement without a backup argument...I fully accept that there could be a major new product development that I have missed (and if this is so mea culpa), but I need the technical explanation. I did not mention Graphon's X display server, but I would be interested to hear about its strengths and shortcomings.

In the interests of fair play, I also need to know what affiliations you have to any corporation, as an employee, shareholder, or consultant.

I look forward to your reply.

Dave Herdman

Network Appliance gets the message out

To Dave Herdman:

I just read your article, http://www.sunworld.com/swol-05-1997/swol-05-porting2.html. Very good piece of work!

I won't quibble that you overlooked what Network Appliance is doing in this area (it's our job to get our message out to the world). However, I think you might find it interesting. We have a native implementation of NFS and CIFS that allows us to do things that are difficult for protocol emulators like Samba (e.g., enforce CIFS locks - even for NFS clients).

If you are curious, I wrote a Tech Report that describes the differences between the two protocols, explains our implementation, and contrasts it with other approaches: http://www.netapp.com/technology/level3/3014.html.

Andy Watson
Director, Technical Marketing
Network Appliance


To Adrian Cockcroft:

In one of your presentations you mention setting up DNS to round robin entries to balance the load on multiple servers. I was wondering if you could go into more detail on how this is done? In addition, you mention specifying multiple names for the same address, how is this accomplished?

Marc McKernan


First, you need BIND (the DNS server software) version 4.9.3 or later; you then enter multiple A records for the same name but with different IP addresses. The DNS server returns each IP address in turn when it is consulted.

Try to find the person who maintains your DNS service; s/he should be able to figure it out. If you have a Sun with an older DNS release, there is a set of patches for Solaris 2.5.1 that upgrades to DNS 4.9.3 as part of a security fix.
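As a sketch, round-robin load balancing in a zone file just means repeating the A record under one name, and multiple names for one address are simply additional records. The zone name and addresses below are illustrative, not from any real configuration:

```
; hypothetical zone data: three A records sharing one name (round robin)
www.example.com.    IN  A   192.0.2.10
www.example.com.    IN  A   192.0.2.11
www.example.com.    IN  A   192.0.2.12

; multiple names for the same address: just add records (or use CNAMEs)
ftp.example.com.    IN  A   192.0.2.10
mail.example.com.   IN  CNAME  www.example.com.
```

Each query for www.example.com then gets the address list rotated, spreading clients across the three servers. --editor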


Swap files vs. partitions

To Adrian Cockcroft:

I've been curious for a while about the performance impact of using swap files vs. using partitions. In Brian Wong's Configuration and Capacity Planning book he mentions that the performance impact is less than one percent. If this is the case, the administrative flexibility of being able to resize swap areas on the fly is probably worth the negligible performance degradation. Is there any reason admins shouldn't be tempted to use swap files instead of partitions?

Bill Hathaway


You can remove and resize both files and partitions, but swap files waste some space (up to 20 percent): the inode overhead and the 10 percent minfree reserve of a default filesystem setup can't be used. It's up to you. No big deal either way.
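For reference, a hedged sketch of adding and removing a swap file on the fly under Solaris 2.x (run as root; the path and size here are illustrative):

```
# create a 256-MB file to use as swap space
mkfile 256m /export/swapfile

# add it to the swap pool, then confirm
swap -a /export/swapfile
swap -l

# later, remove it on the fly and reclaim the space
swap -d /export/swapfile
rm /export/swapfile
```

This is the resize-without-repartitioning flexibility the letter describes. --editor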


Swapped out, but inactive

To Adrian Cockcroft:

I have a problem with poor performance on 2 Sun machines, a 1000E and a 2000E. On both machines I see a lot of processes swapped out but nothing on the run queue. What should we look for, and is it related to the shared memory setup?

The vmstat output is as follows:

secsu001 # vmstat -S 5 10
 procs     memory           page                      disk        faults      cpu
 r b w    swap  free si so  pi po fr   de sr s1 s1 s1 s1  in   sy  cs us sy id
 0 0 6    8976  3376  0  0 235 44 63 2396 13  2  3  2  2  95 1865 245  8  5 87
 0 0 10 273928  6260  0  0   0 33 48 1344 11  0  0  0  2  56 1576 182  3  3 94
 0 0 10 273908  6228  0  0   3  1  1 3636  0  0  0  0  0  55 1623 192  3  3 94
 0 0 10 273732  6296  0  0   0  0 51 4484 31  0  0  0  0  56 2039 218  5  6 90
 0 0 10 273788  6360  0  0   0  0  0 2660  0  0  0  0  0  43 1215 148  1  1 98
 0 0 10 273756  6328  0  0   0  3  3 1544  0  0  0  1  0  73 1861 152  3  3 95

I would be very glad if you had the time to respond to this.

Thank you,
Pierre Le Roux
Systems Engineer
Q-Vector South Africa


Use the ps -elf command to look at the processes; the F (flags) field indicates whether the process is in memory (bit 8). If (F & 8) is clear, the process is swapped out (see the ps manpage for more details). You should find that the swapped-out processes are inactive ones that you don't care about; a nonzero w column in vmstat is not a problem indicator.
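To make the bit test concrete, here is a minimal Python sketch; the helper name is my own, and it assumes the F column has already been parsed into an integer:

```python
# Sketch: interpret the F (flags) value from `ps -elf`.
# Bit 8 set means the process image is loaded in memory;
# if that bit is clear, the process has been swapped out.

def is_swapped_out(f_flags: int) -> bool:
    """Return True when the in-memory bit (8) is clear."""
    return (f_flags & 8) == 0
```

A process showing F = 8 (or any value with that bit set) is resident; a flags value with bit 8 clear marks one of the swapped-out, inactive processes described above. --editor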


Reclaiming memory

To Adrian Cockcroft:

I just read your article on free memory, http://www.sunworld.com/swol-05-1997/swol-05-perf.html, and I'd like your opinion about a script I give my students to "reclaim" memory by raising lotsfree and slowscan for a couple of seconds and then putting the values back to the defaults.

This approach means lotsfree is not permanently high (which would cause more continuous scanning by pageout), and it also avoids the problem of free memory eventually settling too low (at lotsfree).

What do you think about running a script like this every 15 to 30 minutes through cron? It could also run conditionally only if the load was adequate.

David Rubio


Reclaims are much slower than a single memory read. The problem with your script is that it forces pageouts that are unnecessary and adds to the load. It may be harmless on a system that has an excess of RAM, but on a system that has just enough it would be very bad, and in either case it should not be necessary.

If you are about to start a very big job and you want it to complete quickly, use the script to make space in RAM in advance. That's the only use I can see for it.


Solaris 2.5 paging algorithm

To Adrian Cockcroft:

I've read your book (Sun Performance and Tuning), or some chapters of it, and I think it's really interesting and useful for those of us who work in performance and tuning. I've also been reading your column.

I'm working now on the performance of Solaris 2.5, and I'm very interested in the paging algorithm. Changes were made to this algorithm, but they are not documented in your book or in the answer book.

Where can I find more information?

Thanks in advance,
Mariela J. Curiel


I discuss changes to the algorithm in my May 1997 column entitled "The memory go round" http://www.sunworld.com/swol-05-1997/swol-05-perf.html.


Sizing a Sun box

To Adrian Cockcroft:

I've been searching the Sun site for information on sizing a Sun box for a database application. Your articles come up often, so I thought I might run this by you.

We have transaction rate information, database table sizes, and know roughly what the application does(!) -- Any suggestions on how we turn that into a recommended Sun box? Also, do you have any benchmark information that might be relevant?

Thanks in advance,
Chris Needham
Consultant, CSC Australia


You need Brian Wong's book, Configuration and Capacity Planning for Solaris Servers. It has a whole chapter on how to do this. I reviewed it in April's column, http://www.sunworld.com/swol-04-1997/swol-04-perf.html.


Memory intensive behavior

To Adrian Cockcroft:

We run an application which performs large image data reads (four megabytes) for one-gigabyte+ file sizes. We find that the first read pass is much faster compared to successive reads of the same file (40+ megabytes per second dropping to about 12 megabytes per second after a couple of reads). Is there a way to invalidate or purge the cached data for successive reads?

Ed Soltis


Check out my column on memory intensive behavior, http://www.sunworld.com/swol-05-1997/swol-05-perf.html.

Increasing lotsfree may help your workload. You should also be able to use direct unbuffered I/O with VxFS, which is far better at big files. Sun will soon offer the Veritas VxFS filesystem as a price-list option for this kind of workload.


Unix Enterprise

To Harris Kern and Randy Johnson:

My partner and I are enjoying your book and are interested in the auto sysconfig script, which is floating around somewhere at Sun (we both work for Sun in Tampa, FL.), but we can't find anyone who recognizes it. Can you tell us more?


Steven Whitaker and Steve Sobecki
System Support Engineers
Sun Microsystems, Inc.

Hi Steve,

I guess you're reading our second book, Managing the New Enterprise. The autosysconfig script was my favorite script of all, but unfortunately I have not kept in touch with anyone in IR at Sun. The key players involved with that script are long gone, but its inventor, Andrew Law, is a personal friend (and coauthored the book you're reading). I'll ask him to drop you a line.


To Harris Kern and Randy Johnson:

I was very happy to find your columns in SunWorld and to learn of your book which I have ordered from Amazon.com. I'm an old mainframe guy, and find myself shaking my head at the practices I see in Unix shops running corporate mission critical applications. The Unix guys smile warmly, telling me I just don't understand the Unix world, where mainframe systems management stuff doesn't apply. But I don't agree; I feel they are putting the business that pays their salaries unnecessarily at risk for the sake of the false concept that Unix data processing is somehow different than all other kinds. So I'm very happy to see someone writing about production systems management, and the process of putting an application into "production" status.

Tom Schweich

Call for reader responses

Anyone out there?

To the editors:

It's so easy to connect to an ISP via modem using Windows NT, 95 etc., and so DIFFICULT using Solaris. I am unable to get support from Sun. Being an individual, I am unable to pay the service fee they request. I was successful connecting to my shell account using free software called SEYON, but am unable to connect to the Internet with it because I need to somehow register my host on my ISP's network, but do not know how. I bought Solstice PPP 3.0.1 hoping to be able to configure it to connect, and I'm able to dial out, but unable to configure it so that it provides my menu selection, ID, and password.

Can ANYONE out there help me PLEASE???

Igor Maranslicht

Looking for a few good mailing lists

To the editors:

I'm looking for some good mailing lists to subscribe to for Solaris 2.x administration. My main uses for Solaris are mechanical engineering and database applications.

John Welty
Manager of Corporate MIS/IT
Met-Coil Systems, Inc.

Any suggestions for Igor or John? --editor

Send letters to sweditors@sunworld.com

(c) Copyright 1997 Web Publishing Inc., an IDG Communications company

If you have technical problems with this magazine, contact webmaster@sunworld.com

URL: http://www.sunworld.com/swol-06-1997/swol-06-letters.html