Letters to the Editor

Setting the record straight on NT networks

Readers object to Career Advisor Edgar Saadi's unqualified appraisal of the NT network. Plus: Security columnist Peter Galvin runs across a real-world kink in Web server implementation; Adrian Cockcroft trades CPU stats with an AIX aficionado; and more

July 1998

Career Advisor, by Edgar Saadi

http://www.sunworld.com/swol-06-1998/swol-06-career.html



In last month's reply to the question from "Unixhead" about which is a better career path, NT or Unix, you said that an NT-based network requires less staff than its Unix equivalent. Do you have any hard data to support that statement?

It seems to me that a stable Sun install doesn't need much attention, just feed it tapes for backups. I agree that a simple NT install doesn't require much expertise (can you run setup?), but I've heard and read otherwise about large, complex, multi-server NT networks.

David Strom

NT network -- lo-tech, but run by an army of grunts


Though I agree with you that an NT enterprise doesn't require as much technical know-how as a Unix enterprise to administer, I've seen the opposite in terms of the number of people needed to run an NT network.

Most administration on an NT network is menial (e.g., rebooting and manual updates), so you end up with a small army of grunts and a few not-so-skilled people at the top.

Unix networks can also be managed with only a small number of people, though they have to be somewhat skilled to get maximum leverage out of the system (i.e., they must be able to write scripts and automate much of their work).

Robert Berger

Mark Mangan, who works with Career Advisor columnist Edgar Saadi, responds.

David and Robert,

With regard to WANs, you are correct: Unix has been around for nearly 20 years and is more effective than NT for managing large enterprises -- particularly when any of the administration must be done remotely. It's largely for this reason that Unix continues to dominate. As Collective Technologies CTO Jeff Tyler put it in a recent e-mail exchange:

"It's a learning curve thing. Unix has a steeper one because there are so many ways to skin the cat and it's not 'friendly' to the novice. NT administration is primarily GUI-based (Microsoft has almost as strong a GUI fetish as Apple had about the Mac) and there are far fewer ways to skin any given cat under NT.

Unix is a more painful environment to learn (it forces one to learn how things work, not just how to work things), but is far richer in terms of tools. Once mastered, one can do damn near anything. [If] NT is a house ready to move into, Unix is like building your own."

When the network in question is a LAN, however, it's generally easier to administer if it's made up of NT boxes. Jeff had this to say about the multiple-server NT net:

"Keep it simple and things are pretty good; try to do too many complex things at multiple locations on multiple boxes and it gets a bit more interesting. This is an area where NT will improve because it has to if it's really going to succeed in the enterprise and I believe that Redmond knows this and is actively working on it."

Thanks for your comments.

Mark Mangan

Security Q&A with Peter Galvin

"Web server wiles '98," parts 1 and 2

http://www.sunworld.com/swol-05-1998/swol-05-security.html
http://www.sunworld.com/swol-06-1998/swol-06-security.html


This is the best coverage of real-world Web server implementation I have ever read. I was just beginning to think about delving into a chrooted environment when I heard that some Web hosting companies now offer individual virtual server environments, complete with most of the regular goodies.

One problem that has been bugging me, however, is that the chrooted environment requires all of my virtual hosts' DocumentRoot directories to live underneath the chroot directory. Protecting those directories is my highest concern, as many of these people use clients that hard-code world-writable modes on uploaded files. Cronned scripts and cfengine help solve the problem, but some people insist that the space is their property and that they don't need any mode twiddling on my part. It would make me feel better to put these files in a padded cell, but that seems difficult to do without separating them from the server tree itself.

Anyway, I thought maybe you could give me the "but of course" perspective that's eluding me at the moment.

Rick Robino


Unfortunately, I can't think of an easy way around this. The best I can come up with, for static Web images only, is to write them to CD-recordable, then mount that under the chrooted directory on your Web server.

Peter Galvin

http://www.sunworld.com/common/swol-backissues-columns.html#security

Customizing security in Solaris 2.5.1 and CDE 1.0.2


I have two questions regarding Solaris 2.5.1 and CDE 1.0.2:

  1. I created a loginlog file as follows, per the System Admin Guide:

    	touch /var/adm/loginlog 
    	chmod 600 /var/adm/loginlog 
    	chgrp sys /var/adm/loginlog

    The file seems to keep track of remote login failures, but regular login failures through the CDE aren't being tracked. How can I redirect CDE login failures to the loginlog file?

  2. I would like to implement a security procedure on my Sun 5000 server such that a user account freezes after five failed login attempts in a row. How would I set this up?

Jeff Tkacheff


According to the login(1) manual page, regular login failures are logged via syslog at the LOG_CRIT level. You can modify /etc/syslog.conf to redirect messages of that level to the loginlog file.
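As a sketch (assuming, per login(1), that the failures arrive on the auth facility), the /etc/syslog.conf entry would look something like this; note that the two fields must be separated by a tab, not spaces:

```
# /etc/syslog.conf -- route critical auth messages (failed logins)
# to the loginlog file; the separator must be a TAB
auth.crit	/var/adm/loginlog
```

After editing the file, send syslogd a HUP signal so it rereads its configuration.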

As for your security procedure, I believe the ASET tool that comes with Solaris provides some customization possibilities. Your other option is to move to Solaris 2.6, where you'd have access to PAM (pluggable authentication module) functionality. If you're interested in reading about PAM and other security features in Solaris 2.6, check out my December '97 column.

Peter Galvin

Performance Q&A: "How busy is the CPU, really?" by Adrian Cockcroft

http://www.sunworld.com/swol-04-1998/swol-04-perf.html
http://www.sunworld.com/swol-06-1998/swol-06-perf.html

AIX, more accurate for CPU measurement?


In your recent discussion of under-reported CPU, you mention that other Unix systems exhibit the same problem. I'm new to Solaris, but it has one characteristic that seems to increase under-reporting: threads that wait with a timeout (say, via usleep(), poll(), or pthread_cond_timedwait()) receive control just after the kernel's 10-millisecond (ms) clock interrupt, and so may no longer be running when the next 10-ms interrupt occurs. On AIX, application timeouts can be triggered by clock interrupts other than the kernel's 10-ms timer. This means the timeouts for a mix of waiting threads are spread out a little more relative to the 10-ms clock interrupt, so the threads are more likely to be charged with CPU.

If I could make Solaris behave like AIX as far as the granularity of the timeouts goes, I'd be happier. On AIX, over-reporting is occasionally a problem. An application may have a 20-ms polling interval and do very little work. Timeouts may drift slightly over time relative to the 10-ms timer. You might see 0-percent CPU for several hours, then a switch to 50-percent CPU for several hours when the application's timeout occurs just prior to the 10-ms timer. Setting the timeout to 19 ms or 23 ms helps.

John Garate


I'm not clear on how AIX does this. Does it have two separate 10-ms timers, or one for the clock and a different (1-ms?) resolution for timeouts?

Solaris 2.6 can be set to have a 1-ms clock interrupt, but microstate accounting does provide accurate data, so it's just a matter of using that facility if you need accurate measurements.

Solaris is closest to generic Unix in this case. The underpinnings of AIX are quite unlike most other Unixes. Also, AIX is tied to hardware features; Solaris and generic Unix do not assume that spare high-resolution timers and clocks are available.

Adrian Cockcroft

John Garate replies:


AIX provides a kernel service, tstart(), for specifying a timeout value and a routine to be called at interrupt level (AIX 3.2.5). tstart() maintains an ordered list of timeouts and manipulates a special time-decrementer (nanosecs) register. Included in this list of timeouts would be the kernel 10-ms time-slicing timer. When the hardware decrements the register to zero, a clock-interrupt occurs, the handler removes the expired items from the head of the list, and the next timeout value is placed in the decrementer register. The routines associated with the expired timeouts are called, and would typically awaken processes for execution.

However it works, AIX somehow enables values other than the every-10-ms-timeout value to be placed in the special timer-decrementer register. This allows timeouts to occur with more flexibility. I've looked at many AIX traces and seen many clock-interrupts other than those for the kernel 10-ms slicer.

There are routines for delaying N ticks, so I guess some timeouts can be tied to the 10-ms ticker.

John Garate

Adrian's response:


There was some discussion of whether to implement this in Solaris to help realtime, but they decided that increasing the tick rate was sufficient for what the users wanted and far easier to implement.

I still don't see how you get the 50-percent or 0-percent measurement effects you mentioned before.

Adrian Cockcroft

John's reply


Suppose my thread has a loop, calling poll() with a 20-ms timeout. Imagine no other events wake up the poll() call. Since AIX allows the timeout to occur at times other than the time-slicer timer, my thread's first wakeup will be some time D after one 10-ms pop and (10 ms - D) before the next. Let's say my thread runs quickly and finishes before the next 10-ms pop; it won't get charged any CPU. The same will happen 20 ms later when it awakens again. If nothing else is running, vmstat would show 0 CPU. Now, since my thread does take a little bit of time to do its thing between poll() calls, but always provides a 20-ms timeout value to poll(), the actual timeouts of the call will be 20 ms plus that little bit of time apart.

Eventually my thread's timeouts may drift to fall just before a 10-ms pop for a little while, so my thread gets charged CPU on every one of its 20-ms timeouts, and vmstat would show 50 percent utilization. The slight drift continues until the cycles are out of phase again, and vmstat shows 0 CPU. Depending on the drift rate, the 50 percent utilization could last a long time (and 0 percent would last even longer).

Here are two cycles. If y occurs just before x, the CPU line shows a C, meaning y is charged CPU.

     x    x    x    x    x    x    x    x    x    x    x    x    x    x
     y     y     y     y     y     y     y     y     y     y     y     y

CPU: C    -    -    -    -    C    C    -    -    -    -    C    C    -

I may not be explaining it well, but I've seen this a couple of times over the years.

John Garate

Inside Solaris Q&A with Jim Mauro

http://www.sunworld.com/common/swol-backissues-columns.html#insidesolaris

Mapping virtual objects into shared memory


Where can I look to find out how to put C++ objects with virtual functions into shared memory? The problem I've encountered is that the vptr is in the shared memory, but it needs to point to different locations for each process.

Is there a way to guarantee that the vtables are at the same location for each process?

Tom Hood


You may be able to solve your problem with intimate shared memory (ISM). Read my September 1997 Inside Solaris column for specifics on what ISM is -- basically it provides for the sharing of the translation table entries (TTEs) that are involved when mapping virtual addresses to physical addresses. When processes map to the same shared segment using ISM (which requires setting the SHM_SHARE_MMU flag in the shmat(2) call), the mapping has the same virtual address for all the processes.

While I'm not a C++ expert, I'm familiar with the traditional Unix/C development environment. Would it not be easier to simply create a shared object library of your C++ functions and dynamically link them into your code? This way, you could simply invoke your C++ methods from code, and let the runtime linker worry about the mappings. As a shared object library, it wouldn't be replicated for all the processes running.

The Solaris Linkers and Libraries Manual documents how to create shared object libraries.

Jim Mauro



(c) Copyright Web Publishing Inc., an IDG Communications company

If you have technical problems with this magazine, contact webmaster@sunworld.com

URL: http://www.sunworld.com/swol-07-1998/swol-07-letters.html