Letters to the editor -- SunWorld, August 1996

Letters to the editor

August 1996

Performance Q&A questions and answers

Editor's note: See Adrian Cockcroft's frequently asked questions for more answers to reader questions.


We have a SPARCstation 20 with dual Ross hyperSPARC 125-MHz CPU modules, 256 MB of memory, and 400 MB of swap.

The server is used as a mailhost for incoming mail and POP3 processes. The mail spool is part of each user's home directory (mail currently comes in for about 34,000 users), which is mounted via NFS v3 from a Network Appliance FA330 (10-Mbit switched Ethernet, 100 Mbit from the NFS server).

The problem is that mpstat output indicates both CPUs are pegged, with idle time between 0 and 10 percent. Here is output from the Adrian rules during the problem phases today:

Adrian detected CPU overload (red): Mon Jul 29 14:12:23 1996
CPU overload, add more power or quit some programs
  procs        memory         page             faults              cpu
 r  b  w    swap    free    pi  po  sr   in    sy    cs  smtx us sy wt id
28  0  0  353316  124168    67   0   0 1981  3005  1166   285 41 58  0  1

I cannot figure out what in the world is chewing up both CPUs. The number of running processes isn't that high either: ~100 sendmails and ~50 ipop3d processes. During these times the load average is above 30 and the system is rather slow to respond. :(

I checked disk utilization with iostat, but the svc_t values are all below 20 ms, and the single disk is not overloaded. `ps -ef` doesn't indicate any outrageous processes either (unless I am interpreting it wrong).

I have seen other servers with similar configurations running with lots of idle time, low load averages, and just as many processes, yet they are very responsive. I cannot for the life of me figure out what is going on here. :(

I am including some statistics below for your reference. Please let me know if you would like any further statistics. I would greatly appreciate any help. Thanks for your time, patience, cooperation, and prompt reply.

(inetd also reports 'Protocol Error' in syslog for some reason.)

--Mohan, (firm indeterminate)

  • Adrian Cockcroft responds:

    You are definitely out of CPU power; nothing else looks like a problem.

    The first thing to look at is which processes are burning up the CPU. This is actually the subject of my next SWOL column, out on Thursday.

    You should turn on system accounting, then look at the daily summary to see where the load is coming from. There is no extra CPU load; accounting just writes a 40-byte record to /var/adm/pacct for each process that exits.

    Use acctcom -b to watch it in action, and add the normal summary cron jobs to the "adm" crontab. Then run /etc/init.d/acct start (and hard-link it to /etc/rc2.d/S22acct so accounting starts at boot).

    0 * * * * /usr/lib/acct/ckpacct
    30 2 * * * /usr/lib/acct/runacct 2> /var/adm/acct/nite/fd2log
    30 9 * * 5 /usr/lib/acct/monacct
    # acctcom -b | more
    COMMAND                           START    END          REAL     CPU    MEAN
    NAME       USER     TTYNAME       TIME     TIME       (SECS)  (SECS) SIZE(K)
    #sh        sys       ?            11:20:00 11:20:00     0.17    0.04 1246.00
    sadc       sys       ?            11:20:00 11:20:00     0.06    0.03  629.33
    date       sys       ?            11:20:00 11:20:00     0.01    0.01  640.00
    ls         root      pts/6        11:19:56 11:19:56     0.01    0.01  648.00
    ls         root      pts/6        11:19:30 11:19:30     0.02    0.02  532.00
    crontab    root      pts/6        11:18:33 11:18:33     0.08    0.02  896.00
    acctcom    adrianc   pts/6        11:17:32 11:17:37     5.30    0.29  747.59
    % more /var/adm/acct/fiscal/fiscrpt07
    Jul 26 09:30 1996  TOTAL COMMAND SUMMARY FOR FISCAL 07 Page 1

                                 TOTAL COMMAND SUMMARY
    COMMAND   NUMBER     TOTAL     TOTAL      TOTAL    MEAN    MEAN     HOG       CHARS  BLOCKS
    NAME        CMDS  KCOREMIN   CPU-MIN   REAL-MIN  SIZE-K CPU-MIN  FACTOR      TRNSFD    READ
    TOTALS     13458  45996.08     15.23  288963.68 3020.73    0.00    0.00  2367786624   41330
    netscape       2  35396.95      4.05   24619.01 8733.88    2.03    0.00  1585184736   13742
    axmain         7   2093.13      0.21   14487.77 9912.20    0.03    0.00    10415114     145
    sh          3181   2029.41      1.72    4983.22 1182.41    0.00    0.00     2660742      28
    zcat           2    573.01      0.63       1.80  903.81    0.32    0.35   162463744       0

    See what it tells you. Maybe compare with another system.
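    Before the daily summaries accumulate, a quick ad-hoc ranking can be pulled from acctcom output by totaling the CPU column with awk. This is only a sketch: the sample lines and the field position below match the acctcom -b listing above, and may need adjusting for other acctcom flags.

```shell
# Sum CPU seconds per command name from acctcom-style output.
# Sample lines copied from the acctcom -b listing; field 7 is CPU (secs).
acctcom_sample='sadc       sys       ?            11:20:00 11:20:00     0.06    0.03  629.33
date       sys       ?            11:20:00 11:20:00     0.01    0.01  640.00
acctcom    adrianc   pts/6        11:17:32 11:17:37     5.30    0.29  747.59'

echo "$acctcom_sample" |
  awk 'NF >= 7 { cpu[$1] += $7 }
       END { for (c in cpu) printf "%-10s %6.2f\n", c, cpu[c] }' |
  sort -k2 -rn
```

    On a live system you would feed acctcom -b itself into the awk stage instead of the sample text.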


    Hello, I just looked at your excerpts, looking for the magic incantation to turn off swap space usage. I haven't set it for years and can't remember what the entry is.

    The last time I did this it was for an SC1000 with 256 MB; this time it is for an UltraServer 5000 with 1 GB. :)

    I searched the AnswerBooks, FAQs, and newsgroups, but found nothing yet. Do you recall? BTW, do you have a list of all the /etc/system keywords/variables that can be set?


    --Scott Hinnrichs, (firm indeterminate)

  • Adrian Cockcroft responds:

    # swap -l
    swapfile             dev  swaplo blocks   free
    /dev/dsk/c0t0d0s1   32,1      16 1638544 1222000
    # swap -d /dev/dsk/c0t0d0s1

    Bingo! No swap space...

    You may be thinking of the SunOS 4 parameters; I can't remember them either, but there is no need for them in Solaris 2. Just delete the swap spaces with swap -d (it takes a while to clean out the data) and you are done.
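    To see how much swap a swap -l listing represents, remember that the blocks column is in 512-byte units, so dividing by 2048 gives megabytes. A small sketch, using the listing above as sample input:

```shell
# Convert swap -l "blocks" (512-byte units) into megabyte totals.
swap_l='/dev/dsk/c0t0d0s1   32,1      16 1638544 1222000'

echo "$swap_l" |
  awk '{ blocks += $4; free += $5 }
       END { printf "%.0f MB configured, %.0f MB free\n", blocks/2048, free/2048 }'
```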


    I've just started reading (and really enjoying) your Performance Q&A. In your articles, you regularly mention "walking the kernel tables" to get statistics on the private size of processes.

    Is there a tool available, Adrian, to display this, or is it something we have to develop on our own? It seems that if you could provide it to the rest of us, it would save us a lot of time and candle. :-)

    --Tri Phan, Network Designs Corp.

  • Adrian Cockcroft responds:

    There is no tool, and no way to build one. We added prototype code to the kernel that collects new data, but the code breaks with each new OS release and some patch releases, so it needs to be adopted by SunSoft and it is too dangerous to give away (unless you like kernel panics).

    The SunSoft engineer responsible likes the code, and next time /proc is worked on (probably not the next OS release) it may be added.


    I have already put the questions below to our local Sun support group. However, up to now I have not received a complete answer. Perhaps you can help.

    Sorry for the length. Just jump to the summary to see the basic questions we have.

    1. Background

      We have developed a client/server application on Sun Solaris 2.4. The frontend requires a large amount of memory, as it is caching a large amount of data. In order to calculate how much virtual memory (physical memory + swap space) we require, we have used Proctool and other tools to measure the application's memory usage. The other question we need to answer is how much virtual memory is consumed by the kernel itself.

    2. Measuring Process Memory Usage

      We are running multiple copies of the same process, and all processes are using the same shared libraries. To measure the virtual memory used by the first process we noted the virtual memory available, as reported by Proctool, before starting the process. We started the process and noted the new virtual memory available. We took the difference between the two values as the amount of virtual memory consumed by the process. Looking at the memory map for this process, using Proctool, seemed to confirm the process total size.

      We then started a second copy of the process. The decrease in virtual memory was less this time. This was expected as some of the code will be shared with the first one.

      We believe that these estimates are probably too large. This is because on a production system, when physical memory is short, some of the code will not occupy any virtual memory, since it can be paged back in from the executable files/libraries on disk. The code maintained in memory will be reduced to the process "working set," that is, the more frequently referenced pages. If we could anticipate a reasonable working-set size, this could be built into our memory calculations. A rule of thumb of 20 percent of the code has been suggested.

      To determine the amount of memory consumed by dynamic data, we used the increase in process heap size after data was loaded.

      We are confident regarding predictions for memory consumption by data. However, the base consumption by executable code and static data seems harder to estimate. Do you think the approach is sound, or are there any alternatives you can suggest? One worry is what is really meant by available virtual memory; this is discussed as part of the next section.

    3. Estimating Kernel Memory Consumption

      Our first stab at this was to look in the boot record to determine how much memory is top sliced for kernel code and static data. Then we used sar -k to determine the amount of kernel dynamic data.

      Here are some results for a 96 MB SPARCstation 20 running 2.4:

      Jul  9 17:53:13 sun15 unix: mem = 98304K (0x6000000)
      Jul  9 17:53:13 sun15 unix: avail mem = 86564864
      12:02:44 sml_mem   alloc  fail  lg_mem   alloc  fail  ovsz_alloc  fail
      12:02:49 1191936 1051504     0 3858432 3705792     0     3133440     0
      12:02:54 1191936 1051504     0 3858432 3705792     0     3133440     0
      12:02:59 1191936 1051504     0 3858432 3705792     0     3133440     0
      Average  1191936 1051504     0 3858432 3705792     0     3133440     0
      Thus top slice = 13.4 MB
           dynamic = 7.8 MB
      We had been told that swap -s (used + available) is the total virtual memory available to processes.

      This is: total: 39252k bytes allocated + 19372k reserved = 58624k used, 398396k available

      Thus the total virtual memory available = 446 MB

      The swap space as shown by swap -l is = 391 MB

      swapfile             dev  swaplo blocks   free
      /dev/dsk/c0t3d0s1   32,25      8 392152 391160
      /opt/swapfile1        -        8 409592 407312

      Thus the total physical memory + swap space is 487 MB. If we take out the memory allocated to the kernel, this gives us 455.8 MB. This is about 10 MB more than the figure reported by swap -s. This leads us to a number of questions.

      • Is it really true that swap -s reports all available virtual memory? It seems a bit short.

      • If (a) is true, where has the missing memory gone? Has it been taken by the kernel, perhaps to allocate space to loadable modules?

      • In any case, the kernel's dynamic allocation can grow and shrink, so it is probably not correct to assume it is already included in the virtual memory not reported by swap -s; it will add to or subtract from the available virtual memory. Is part preallocated and part dynamically allocated?

      • On a large system with 512 MB of physical memory, and 256 MB of swap, both vmstat and Proctool were reporting more available physical memory than virtual memory. If available virtual memory is supposed to be the sum of available physical memory and swap space, how can this be true?

      • The fundamental question from which all these questions arise is, how do we calculate the amount of memory taken by the kernel? Then we can determine how much space is left for user/system processes.
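      The swap -s arithmetic above can be checked mechanically. This sketch parses the exact line quoted earlier into megabyte totals (the field positions assume that output format):

```shell
# Parse a swap -s summary line into MB figures.
line='total: 39252k bytes allocated + 19372k reserved = 58624k used, 398396k available'

echo "$line" | awk '{
  used = $(NF-3); avail = $(NF-1)
  sub(/k/, "", used); sub(/k/, "", avail)   # strip the "k" suffix
  printf "total VM %.0f MB (used %.0f MB, available %.0f MB)\n",
         (used + avail)/1024, used/1024, avail/1024 }'
```

      This reproduces the 446-MB total quoted above.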

    4. Summary

      This rather long question can be summarized as follows.

      • What is the best way to measure process virtual memory consumption?

      • How do we calculate how much memory will be consumed by the kernel? We would prefer to be able to do this before buying a system or upgrading memory, rather than empirically.

    --Paul Piercy, Rolfe & Nolan PLC

  • Adrian Cockcroft responds:

    Your questions are based on some shaky assumptions; that's why things don't add up.

    I think I've covered what you need to know in my online articles. Take a closer look at my earlier columns:

    The key thing you seem to be missing is that RAM is not only used by process and kernel address spaces; RAM also holds data that is not part of any address space, the filesystem cache for example. Also, data in /tmp occupies swap space, and the amount of RAM that contributes to the swap space is not easy to calculate; it depends on many other things.

    You do seem to be heading in the right direction, and I think you have enough data for the sizing you need to do. Don't worry about making it all add up exactly.
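    As a rough cross-check on the per-process estimates, the sizes that ps reports for all copies of a command can be summed, bearing in mind that shared pages are counted once per process, so the total overstates real RAM use. A sketch only; the command name is illustrative and the -o keywords may differ between ps versions:

```shell
# Total virtual and resident sizes (KB) for every copy of one command.
target=sendmail   # hypothetical command name -- substitute your own

ps -eo comm= -o vsz= -o rss= |
  awk -v c="$target" '
    $1 == c { vsz += $2; rss += $3; n++ }
    END { printf "%d processes, %d KB virtual, %d KB resident\n", n, vsz, rss }'
```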


    (Regarding Adrian's July column "How can I optimize my programs for UltraSPARC?":)

    The SPARC V8 architecture manual says:

    Section 1.5, "Conformability to SPARC":

    "An implementation is *not* required to execute every instruction in hardware. An attempt to execute a SPARC instruction that is not implemented in hardware generates a trap. If the unimplemented instruction is nonprivileged, then it must be possible to emulate it in software. [...] Appendix L [...] details which instructions are not in hardware in existing implementations."


    Section G.6, "Instruction Set":

    "The SPARC ABI (Application Binary Interface) is based on the full SPARC instruction set, per this version of the SPARC Architecture Manual (v8). An ABI-compliant system correctly executes any (nonprivileged) SPARC instruction in a user application program. Other than execution speed, it is transparent to the user application program whether instructions are executed in hardware or trapped and emulated in software".

    I have understood this to mean that even if your compiler emits instructions for, say, a V8 implementation, a V7 implementation CPU will still be able to run the same binary (albeit more slowly). And I have verified that this is indeed true.

    However, your recent article in SunWorld Online describes a process for creating an optimized-for-UltraSPARC binary which you claim would not be runnable on non-V9 machines.

    Has the SPARC ABI been changed or broken with the advent of V9? Why are the special V9 instructions not emulated in software, as was the case between V7 and V8? Is there a resolution to these conflicting cases?

    Regards, and thanks for a good article.

    --Nick Gianniotis, (firm indeterminate)

  • Adrian Cockcroft responds:

    I think the standard SPARC ISA is actually the SPARC V8 ISA.

    The OS has enforced this since SunOS 4.1 on all SPARC platforms.

    The extensions are tagged in the ELF header as shown by:

    % file stream*
    stream_dp:      ELF 32-bit MSB executable SPARC Version 1, dynamically linked, not stripped
    stream_dps:     ELF 32-bit MSB executable SPARC32PLUS Version 1, V8+ Required, UltraSPARC1 Extensions Required, dynamically linked, not stripped

    In this way it is clear that binaries don't run if they require certain extensions. It's like requiring a framebuffer that has 24-bit double-buffering and a Z-buffer -- it won't run on a GX.


    Hi. I apologize for the length of this note; it just seemed to grow.

    I'm a systems performance analyst working on a system that includes custom-developed C software applications. The system is being ported from AIX 3.2.5 on an IBM RISC System/6000 to Solaris 2.5.1 on a Pentium Pro. Most of my experience has been in the IBM world, not Sun. I've bought your book (i.e., contributed to your Porsche fund) and am also reading the back issues of your SunWorld Performance Q&A column. I am impressed with your obvious depth and breadth of knowledge in this area.

    Your book's Chapter 2 (Performance Measurement) describes taking system-level measurements with some workload applied. In addition to that type of data collection, I am also interested in collecting 'stand-alone' pathlength measurements -- not 'system under load' but rather individual component paths (pure CPU execution free of contention or I/O delays). These various pathlength timings are used to help calibrate performance prediction models which project system-level CPU loading and response times under various workloads.

    We use the AIX Trace Facility to measure application CPU processing pathlengths, e.g., different processing paths within a process. Let me describe further in case it's not obvious what I mean.

    AIX provides a standard set of kernel and system function trace hooks, plus programmers can define user trace hooks in their code. Trace data collection is run with all or a selected subset of these trace hooks turned on.

    The AIX Trace facility generates a time-ordered log of trace entries. The trace start date/time is provided and each entry includes the elapsed time (in nanoseconds) since the previous event (i.e., hook execution).

    A process's CPU pathlength between a particular start/stop hook-pair can be isolated by excluding any intervening CPU processing due to interrupt-preemption; this can be determined if the set of hooks for kernel interrupt and process dispatching events are also included in the trace. Our project has a home-bred program which post-processes the (voluminous) trace output to report CPU pathlength statistics (e.g., execution counts, min/max/avg values, std dev) for specified start/stop hook pairs.

    It seems to me that the need to collect pathlength measurements is not uncommon. From what I can see so far (please correct me if I'm wrong), it doesn't look like the truss command (trace) generates time-tagged entries. I am hoping you can put me in touch with someone (if not yourself) who's done this sort of pathlength timing analysis in the Solaris environment.

    --Jocelyn Weinberg, Lockheed Martin

  • Adrian Cockcroft responds:

    It sounds as if you really know what you are doing. I'd be interested in any measurements and models you can share. I expect you will find some anomalies that need to be explained along the way, so feel free to drop me email...

    Luckily, since Solaris 2.5 (hence post-book), Solaris 2 has also had a trace capability that should do what you want. It's not that easy to use, but it is essentially the same as the IBM trace system.

    The place to start is the prex(1) manpage; prex controls the probe points that you want to enable.

    The trace facility is called TNF, as the output file is in a format called "Trace Normal Form," a self-describing data structure.

    TNF probes can be placed in user-level code, and will be added to standard system libraries in some cases. The kernel probes are fixed, but there seem to be enough to give good coverage.

    The biggest problem is that each kernel thread is assigned its own trace buffer -- this avoids any locking and contention issues -- and the interface works by allocating a big buffer (the bigger the better) and snapshotting it every now and again. Some kernel threads will have done little; some will have wrapped around the buffer and lost trace points. It is hard to get 100 percent coverage of all probe invocations, so it is better to rely on statistical sampling of the data where possible.
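    Once a snapshot is on disk, tnfdump(1) converts it to text that can be post-processed much as the AIX trace log described in the letter. As an illustration only -- the two-column input below is a hypothetical, already-trimmed extract, not raw tnfdump output -- pairing start/stop probes and computing pathlength statistics takes a few lines of awk:

```shell
# Hypothetical trimmed trace: "<elapsed-millis> <probe-name>" per line.
trace='0.120 path_start
0.480 path_end
1.010 path_start
1.250 path_end'

echo "$trace" | awk '
  $2 == "path_start" { t0 = $1; have = 1 }
  $2 == "path_end" && have {
      d = $1 - t0; n++; sum += d
      if (d > max) max = d
      if (n == 1 || d < min) min = d
      have = 0 }
  END { printf "n=%d min=%.3f max=%.3f avg=%.3f ms\n", n, min, max, sum/n }'
```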

    There is a GUI tool for viewing the trace data, but it has never been approved for external use and general distribution. Our group is working toward that goal, but no promises or timescales...


    Over the past few months I have been monitoring the performance of a series of SPARCstation 20 workstations, each with a single 60-MHz CPU. During this time I have used many of the parameters suggested in your Sun Performance and Tuning book and in the notes in the Performance column on the Sun home page.

    I have a question about the time period to use before declaring some of these parameters out of bounds, or red, in your rules. The specific parameters are: memory scan rate (sr), smoothed round-trip time (srtt), and disk service time (svc_t). I know that you recommend collecting statistics on the sr and svc_t parameters for 30 seconds with vmstat and iostat; when sr > 200 pages/sec or svc_t > 50 ms, your rules declare a red condition.

    My question is: should we wait two, three, or more (how many?) time steps (each 30 seconds long) before declaring a red condition, rather than on the first threshold violation? My experience with the collected data indicates that transients usually last more than one 30-second sample but settle down in two or three. What is your opinion on this? Would you use the same criteria (i.e., several steps of 30 seconds each) for srtt > 50 ms? Thanks,

    --Martin B. Hidalgo, (firm indeterminate)

  • Adrian Cockcroft responds:

    You raise a good point. One effect of waiting for several intervals to "confirm" a problem is that the latency increases. In some cases, such as the zoom/ruletool display, you want immediate warning of a possible problem. In other cases you might want to indicate only serious problems; the other monitoring scripts implicitly do this by using longer time intervals (percollator uses 300 seconds). The idea of virtual_adrian is to be sensitive, insistent, and somewhat annoying; it is a bit like watching the output of sar -A while filtering out all the stuff that is not interesting.

    One of the routines I use to process percollator output looks at the states. It counts the number of each state per day, and displays a bar chart. The amber/red/black sections show up well.

    I'm thinking about extending the rules. Based on your suggestion, I might add counters to each rule to track the number of times each state occurred and the run length in the current state. The extra overhead would be quite low.

    The run-length could then be checked by scripts that use the rules, and delayed warnings would de-bounce transients in the amber and red states. The black state normally needs immediate action.
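    The de-bouncing itself is easy to prototype outside the rules: track the run length of the current state and promote a warning only after several consecutive samples. A sketch with illustrative state names and a threshold of three:

```shell
# Report red only after 3 consecutive red samples (de-bounced warning).
samples='green amber red red green red red red red green'

echo "$samples" | tr ' ' '\n' | awk '
  { run = ($1 == prev) ? run + 1 : 1; prev = $1
    if ($1 == "red" && run >= 3)
      print "sample " NR ": confirmed red (run length " run ")" }'
```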

    Another approach would be to set the update rate for each rule separately. Right now the rules only get evaluated when the timestamp changes, this should be a controllable interval, not one second.

    I'm not sure when I'll finally get around to working on this. If you feel like adding code to the tools (virtual_martin ?) to do this then please let me know how well it works.


    Hi. Firstly, fan mail: the Porsche book is a great read, and your ongoing articles, tips, etc. are practical, down to earth, and as a result helpful. (But I'm sure I'm not alone in thinking this...)

    Anyway, I've been using various tools (including zoom and virtual_adrian from the SE kit) to look at the load on our mail server. (BTW, I'm *not* asking you to solve my problem; I'm providing the following as background in case it's relevant to my question.)

    It's an SS20/712 with 128 MB of memory running Solaris 2.5. It's in a NIS+ domain with around 12,000 users registered, although only 2,000 to 6,000 are actually using this machine (probably under 1,000 regularly). Access is mainly via POP for reading mail, although I do export /var/mail to a couple of general-usage machines for mail access via NFS. Incoming mail arrives via SMTP (with sendmail 8.7.5). We use aliases a lot via a local Berkeley DB file, and also run some large (500-user) mailing lists with a reasonable volume. Incoming messages number around 5,000 a day.

    The filesystem layout is

    Filesystem            kbytes    used   avail capacity  Mounted on
    /dev/dsk/c0t3d0s0      19047   10307    6840    61%    /
    /dev/dsk/c0t3d0s6     240055  169116   46939    79%    /usr
    /dev/dsk/c0t3d0s3     586847  351659  176508    67%    /var
    /dev/dsk/c0t3d0s5     336863  116836  186347    39%    /opt
    /dev/dsk/c0t3d0s4     644007  165816  413791    29%    /var/spool
    /dev/dsk/c0t1d0s7    2042861  971578  867003    53%    /var/mail
    /dev/dsk/c0t2d0s7    2042861  663291 1175290    37%    /spare
           0. c0t1d0 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
           1. c0t2d0 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
           2. c0t3d0 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
    corinna:/var/log# swap -l
    swapfile             dev  swaplo blocks   free
    /dev/dsk/c0t3d0s1   32,25      8 262952 235424
    /var/swap/swap00      -        8 204792 177360

    Overall the tools say the machine is coping well with the load, and the users are not complaining yet either. However, monlog spends a fair amount of time complaining about the disks being in state amber or red, and suggests moving load from busy disks to idle disks. Now, /spare is intended for that (at the moment it is completely idle). The thing that has me confused is that zoom says c0t3d0 is the really busy disk, not /var/mail. I was going to stripe /var/mail across t1 and t2, but this won't fix the problem reported by zoom.

    The question is: are there any tools for identifying disk activity down to the filesystem level, rather than just to the disk?

    I could just randomly assign a filesystem to another disk and see if the load follows this, but this is a production machine and I would like to keep downtime to a minimum. The other shotgun approach would be to stripe everything other than /, /usr and /opt across a couple of disks and hope that things even out, but I'm not happy with this either.

    I'm not sure if it is logging into /var/log/syslog, swap (unlikely), paging in of binaries (also unlikely I think), or /var/spool/mqueue (IMHO most likely) that is the problem.

    Any ideas, or pointers to tools, docs, etc. would be appreciated,

    --Michael Henry, IT Services (Systems), University of Tasmania, Australia

  • Adrian Cockcroft responds:

    We understand the problem, and in Solaris 2.6 there should be per-partition disk stats available from iostat. In the meantime your guess that /var/spool is the culprit looks likely. My general recommendation for mail servers is that you need to have a way to decouple synchronous writes. This can be done using a PrestoServe NVSIMM or SBus card, using a SPARCstorage array with NVRAM fast writes, or by configuring a log based filesystem with either Veritas VxFS or DiskSuite's UFS metatrans.

    If you can beg, borrow, or steal an NVSIMM for long enough to see if it fixes the problem, that's the most efficient solution. It will also speed up NFS and POP mail users who hit the "save changes" button and rewrite their mail files.

    Finally, watch for the latest (rev -05?) kernel patch for 2.5; there is a UFS flushing bug that can cause filesystem corruption on reboot/crash. The workaround is to call sync from cron every few minutes.

    Client/server column feedback


    In our distributed computing projects, I find that RPC is not the lowest common denominator; in fact it adds an extra layer of fat to plain old TCP/socket communication. I think it is much easier to program with sockets. Using RPC is like using a fancy electric kitchen contraption to peel one carrot -- it is much easier to use an 89-cent hand-held peeler.

    I enjoyed the article anyway.

    --David Boosalis, (firm indeterminate)

    Connectivity column feedback


    While reading your article about the real meaning of the intranet, I noticed you mentioned "proprietary systems" and specifically the AS/400. Now, wouldn't that term apply to all the hardware vendors, since the word means "owned, made, and sold by one holding a trademark or patent" (Webster's II, 1994)?

    As you can tell, I am a user of AS/400s, and if the article was in regard to intranet servers, Internet servers, TCP/IP protocol implementations, client/server development, and the suitability of the AS/400 for these environments, I would agree that the current implementation of OS/400 does not make it an ideal choice. But with the recent release of OS/400 V3R2, it seems IBM is trying, really hard, to get in the game.

    Again, it would help me better understand your article if you would further clarify the meaning of the word proprietary within the context of your article.

    Thanks for your attention to this matter.

    --Kelvin J. Arcelay, Fruit of the Loom

  • The authors respond:

    "Owned, made, and sold by one holding a trademark or patent" sounds about right to us.

    Solaris shareware top sites story

    I was referred to this article by a Sun Microsystems systems engineer in hopes of finding out if it's possible to find a printer driver that will connect an HP DeskJet 1200C/PS to a Sun SPARCstation 20 running Solaris OpenWindows.

    I'm working on a project in the Geology Department at William and Mary in VA. Any advice you can offer will be greatly appreciated.

    --Bryant H. Cafferty, (firm indeterminate)

  • Erin O'Neill responds:

    First off, you'll want to check the Sun Managers Archive of Summaries (from the Sun Managers mailing list; you don't need to be on the mailing list to search and browse the archives). It's at: http://aurora.latech.edu/sunman-search.html

    You'll want to add a few words to search on. I searched on "HP printer driver" and found a couple of summaries with HP 1200C/PS printer questions and solutions. I'm enclosing one of the summaries that seemed pretty good. You may need to contact HP if none of these works for you.

    Subject: SUMMARY: HP DeskJet 1200CPS
    From: quest!franc@greece.sun.com (Franc Foskolos)
    Date: Fri, 21 Jul 1995 08:34:24 +0300

    Original question was:
    >Hi all,
    >I just got a HP DeskJet 1200CPS (color postscript inkjet) with tcp/ip capability.
    >At first I have no drivers for Solaris 2.4.
    >And as second I'd like some tips on how to set it up, on the Solaris 2.4 side
    >Are there any recipes out there for my illness?
    >Thank's a lot
    >Franc Foskolos

    Answers received by and many thanks to:

    From: gregsun.crrel.usace.army.mil!u2is9gef@sunmed.uucp (GREGOR E FELLERS)
    You should have gotten a copy of jetadmin with the printer. That is what
    you need and it will work with all of HP's printers.
    The deskjet printer has a default address of
    you can:  route add host   youripaddress 0
    this will open a connection to the printer that you can use to set
    the ip address and net mask for your domain.
    then use jetadmin to install the printer on your network.

    From: Access.COM!tim@sunmed.uucp (Tim Wort)
    Assuming you have a PS version no drivers (application level) are needed,
    it is just another PostScript printer.  The jetadmin utilities for the TCP/IP connect
    Hope this helps..

    From: cs.rochester.edu!bukys@sunmed.uucp
    Maybe you need to buy J2375A "JetDirect software for SunOS", or maybe
    you can pick it up for free from ftp://ftp-boi.external.hp.com/

    From: Henry Unger <hitech.com!hunger@sunmed.uucp>
    Assuming you have a JetDirect card, you can get the drivers from

    From: Mike Blandford <truman.lanl.gov!mikey@sunmed.uucp>

    From: uniq.com.au!Kevin.Sheehan@sunmed.uucp (Kevin Sheehan {Consulting Poster Child})
    shouldn't need any - as I recall lpadmin knows how to deal with postscript
    1) add printer into /etc/hosts (say as "post_printer")
    2) run lpadmin:
    lpadmin -p ps1 -D "HP 1200CPS Color Printer"  -s post_printer -I postscript
    something like that anyway.  This is from rather fuzzy memory.

    From: wiwi.hu-berlin.de!thomas@sunmed.uucp (Thomas Koetter)
    You can find software for HP printers under their WWW-Server
    We have JetAdmin under Sol 2.3 for an HP 4m with tcp/ip and it
    works fine.

    Security columnist slurs Linux?

    The following quote from http://www.sunworld.com/swol-07-1996/swol-07-security.html was posted to the java-linux mailing list.
    "Next on the list is break-ins of Linux machines. These machines are commonly installed by non-professionals (read: hackers) and may be lacking in their security configuration. Why is this of concern to system administrators of Sun machines? Chances are, you have Linux machines co-resident on your networks. Are you trusting these machines? Would a broken Linux machine that is running a packet sniffer capture passwords to important computers on your network? If so, it is best to get those Linux boxes under control."
    This is a blatant stereotype not only of Linux itself but also of its users. Linux's security is on par with that of any other flavour of Unix. This is akin to me saying that "all second-hand car owners are poor and therefore more likely to be drunk at the wheel, so puncture their tires whenever possible." Any security violation that could take place on a Linux box could take place on a SPARC, SGI, BSDI, or AIX box. I hope this ugly bias was merely an oversight.
    --(author and firm indeterminate) Editors:
    A very nice statement. I guess you mean that Windows 95 machines are much safer, then... Or is Microsoft someone you think it's better not to hit, while Linux has no Bill Gates and no thousand lawyers? A very nice person...
    --Davide, (firm indeterminate) Editors:
    I'm chagrined that anyone associated with Sun would stoop to distributing this kind of FUD. As a long time user and system administrator of Sun and Linux boxes, I'd like to offer a few points for your consideration:
    1. The average technical quality of the Linux administrators I've dealt with has slightly exceeded that of the Sun admins I've encountered.
    2. The out-of-box security of Linux seems a lot stronger than that of Solaris.
    I wonder why you're not worried more about Windows boxes, or about a certain version of Unix that once shipped with '+' as the content of /etc/hosts.equiv.
    --Mike, (firm indeterminate) Editors:
    I am surprised that someone claiming to be a lecturer on system security would make such a pointed and misrepresentative statement. This statement is equally (or, perhaps, more) true if you switch the word "Linux" with "SunOS". Just read CA-94:01.ongoing.network.monitoring.attacks and ask yourself: are more Sun or Linux boxes susceptible to this attack, and being attacked in this way? Working in an academic environment also, I see my share of security breaches and help deal with them. In my experience, it doesn't matter what operating system you talk about; a large percentage of system administrators are not trained as such, and their machines are vulnerable as a result. It doesn't do any good to single out one particular system, as this makes these "novice" administrators simply say to themselves, "I don't use Linux, so I guess I don't need to worry." I think you are doing them a disservice and are also painting Linux in an unfairly gloomy light. Please be more impartial and honest in your articles in the future.
    --David Dittrich, (firm indeterminate) Editors:
    I am a four-year SunOS and Solaris sysadmin turned Oracle consultant. I unpacked an Ultra-1 at 9 a.m. the other day, began installing hard drives, CD, SIMMs, etc., and had the OS/C compiler/Oracle Webserver install done by 4:30 p.m. I run Linux Red Hat 3.0.3 with the 2.0.0 kernel and personally feel that there are as many badly kept Solaris or SunOS systems as Linux boxes on a network. Perhaps more. Apart from your strange perception of Linux as the favourite break-in route, your article is good.
    --alessandro, (firm indeterminate)
  • The editors respond: Ah, the power of quoting out of context. Please revisit the entire quote to see what columnist Peter Galvin really said. Peter:
    I'm a consultant who does reseller training for Sun in the Middle East, Africa, and Mediterranean region. I'm conducting courses as part of the Competency 2000 Program, and as such I deliver Solaris training and training on FireWall-1 and SunNet Manager. I am now in the Sun office in Dubai and have been going through your articles for the last year; I am very impressed with what you've written and am looking forward to incorporating some of what I've learned into my courses. I have spent a lot of time in South Africa and am about to embark on a tour of the Middle East. In both of these areas security is of paramount importance. Both areas are looking very carefully at the SunScreen product and have asked me lots of questions about it and about encryption in general as part of my courses. There is a LOT of confusion outside of the United States as to what is and what is not allowed in terms of encryption, and remarkably, even though there are competing encryption products from Israel, Russia, and even South Africa itself, everyone wants the US versions. I would love to see an article on this issue and its impact on international markets. With the lack of telecoms infrastructure in some countries and high costs in others, many companies are looking at the Internet as the exchange medium between offices around the world and are VERY concerned about how that data will be encrypted and routed before it actually gets to its final destination. This whole issue is very big, but the amount of confusion and misinformation is very high indeed. Anything you can do to clear this up would be greatly appreciated.
    --David Drinkard, Sun Training Consultant
  • The editors respond: Peter reports he's hoping to write a column on encryption soon.

    Reader support dept.

    I work on multiple platforms, and especially enjoy the various articles on system admin issues. Regardless of how long we've been doing system admin, maintenance, performance tuning, etc., there are always better ways of doing things, new ideas, etc. Thanks!
    --Dave, (firm indeterminate) Editors:
    I have recently acquired a Sun 4/260 without documentation. I have experience in IBM compatibles but only rudimentary knowledge of Sun systems. Any suggestions as to the possibility of acquiring support information (i.e., tech books, manuals, etc.) would be appreciated.
    --Newbie, (firm indeterminate) See the SunExpress home page for Sun-related books and manuals.

    Stop the ads!

    I would be grateful if you could convince your advertisers NOT to use animated images. Aside from this problem, I am enthusiastic about your net publication. It is really useful for my everyday work (e.g., Mr. Cockcroft's columns) and for keeping informed about new products.
    --Reiner Hammer, (firm indeterminate) Editors:
    Reading your magazine via WWW is becoming a chore. Your layout is so heavy with nonsense graphics (like little CD-ROMs blowing bubbles) that reading the text is a pain. The client browser becomes so occupied with running subprograms, fetching new graphics, etc., that even sliding the scrollbar down in an orderly fashion to read the actual textual material is barely possible. I strongly suggest you pay attention to the balance of textual material to nonsense graphics. Some of your readers focus on that part.
    --Henk, (firm indeterminate) Editors:
    The recent introduction of animated ads to SunWorld Online has forced me to turn off image loading when I visit your site. The articles are always entertaining and informative, and the ads WERE unintrusive enough to leave image loading turned on until you started the animated images. Try to limit the use of this form of advertising as much as possible.
    --Mike Gruen, (firm indeterminate)
  • The Publisher responds: Responding to feedback from many readers, we've taken steps to limit multimedia features in our advertising to reduce the more annoying extremes. We are busily working with our advertisers to convert multimedia ads to meet these specifications; please bear with us during this transition.

    Connectivity questions and answers

    Just finished reading your article. I got lost. When you said that Intranets were "an advancement of the technology (client/server)," you didn't explain what you meant by that. How is it an advancement? Could you create a table and on one side write client/server and on the other side write Intranet, and start listing the differences? Maybe this would help sort it out. I see the distinction you're making between Internets and Intranets, but when you get down to client/server versus Intranets, I get lost. I call Intranets an evolution of the client/server paradigm. However, with C/S you are stuck in a proprietary mode, enslaved by a single vendor. You also get stuck with a Network Operating System (hence the great Unix vs. Intel wars). C/S requires some pretty heavy programming skills; Intranets require some easy HTML programming and some Perl scripting, which isn't too difficult. Interactive forms, browsers as communication interfaces, seamless e-mail programs, and plug-ins developed by various vendors also make Intranet technology unique. Correct me if I'm wrong, but I've been looking for some way of explaining how client/server technology is the father of the Intranet, but that the son has more functionality, programmability, and accessibility. What's your opinion?
    --name and firm indeterminate
  • The editors respond: Readers?

    If you have problems with this magazine, contact webmaster@sunworld.com
    URL: http://www.sunworld.com/swol-08-1996/swol-08-letters.html
    Last updated: 1 August 1996

    Click on our Sponsors to help Support SunWorld



    [(c) Copyright Web Publishing Inc., an IDG Communications company]
