Connectivity by Rawn Shah

Building a reliable NT server, Part 1

The best decisions on hardware and peripherals to prevent NT from living up to its reputation

SunWorld
January  1999

Abstract
You, the Unix guru, have just been asked to build a network of NT servers and integrate them with your Unix systems. What do you do? How do you create a stable NT system that doesn't crash every other day? In this multipart series, Rawn explores techniques for creating stable NT environments. Get started this month with tips for choosing the right hardware platform for your NT servers. (2,800 words)


Unix vendors and administrators claim that NT servers are a total terror when it comes to network management. The most common complaint essentially encompasses all the rest: NT servers are unreliable and crash on a daily basis.

A combination of hardware, software, and administrative problems has created this image of NT. PC server hardware is just starting to come out of the murky days of 8086 compatibility and IDE drives. The NT operating system is becoming more robust and has a reasonable set of reliable device drivers. Further, NT administrators are now being tested more thoroughly, leaving behind the days of "paper"-certified NT administrators and engineers.

The changes of the past two years warrant a thorough, up-to-date review of the way NT servers are built and managed. As a Unix administrator, you may or may not be familiar with exactly what's needed to build a reliable NT system. As a novice NT administrator, you may have the same problem.

It's time to crawl out of the pits and bogs of mucky servers and sloppily managed systems. My next few columns are going to focus on how you can build an NT system to be proud of. For this to work, you'll need to leave your biases behind and join me in making a fresh start. After all, your boss or CIO only wants to hear how you're going to solve the problem.

In the next several months we're going to focus on the following core issues:

  1. How to choose an appropriate server hardware system
  2. How to choose a PC hardware vendor and a service contract
  3. How to prepare your NT server for your environment
  4. How to create a balanced NT user environment
  5. How to manage shared resources
  6. How to secure your NT server
  7. How to integrate your NT and Unix servers
  8. How to establish a mixed-network management strategy

The IT manager has to take the first step: deciding which server to buy and which vendor to buy it from.

What to buy
At least three opposing factions have their hands in the buying decisions at any one time: the IT staff, the administrative staff, and the vendor sales representatives. The IT staff wants systems it trusts to run well and cause the fewest problems. The administrative staff wants the server with the best price-to-performance numbers for the task at hand. And the sales reps just want to close a deal on their latest or best models.

We'll start by tackling the issue of which server to buy. This task is largely a function of how the server is going to be used, the number of users, the applications to be installed, and the demands of the network environment. To simplify the issue, let's group the systems into three categories: the workgroup, the department, and the enterprise.

A workgroup server is intended to provide basic functions and common applications to a small group of users, somewhere between 5 and 25; a departmental server should work for a larger population of 25 to 100 users; and an enterprise server should be able to handle between 100 and 500 users.

Why did we cap it at 500? Essentially due to the memory limitations of existing servers. Top Intel-based servers today are mostly limited to 4 GB of RAM. If all 500 users were on at the same time, this would give each user approximately 8 MB with which to run applications. A more likely environment, in which two-thirds of the system's total users (about 330 users) were actively on the server at any given time, would give a more livable 12 MB per user. If your needs exceed this cap, you may want to consider a Compaq/Digital AlphaServer; the largest model, the 8400, can handle up to 28 GB of RAM, supporting roughly 2,300 users at 12 MB each.
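
If you want to sanity-check that arithmetic for your own population, it's simple enough to script. The sketch below uses the same assumptions stated above (a 4-GB RAM ceiling and two-thirds of users active at once); adjust them to match your environment.

# Back-of-the-envelope check of the per-user memory figures above.
# The 4-GB ceiling and two-thirds concurrency are the assumptions used
# in the text; change them for your own environment.

def mb_per_user(total_ram_mb, total_users, concurrency=2.0 / 3.0):
    """Approximate RAM available to each concurrently active user."""
    active_users = max(1, int(round(total_users * concurrency)))
    return total_ram_mb / float(active_users)

ram_mb = 4 * 1024  # roughly the ceiling on today's top Intel-based servers
for users in (25, 100, 500):
    print("%3d users: about %.0f MB per active user" % (users, mb_per_user(ram_mb, users)))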

The table below shows systems configured to support the three population sizes: workgroup (5 to 25 users), department (25 to 100 users), and enterprise (100 to 500 users). It assumes each person on average uses 12 MB of RAM and 500 MB of disk space. The price range is an indication of prices for each scenario based on minimal and maximal configurations for each model. The prices are based on averages of those available from several leading vendors on December 1, 1998.

Workgroup
  Processor type: 333-MHz to 400-MHz P II
  Number of CPUs: 1 to 2
  Memory: 128 MB to 384 MB
  Storage: 4 GB to 13 GB
  Price range: $3,000 to $8,500

Departmental
  Processor type: 400-MHz to 450-MHz P II
  Number of CPUs: 2
  Memory: 256 MB to 1 GB
  Storage: 13 GB to 54 GB
  Price range: $6,500 to $16,000

Enterprise
  Processor type: 400-MHz to 450-MHz P II Xeon
  Number of CPUs: 2 to 4
  Memory: 512 MB to 4 GB
  Storage: 54 GB to 260 GB
  Price range: $14,000 to $80,000

Table 1: Typical server model scenarios

The table above makes picking out a server model look deceptively simple. One thing working in your favor is that server prices change very slowly compared to those of desktop systems, so a price you pick today will probably still hold in six months.
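
If you want to turn the assumptions behind Table 1 into a first pass at your own configuration, the rule of thumb (roughly 12 MB of RAM per active user at two-thirds concurrency, plus 500 MB of disk per user) is easy to script. This is only a sketch of the sizing rule described above, not a substitute for testing your actual application mix.

# First-cut sizing based on the per-user assumptions behind Table 1.

def size_server(users, active_fraction=2.0 / 3.0,
                ram_per_active_user_mb=12, disk_per_user_mb=500):
    """Return (server tier, RAM in MB, disk in GB) for a given user count."""
    ram_mb = users * active_fraction * ram_per_active_user_mb
    disk_gb = users * disk_per_user_mb / 1024.0
    if users <= 25:
        tier = "workgroup"
    elif users <= 100:
        tier = "departmental"
    else:
        tier = "enterprise"
    return tier, ram_mb, disk_gb

for n in (25, 100, 500):
    tier, ram, disk = size_server(n)
    print("%3d users -> %-12s ~%4.0f MB RAM, ~%3.0f GB disk" % (n, tier, ram, disk))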

In fact, Intel faces something of a dilemma here. Most servers today have CPUs powerful enough to perform as necessary, so increasing CPU speed is losing its value despite Intel's regular price drops. Expect server price-to-performance configurations to remain level for the next year or two.


The catch: Server features
When you step up to a vendor, however, you'll be hit with enough features and functions to rattle your brain. The following table illustrates several important components to factor into your purchasing decision.

Workgroup
  Power supply/cooling: 300 to 600 W
  RAM type: ECC SDRAM, ECC EDO RAM
  Drive types: Single or dual Ultra2/3 Wide SCSI, Ultra SCSI, RAID card, standard or hot-plug
  System bus: 3- to 5-slot 32-bit PCI
  Management: External management port (EMP)

Departmental
  Power supply/cooling: 400 to 1200 W with N+1 redundancy, redundant fans, CPU fans
  RAM type: ECC SDRAM, ECC EDO RAM
  Drive types: Dual Ultra2/3 Wide SCSI, RAID card, standard or hot-plug
  System bus: 3- to 5-slot 32-bit PCI, 2- to 3-slot 64-bit PCI
  Management: EMP, fan/power supply monitor, thermometer

Enterprise
  Power supply/cooling: 600 to 1200 W with N+1 redundancy, redundant fans, CPU fans
  RAM type: ECC SDRAM, ECC EDO RAM
  Drive types: Multiple Ultra3 LVDS Wide SCSI, Fibre-Channel, multiple RAID cards, hot-plug
  System bus: 3- to 9-slot 32-bit PCI, 2- to 6-slot 64-bit PCI
  Management: EMP, various monitors (fan, voltage, thermometer, humidity, RAM errors, CPU activity)

Table 2: Server features you need

If the CPU is the heart of the server, the power supply is its lungs. Departmental and enterprise servers these days provide redundancy in the form of triple power supplies. This allows you to remove one unit and send it back to the vendor while still maintaining basic redundancy with the remaining two. All power supplies should be connected to separate power outlets or uninterruptible power supplies (UPSs).

No number of redundant power supply units in your box will save it from a power outage. A UPS should be sized for the amount of time the system and your users need to stop all work and shut down properly. For low-end servers, the UPS may cost a third to half as much as the server itself; at the high end, it may add a tenth of the server's cost. Either way, factor it into the price of the whole system. Smart UPSs actively monitor power line conditions and can send warning messages to the NT system or even page you via an external modem.
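
Sizing the UPS itself comes down to the same kind of arithmetic: how much load you put on it and how many minutes you need to shut everything down cleanly. The sketch below uses a deliberately simplified model, and the load, power factor, and shutdown window are hypothetical; always confirm the result against the runtime tables your UPS vendor publishes.

# Rough UPS sizing sketch. Battery runtime is nonlinear in practice, so
# treat this as a first estimate only.

def ups_estimate(load_watts, shutdown_minutes, power_factor=0.7, headroom=1.25):
    """Return (suggested VA rating, watt-hours of battery needed)."""
    va_rating = load_watts / power_factor * headroom
    watt_hours = load_watts * shutdown_minutes / 60.0
    return va_rating, watt_hours

va, wh = ups_estimate(load_watts=800, shutdown_minutes=15)   # hypothetical server load
print("Suggested rating: ~%.0f VA, battery capacity: ~%.0f Wh" % (va, wh))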

Synchronous Dynamic RAM (SDRAM) is used by most servers because it is tied to system bus clock cycles and thus allows faster access than Extended Data Out (EDO) RAM. However, in four-way multiprocessor systems EDO memory can outperform SDRAM, because it places concurrent accessibility over raw speed. Enterprise-class servers with Xeon processors use 4:1 interleaved EDO RAM: the data is spread across four memory modules, which allows the system to access more memory at one time. This interleaving also means you have to buy the memory in sets of four DIMMs (Dual Inline Memory Modules). SDRAM, on the other hand, allows you to add single DIMMs as long as you have available slots. In other words, plan to buy as much EDO RAM for your enterprise-class server as you think you will ever need, saving yourself from having to replace it later.
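
The practical consequence of 4:1 interleaving is that memory upgrades happen in multiples of four identical DIMMs, which is easy to forget when pricing an upgrade. A small sketch of that constraint, using hypothetical DIMM sizes:

import math

# 4:1 interleaved EDO RAM must be bought in full sets of four identical
# DIMMs; SDRAM can usually be added one DIMM at a time.

def dimms_to_buy(target_mb, dimm_mb, interleave_ways=4):
    """DIMMs needed to reach target_mb, rounded up to a full interleave set."""
    dimms = int(math.ceil(target_mb / float(dimm_mb)))
    sets = int(math.ceil(dimms / float(interleave_ways)))
    return sets * interleave_ways

print(dimms_to_buy(1024, 128))   # 8 DIMMs: two sets of four to reach 1 GB
print(dimms_to_buy(640, 128))    # still 8 DIMMs: 5 would suffice, but you must round up to a full set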

The disk system
In choosing system configurations, you should pay particular attention to drive types and management features. Most PC servers have gone to SCSI platforms, whether SCSI-2, Ultra2 SCSI, or Ultra3 SCSI. The basic disk provided by most vendors is a 4-GB Ultra2/Narrow SCSI drive running at 7,200 rpm, which serves the purposes of most small applications in terms of speed and throughput. Hot-plug drives become necessary at the departmental level as you start supporting large groups of people on different floors or in different buildings. In terms of cost, the time spent manually replacing non-hot-plug drives quickly adds up to more than the price of the drives themselves, or than the cost of upgrading a non-hot-plug system.
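
It's easy to put rough numbers to that hot-plug argument. Every figure in the sketch below is a hypothetical placeholder; substitute your own drive counts, failure rates, and downtime costs to see where the break-even point sits for your shop.

def hotplug_tradeoff(drives, premium_per_drive, failures_per_drive_per_year,
                     downtime_hours_per_swap, downtime_cost_per_hour):
    """Return (one-time hot-plug premium, yearly downtime cost avoided)."""
    premium = drives * premium_per_drive
    yearly_downtime = (drives * failures_per_drive_per_year
                       * downtime_hours_per_swap * downtime_cost_per_hour)
    return premium, yearly_downtime

# All of these numbers are made up for illustration only.
premium, avoided = hotplug_tradeoff(drives=10, premium_per_drive=150,
                                    failures_per_drive_per_year=0.05,
                                    downtime_hours_per_swap=2,
                                    downtime_cost_per_hour=500)
print("Hot-plug premium: $%d; downtime cost avoided per year: about $%d" % (premium, avoided))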

Workgroup servers are often simply full-tower enclosures and can incorporate between three and five drives. Departmental servers can handle up to 10 drives. Enterprise-class servers usually rely upon secondary drive systems to house their disks and keep only a few open drive slots within the chassis. Vendors like Dell and Compaq provide such storage enclosures to hold hundreds of gigabytes of data, and you can link several of them to the main server enclosure to reach a terabyte of storage.

As you go up the chain to 9-GB and 18-GB drives, and shift from 7,200 rpm to the blazing 10,000-rpm systems, the prices per disk double, triple, or even quadruple. The SCSI standard supports speeds from 10 megabytes per second (MBps) in narrow SCSI-2 up to the latest Ultra2 Wide SCSI systems delivering 80-MBps throughput. The newest generation, Ultra3 SCSI, will support 160-MBps transfers once vendors start shipping products later in the year. In the meantime, to go even faster, you need to switch over to Fibre-Channel systems. Starting at speeds of 100 MBps, Fibre-Channel brings you into the realm of storage area networks and distributed storage. These can store several terabytes of data in separate enclosures, each with its own independent RAID system. (See my September 1998 column on storage area networks for more information.)

PCI cards and buses
Most server systems don't need many PCI cards. Features like SCSI and basic video adapters are often built directly onto the motherboard. In that case, the only other cards that should go into your slots are RAID, Fibre-Channel, or network interface controllers. Keep in mind that you may need multiple RAID or network controllers as your server grows, so plan to group them in adjacent slots.

32-bit PCI at 33 MHz is standard on almost all PC servers these days. Some models at the low end may come with ISA or EISA slots as well, which you should leave unused. Some 64-bit PCI slots have started showing up in enterprise-class machines, although very few cards work with them yet. A 64-bit card can fit into and work in a 32-bit slot (with part of the connector hanging off the end of the slot), and a 64-bit slot can take 32-bit cards, providing backward compatibility in both directions. High-end servers may also sport dual peer PCI buses: two separate PCI buses, each with its own controller, so that traffic can be spread across them and a failure on one bus doesn't take out every slot. The next generation may also feature 66-MHz PCI slots, effectively doubling throughput from 133 MBps to 266 MBps. These run at the full 66 MHz only when all PCI cards on the bus operate at 66 MHz; if a 33-MHz card is plugged in, the others drop down to 33 MHz as well. A full 64-bit, 66-MHz PCI bus can transfer at 533 MBps. Intel and other PC vendors are also looking to push another generation of system buses, taking throughput into the gigabits-per-second (Gbps) range and putting PC servers in competition with Unix servers.
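
If you're wondering where those throughput numbers come from, it's simple arithmetic: bus width in bytes multiplied by the clock rate. A quick sketch:

# Peak PCI throughput is width (in bytes) times clock. The nominal clocks are
# actually 33.33 and 66.67 MHz, which is where the commonly quoted 133-, 266-,
# and 533-MBps figures come from.

def pci_peak_mbps(width_bits, clock_mhz):
    """Peak PCI transfer rate in megabytes per second."""
    return (width_bits / 8.0) * clock_mhz

for width, clock in ((32, 33.33), (32, 66.67), (64, 33.33), (64, 66.67)):
    print("%2d-bit at %5.2f MHz: ~%3.0f MBps" % (width, clock, pci_peak_mbps(width, clock)))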

Notice that I did not mention hot-plug PCI as an option for the system bus. Hot-plugging cards into the system can be very dangerous, especially if the card is poorly designed. Plugging a card into a standard PCI slot is difficult enough, without having to worry about power running through the system. Cards for Unix servers are usually designed with lock-in tabs and slide paths, making them easy to install. The majority of cards for PCs don't have anything of that sort; many don't even have standard form factors. This means that, depending upon the internal layout of your PC server, a full-length card may not be able to plug into just any PCI slot due to obstructions (yes, this still happens). Another thing to consider with a newly added card is that the system BIOS may rescan all interrupts and I/O addresses and reassign them if the card isn't an exact duplicate of the old one; even minor revisions between cards can affect the system.

One last note on the system bus: if your vendor is touting I2O in the server as a valuable feature, ignore it. The I2O (intelligent input/output) system has been in PC servers for well over a year, is almost unused, and is a dismal failure from a marketing point of view. The number of cards supporting I2O is small enough to render the system irrelevant. Wait for the next-generation bus technologies if you feel you really need something of the sort.

Hardware management
The hardware management features of a server are often overlooked. It's easy to mistakenly assume server management software tools will do it all for you. You need hardware monitoring devices on your servers to warn you ahead of time of any hazards to the system. If a component on the server is about to fail, you can plan for a hot-plug change or bring down the server as necessary. An ounce of preparation goes a long way toward saving time and money.

An external management port (EMP) is invaluable. On Unix systems, you can often hook a console serial port into a terminal server and manage the hardware remotely. PC servers are now beginning to sport similar devices, either as specialized serial ports or as built-in modems. This approach allows admins to connect directly to the server from any phone line and manage the box, or even reboot it if necessary. Nothing is more frustrating than having to send someone to a box simply to power-cycle it while the phone rings off the hook with calls from angry users. Make sure your EMP device can be attached both to a network and to a private modem line (not part of the general dial-up user modem pools), and make very sure that a server password (different from your NT administrator password) is enabled on the port.
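
If your EMP sits on the network, it also pays to verify from your management station that it's actually reachable, rather than discovering it's dead in the middle of an outage. A minimal sketch, assuming a hypothetical EMP address and a plain TCP management port; substitute whatever address and protocol your server's EMP actually uses.

import socket

def emp_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to the EMP interface succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except (socket.timeout, socket.error):
        return False
    finally:
        s.close()

# Hypothetical EMP address and telnet-style management port.
if not emp_reachable("192.0.2.10", 23):
    print("WARNING: management port is not answering; fix it before you need it")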

Finally, you often have the choice between pedestal-based and rack-mounted cases. Pedestal cases make sense for smaller workgroup-class machines and even for some departmental systems. In many cases, the environment they end up in is not a climate-controlled network operations center but a corner of an office or the inside of a closet somewhere. These environments place greater physical demands and wear and tear on the machines; very often they become dusty and have to be cleaned every few weeks. In some cases, pedestal-based servers consume slightly more power to run better cooling systems. A rack-mounted system is designed for industry-standard 19-inch racks, sometimes with sliding rails on each side so the unit can be pulled out of the rack for inspection or maintenance.

Picking a vendor
The hardware configuration of a PC server is just the first step toward creating a reliable NT server environment. Based on your user needs and budget requirements, you should be able to pick an appropriate configuration. Vendors hate it when industry watchers talk about their products as commodities, since they work so hard to differentiate themselves. The fact remains, however, that most PC server products are commodities and, so far, differ very little from one another: in most server performance tests, models in the same category fall very close to one another. Next year things may be different, with the release of new technologies like 8-way multiprocessor systems, Rambus RDRAM, Intel NGIO, IBM PCI-X, and other improvements to server performance. I would advise you, however, not to rush into buying new technologies.

For now, the level of support and service varies widely from vendor to vendor. In the next installment in this series, we'll cover how to pick a vendor and discuss various vendor service and training options.


About the author
Rawn Shah is an independent consultant based in Tucson, AZ. He has written for years on the topic of Unix-to-PC connectivity and has watched many of today's existing systems come into being. He has worked as a system and network administrator in heterogeneous computing environments since 1990. Reach Rawn at rawn.shah@sunworld.com.
