Stretching the network
VLANs promise a local data superhighway.
There is a great deal of talk about virtual LANs and how they can address the support and segmentation requirements of today's enterprise. (There are even a few working products.) This article discusses the pros and cons of virtual LANs, their scalability, and other networking options.
When Robert Metcalfe invented Ethernet in 1973, he pegged maximum LAN speed needs at 5 Mbps. To provide a safety factor, he doubled the speed to 10 Mbps to handle unforeseen bandwidth-intensive applications. (We offer a sidebar covering pre-LAN networking history.)
Ethernet served distributed computing well. But today, as droves of users ditch dumb terminals in favor of high-powered Unix clients and Pentium PCs to take advantage of new network-based apps, the once-inexhaustible 10 Mbps Ethernet pipe shows signs of congestion. In addition, the router, the device of choice for connecting burgeoning LAN segments, is running into latency (delay) and port-density bottlenecks.
What's a poor network manager to do?
Competing to stand in for Ethernet are several incompatible choices, each examined below: Ether-switched virtual LANs, FDDI, two rival Fast Ethernet proposals (100Base-X and 100VG-AnyLAN), and ATM.
What's a VLAN?
A Virtual Local Area Network (VLAN) is not a LAN viewed
through a virtual reality helmet. It's a high-speed, low-latency
broadcast group that unites an arbitrary collection of end-stations on
multiple LAN segments, connected at layer 1, 2, or 3.
Physical-layer VLANs are simple examples. These VLANs consist of software-settable ports on a concentrator that are grouped
into arbitrary collision domains. Since moves, adds, and changes are
frequent in the turbulent 1990s, this can save installation and
support dollars on wiring changes. The beauty of physical-level
virtual LANs is that users can be connected on a logical basis,
regardless of physical location.
For example, accountants can all be connected over a virtual LAN,
even though they may work on different floors. This straightforward
technology offers big local benefits, but does not scale up to address
the requirements of large networks.
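In concept, a physical-layer VLAN is little more than a membership table on the concentrator. The sketch below is a toy Python model (the VLAN name and port labels are hypothetical, not any vendor's syntax) showing how a move becomes a table update rather than a rewiring job:

```python
# Hypothetical membership table for a physical-layer VLAN: each entry
# maps a VLAN name to the concentrator ports assigned to it.
vlan_members = {
    "accounting": {"port-2/1", "port-5/3"},  # accountants on floors 2 and 5
}

def move_user(vlan: str, old_port: str, new_port: str) -> None:
    """Reassign a user's port: a software change, not a cable pull."""
    vlan_members[vlan].discard(old_port)
    vlan_members[vlan].add(new_port)

# An accountant moves from floor 2 to floor 7; VLAN membership follows.
move_user("accounting", "port-2/1", "port-7/4")
```

The accountant stays in the accounting broadcast group no matter which physical port the new office happens to use.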
Link- and network-layer virtual LANs extend the concept of the VLAN
by adding internetworking components to the picture. Ether-switching
devices that operate at the link layer provide a way to extend the
virtual network concept to more than a single concentrator.
With an average port cost of $800 to $1,900, Ether-switches offer an
inexpensive way to add capacity by lowering the number of devices per
segment. VLANs based on Ether-switches segment LANs to a point where a
single device could have a "private" Ethernet for maximum throughput.
By cascading standard 10BaseT concentrators off each of the switch
ports, the multiport configuration of the Ether-switches is well-suited
to connecting the increased number of LAN segments and users. The
flexible software port definitions allow any combination of the
switch ports to be combined in a "bridge-group."
The bridge group defines the boundaries of the broadcast domain
while, at the same time, maintaining unique collision domains for each of
the switch ports. The following diagram illustrates the independence
of hub or switch port connections in a workgroup.
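The bridge-group idea can be sketched in a few lines of Python (a toy model; the class and group names are mine, not any vendor's API): a broadcast floods every other port in the sender's bridge-group and no others, while each switch port remains its own collision domain.

```python
class EtherSwitch:
    """Toy model of a link-layer Ether-switch with bridge-groups."""

    def __init__(self):
        self.bridge_group = {}  # port number -> bridge-group name

    def assign(self, port: int, group: str) -> None:
        self.bridge_group[port] = group

    def broadcast(self, from_port: int) -> list[int]:
        """Ports that see a broadcast sent from from_port: every other
        port in the same bridge-group, and none outside it."""
        group = self.bridge_group[from_port]
        return sorted(p for p, g in self.bridge_group.items()
                      if g == group and p != from_port)

switch = EtherSwitch()
for port, group in [(1, "acct"), (2, "acct"), (3, "eng"), (4, "eng")]:
    switch.assign(port, group)

switch.broadcast(1)  # reaches port 2 only; the "eng" ports never see it
```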
The better, network-layer Ether-switch
The network-layer Ether-switch combines routing and switching
software. The standard routing algorithm, RIP, passes traffic between
"Route-groups," but IP packets are forwarded through the network-layer
switching engine once the routes within a Route-group are learned.
The concept of "virtual subnet" arises from switched connections
between different IP subnets. The benefit is that existing addressing
schemes and connectivity preferences can be integrated into the switch
configurations. A speed benefit is realized over link-level
Ether-switches when multiple IP subnets are defined within
a single route-group, since edge routing is eliminated.
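One way to picture the route-group behavior is the toy Python model below (the class and subnet names are mine, not the product's): the first packet between two subnets in a route-group takes the slow, routed path; once the route is learned, subsequent packets are handled by the switching engine.

```python
class NetworkLayerSwitch:
    """Toy model: route once, then switch, within a route-group."""

    def __init__(self, subnets):
        self.subnets = set(subnets)  # IP subnets in this route-group
        self.route_cache = set()     # (src, dst) pairs already learned

    def forward(self, src_subnet: str, dst_subnet: str) -> str:
        if dst_subnet not in self.subnets:
            return "edge-routed"        # destination outside the route-group
        pair = (src_subnet, dst_subnet)
        if pair not in self.route_cache:
            self.route_cache.add(pair)  # slow path: RIP learns the route
            return "routed"
        return "switched"               # fast path: switching engine

rg = NetworkLayerSwitch(["10.1.0.0", "10.2.0.0"])
rg.forward("10.1.0.0", "10.2.0.0")  # first packet between subnets is routed
rg.forward("10.1.0.0", "10.2.0.0")  # later packets are switched
```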
FDDI: A promised LAN
In the early 1990s, some organizations purchased Fiber Distributed
Data Interface (FDDI) as a high-speed backbone for conventional
router-based networks. FDDI provides a solid 100 Mbps backbone, but
its acceptance was slowed by its high price. Even as FDDI got
cheaper, many network consultants avoided it for its lack of support
for multimedia. Today, FDDI continues as a strong contender for
backbones.
FDDI's advantages
FDDI's dual counter-rotating rings survive a single cable break, its token-passing access is deterministic under load, and its distance limits are measured in kilometers rather than meters -- solid credentials for a campus backbone.
Two more 100 Mbps alternatives
The introduction of two 100 Mbps "Fast Ethernet" proposals has driven
down the cost of other high-speed alternatives. While Fast Ethernet
products have come to market quickly and sport attractive prices,
competition between the two draft specifications dampens buyer
enthusiasm.
The 100 Base-X specification has strong ties to
its Ethernet roots, and maintains the original framing format and
CSMA/CD contention access scheme. Although the 100 Base-X
specification supports Category 3 Unshielded Twisted-Pair (UTP)
wire, the full-duplex feature of this specification is
reserved solely for Category 5 UTP and fiber optic cable.
The major limiting factors of 100 Base-X are the 210-meter total
network distance limit, availability of protocol drivers, and a
contention-based access mechanism that will not allow 100 Base-X
segments with multiple stations to reach full 100 Mbps capacity.
The other Fast Ethernet specification isn't really Ethernet at all.
The 100-VG AnyLAN protocol supports both Ethernet and
Token Ring framing formats, and is based on a new deterministic Demand
Priority Media Access (DPMA) access mechanism. The new
reservation-based access scheme allows for full 100 Mbps loading and
prioritization of delay-sensitive data, such as audio and video.
Although this specification supports a total network distance in
excess of 2,000 meters, the requirement for four pairs of Category 3,
4, or 5 UTP may cause headaches for sites with older cable. It is likely
the alternate specification of two pairs of Category 5 UTP or Type
1 STP will garner better support.
The two Fast Ethernets' incompatibility haunts both. If the pre-10BaseT days
offer any lesson, it is that any Ethernet proposal with the slightest whiff
of "proprietary" to it doesn't stand a chance with network managers.
ATM: The new, improved promised LAN
With all the hype surrounding ATM (Asynchronous Transfer Mode), it
might be called the Wonder Bra of high-speed networking. After William
Seiffert made his millions at Wellfleet, he left and formed Agile Networks, an ATM switch
manufacturer.
To paraphrase Seiffert as to why he formed Agile, "Remember
those guys who told you the router was networking nirvana, layer 2
bridges were passe and could not scale to meet corporate networks
requirements, and that everything should be handled at the network
layer? Well, in the brave new world of ATM, you have to get rid of
most of those routers and go back (to the future) of switching at layer
two."
ATM differs from the other high-speed networking alternatives, in
that it deviates from the shared media model discussed thus far in
favor of a dedicated switch fabric. ATM cells differ from LAN frames
in that they are defined as short, constant, 53-byte data units. The
short, fixed cell size allows ATM to carry all traffic payloads,
including voice, video, and data over connection-oriented or
connectionless circuits.
The cell concept contrasts with the variable-size LAN data units and
has distinct latency, dynamic bandwidth allocation, switch
architecture, and payload transparency benefits.
As a means to bridge the gap between fixed-cell and variable
frame-based communications, the Segmentation And Reassembly (SAR)
function was added into the ATM specification. SAR breaks the
larger LAN frames into ATM cells at the transmit side, and reassembles
the LAN frames at the receive side of the switched network. This allows
integration into existing networks transparently -- or maybe not.
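The arithmetic behind SAR is simple: each 53-byte cell carries a 5-byte header and 48 bytes of payload, so a maximum-size 1,500-byte Ethernet payload needs 32 cells (1,500 / 48 = 31.25, rounded up). A minimal Python sketch of the idea (the all-zeros 5-byte header is a placeholder, not a real ATM cell header):

```python
CELL_SIZE = 53
HEADER_SIZE = 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE  # 48 payload bytes per cell

def segment(frame: bytes) -> list[bytes]:
    """Break a variable-length LAN frame into fixed 53-byte cells."""
    cells = []
    for i in range(0, len(frame), PAYLOAD_SIZE):
        chunk = frame[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        cells.append(b"\x00" * HEADER_SIZE + chunk)  # placeholder header
    return cells

def reassemble(cells: list[bytes], frame_length: int) -> bytes:
    """Strip headers, rejoin payloads, and drop any final-cell padding."""
    return b"".join(c[HEADER_SIZE:] for c in cells)[:frame_length]

frame = bytes(range(256)) * 6   # a 1,536-byte test frame
cells = segment(frame)          # 1,536 / 48 = exactly 32 cells
```

The receive side reverses the process, which is why the original frame length (or, in real ATM adaptation layers, a trailer carrying it) must survive the trip.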
An ATM switch provides a very scalable backbone architecture with
latency an order of magnitude lower than that of today's predominant
campus design -- the collapsed-router backbone network. On the other hand,
interfacing with the non-ATM portions of the network poses real
integration issues. Network administrators must find ATM interface
cards for routers, LAN protocol drivers for server NICs, and the time
to learn switch-based dynamic routing.
Like other networking issues, ATM's problem is the number of
standards, rather than the standard. In a test
conducted by the European Network Lab earlier this year, a lack of
uniform standards restricted the test network to a single vendor's
switches; interoperability between different vendors' switches proved
impossible. The relevant ATM specifications should be researched
before selecting ATM components.
Support requirements
Implementing virtual LANs means changing the rules for network and
system administrators. In the pre-virtual days, there was usually a
notification and a work order for recabling people to different
segments. In the virtual world, changes are made with a mouse click.
Unfortunately, this change in process may not be coordinated with the network
and systems designers. It is necessary to synchronize the layout of
your servers and clients so that they mesh and maximum availability is
provided to all users. The power of virtual networking lies in
defining the bounds of each virtual network so that broadcast overhead is contained.
For instance, it is useless to use a VLAN to reach a
single server (be it mainframe, VAX, or Unix cluster) if that
server is accessed from multiple points throughout the enterprise.
Likewise, it is important to integrate the process -- not just the
technology -- to reap the benefits of the virtual approach.
The network management side is not pretty. A nifty protocol, SNMP
dominates today's network management landscape for distributed devices.
Unfortunately, SNMP was developed to manage physical network devices --
not VLANs. It is not a simple task to retrofit a virtual reality
helmet on SNMP.
SNMP's developers recognized its shortcomings, and responded with
something complementary -- the Remote Network Monitoring (RMON)
protocol. RMON looks at LAN segments (that's the good news) and can
monitor the health of groups of network devices. Unfortunately, what
RMON needs is the ability to dynamically manage each virtual port of a
virtual physical-layer switch. CPU-wise, this is not cheap. VLAN
concentrators are in the business of moving packets, not analyzing
them.
Network management for ATM is even uglier.
Joe Head, vice president of engineering for Optical Data Systems, an ATM developer,
says, "SNMP is to managing ATM like an abacus is to solving a linear
programming equation -- brain dead!" Today's SNMP managers have
problems managing hundreds of events -- let alone correlating those
network events. In the world of ATM, networks will support several
thousand users.
The ATM Forum, a standards
and promotions group dominated by vendors, has only begun to deal with
congestion control and ATM's AToM MIB integration with SNMP. One other
point: There are no testing devices for 155 Mbps and faster
speeds. Remember, you cannot manage what you cannot measure.
Where we go from here
Virtual LANs and the accompanying switching options have the potential
to facilitate the logical grouping of end users across campuses and
perhaps even the wide area. (ATM as a virtual WAN is a topic for
another time.)
However, it is first necessary to define your problem and the
boundaries, evaluate the alternatives and their scalability, set up a
test laboratory, and define the cost (new equipment, testing, new
network management software, tweaking or redoing the addressing
system).
Before embarking into the realm of virtual networking, make sure you
quantify your requirements, research the alternatives, select
standards-based products, and ask (no, demand) an explanation
from the vendor on how it plans to manage its devices.
While VLANs promise much, the standards and products are still
young. Don't look at virtual LANs through a rose-colored virtual
reality helmet.
URL: http://www.sunworld.com/swol-08-1995/swol-08-ethernet.html
In the 1980s, IBM's mainframe-centric Systems Network Architecture (SNA) ruled the roost. The mainframe off-loaded communications to the front-end processor, which then polled cluster controllers for character-based communication with applications on the big box. Traffic was predictable, bandwidth requirements were small, and point-to-point 9.6 to 56 Kbps lines were more than adequate to handle the monolithic SNA network requirements.
In the early 1980s, Xerox, Intel, and Digital combined to produce Bob Metcalfe's Ethernet (as opposed to IEEE 802.3). The local area network was born, and corporate divisions had an alternative to centralized, hidebound MIS for their applications.
At the same time, research at Stanford University produced two companies that would change the world of computing: Sun Microsystems and Cisco Systems. Sun decided DEC had the right idea with the VAX and distributed computing, but did not press the idea far enough. Sun sold powerful workstations that implemented Network File System (NFS) and Remote Procedure Call (RPC) technology on top of Unix to allow computers to communicate across far-flung networks: The distributed computing model's birth.
Cisco founder Len Bosack decided to take a Sun processor and convert it into a "gateway server (router)" in his garage. Unlike SNA, which used a cascade of point-to-point connections to maintain a session at all costs, routers were optimized to handle a new protocol, Transmission Control Protocol/Internet Protocol (TCP/IP), from the Department of Defense. TCP/IP uses a "best effort" delivery of data packets over dynamically discovered routes.
The introduction of TCP/IP and the router was coincident with a mass migration of carrier-based private line networks to highly reliable fiber optics.
With the rollout and interconnection of LANs mushrooming, and something called client/server computing beginning, there was a whole new model for the network. Instead of terminals sending character-based traffic over 9.6 or 19.2 kbps multidrop SNA links, workstations and PCs were sending large data files randomly and expected the network to deliver, now.
Which brings us to today. Sun Microsystems recently resurrected its mid-80s advertising slogan "The Network is the Computer." Eric Schmidt, Sun's chief technical officer, says the rush of bandwidth overabundance overturns Moore's Law (microprocessor performance doubles every 18 months) as computing's key driver this decade.
About the author
Frank Henderson is a principal at The NetPlex Group. He has experience in designing and installing networks, reengineering help desks, ORB, distributed databases, and network management. Henderson has served on SunWorld Expo's Advisory Panel for the last two years.
Reach Frank at frank.henderson@sunworld.com.