Unraveling asynchronous transfer mode
Come with us to understand what makes ATM tick and how it can benefit you -- part one
ATM connectivity is the future for computer networks. In this time of improvements in network technology, we need a grasp of how new network technologies work and, more importantly, whether they will work for you. This article is our first immersion into the ATM networking pool. (2,000 words)
Asynchronous transfer mode (ATM) technology is a recent addition to the world of computer networking. It has been heralded as the future of computer networks and holds the promise of higher-speed connections both in the local area network and in national and international backbones.
Emerging from the minds of telecommunications vendors, it takes a very different approach to interconnecting computers and delivering data. It was originally designed to carry Broadband ISDN (B-ISDN) traffic for digital telecommunications, but it also offered a way to integrate voice and data in digital form over the same architecture.
There is a lot of noise surrounding ATM both from those applauding the technology and from those opposing it. Politics aside, however, it does provide a solution that can take wide-area digital data communications to multi-gigabit bandwidth -- a concept once only dreamed of.
I will first present ATM from a technical and architectural standpoint. This will help you understand the differences and complications that arise because ATM is so different from the packet- and token-based routing technologies used elsewhere. And, yes, I will attempt to present the political debates surrounding ATM. Telecommunications vendors and telephone companies around the world are implementing this technology as their future backbone, and there is little that can be done to avoid it; so it is best that you know in detail what is happening.
The goals of ATM
When ATM emerged from the work on B-ISDN in the mid-80s, it was developed as a standard by the CCITT (the International Telegraph and Telephone Consultative Committee), the standardization body of the International Telecommunication Union (ITU). Its definition of ATM is not particularly helpful, but here it is anyway: "A transfer mode in which the information is organized into cells; it is asynchronous in the sense that the recurrence of cells containing information from a particular user is not necessarily periodic."
The goal was to create a wide-area digital transmission system that is not tied to any physical implementation or set speeds, or dependent on the type of digital data it carries. This would open the possibility of digital communications using the emerging technologies of optical fiber and wireless while still allowing the use of traditional copper-based connections.
If this goal is achieved, ATM will be able to carry any form of digital data; it is not limited to computer data alone and can carry digital audio/voice and video applications as well.
The data transfer rate would also be scalable according to physical resource availability or provider control. This allows connections of different speeds depending on the needs of the applications. The independence of the technology from actual physical interconnection mechanisms makes it ideal for interconnecting networks of different media types and makes the transition to a pure ATM-based network easier.
There is also a goal to integrate wide-area and local-area digital networks using a common technology, one that works at the desktop level as well as at the backbone-network level. At the low end, you can have 25-megabit-per-second desktop connections; the scalability described earlier has projected ATM speeds all the way up to 2.488 gigabits per second.
Finally, a business goal was to be able to account for each and every unit of data delivered so that it could be identified and billed appropriately. Another goal was to give the user a guarantee on the quality of service of the connection or resource. Unlike current packet-delivery mechanisms, the circuit-based interconnect method of ATM identifies exactly the path the data will take and hence can track the units of data along their flow.
Packet versus circuit technology
Communicating over ATM involves creating a circuit between the parties involved. Think of it as a regular telephone call between you and a friend. You dial a number, the telephone switch establishes a connection, you talk, and then you hang up. These are virtual circuits, meaning that the connection is not physically hardwired between the two parties at all times. The circuit exists only while the two parties are connected.
Traditional Ethernet, for example, uses a packet-based delivery technique. Rather than establishing a fixed line between the two parties, the data is simply put into a packet, with source and destination addresses encoded, and sent on its way on the wire. The destination system looks for packets that are addressed to it and intercepts them. If a packet passes through intermediary networks, the routers that pick it up examine its addresses and other parameters and then pass it along.
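To make the contrast concrete, here is a minimal sketch in Python of the connectionless style just described: a packet carries its own source and destination addresses, and a receiving system simply keeps what is addressed to it. The field names and addresses are hypothetical, not a real Ethernet frame layout.

    # Illustrative sketch only: a connectionless "packet" carrying its own
    # addressing, and a node that either accepts or ignores it.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str       # source address
        dst: str       # destination address
        payload: bytes

    def receive(node_addr, pkt):
        # A destination system watches the wire and keeps packets addressed
        # to it; everything else is ignored.
        if pkt.dst == node_addr:
            return pkt.payload
        return None

    print(receive("host-b", Packet("host-a", "host-b", b"hello")))  # b'hello'
    print(receive("host-c", Packet("host-a", "host-b", b"hello")))  # None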
There are several pros and cons to both techniques. With a virtual circuit you must make sure that a clear path is available from the source point all the way to the destination point. If there is a break at any point, the connection is lost and will not be available until a new path is established. In contrast, a packet delivery mechanism makes no assumptions about the path between the source and destination. If a packet en route encounters an obstruction, such as a break, it will not reach the destination and is simply dropped. The source will not know the packet was lost unless the destination reports it as missing.
To keep a circuit operational, all the switches in the path must know of the existence and purpose of the circuit. This means that each switch has to keep track of many circuits, along with their sources, destinations, and other parameters, at all times. Routers keep track of destination paths but not on a per-connection basis. To find out where to send a packet, the router has to process the packet, extract the destination address and additional parameters, look up the next best delivery point (router or system) in its route table, and then send the packet on its way. The processing overhead of the two methods thus falls at different points.
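The difference in per-unit work can be sketched roughly as follows. This is an illustration of the idea only; the table entries and field names are hypothetical, not actual switch or router data structures.

    # Hypothetical per-circuit state on an ATM-style switch: once the circuit
    # is set up, forwarding a cell is a single lookup on its circuit identifier.
    circuit_table = {
        42: {"out_port": 3, "max_bandwidth": 100_000},   # cells per second
        43: {"out_port": 1, "max_bandwidth": 25_000},
    }

    def forward_cell(circuit_id):
        # No address parsing and no route computation per cell.
        return circuit_table[circuit_id]["out_port"]

    # A router, by contrast, carries no per-connection entries: for every
    # packet it must parse the destination address and consult its route
    # table to pick the next hop, e.g.
    # route_table = {"10.1.0.0/16": "router-east", "10.2.0.0/16": "router-west"}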
A circuit has to be established at the start of the connection, and a path must be determined. This start-up overhead involves informing all the switches in the path of the details required by the circuit: how large it is, whether it is time-dependent, its maximum bandwidth, and so on. A packet switch doesn't need to know all this; each router along the way figures it out for itself if and when it gets the data packets.
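As a rough illustration of that start-up cost, the sketch below (with made-up parameter names rather than real ATM signaling fields) installs the circuit's details on every switch along the path before any data flows.

    # Toy illustration: before any data moves, every switch on the chosen
    # path is told about the circuit and its parameters.
    def establish_circuit(path, circuit_id, params):
        """Install per-circuit state on each switch along the path."""
        for switch in path:   # here, plain dicts stand in for switches
            switch.setdefault("circuits", {})[circuit_id] = dict(params)

    switches = [{"name": "sw1"}, {"name": "sw2"}, {"name": "sw3"}]
    establish_circuit(switches, 42, {"peak_rate": 100_000, "time_sensitive": True})
    # After setup, every switch holds the circuit's parameters; a packet
    # router would carry no such per-connection state.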
Since it does not monitor actual connections between two points, a packet switch normally cannot keep track of the time sequence of data packets. This has always been a sticking point for packet technology. The theory has been that if you make a packet switch fast enough, it will be able to keep track of temporal sequences of data; however, this has met with only limited success. A circuit switch can keep track of timing because there is a fixed path between the two points, one that committed during the initial connection establishment to deliver the data in the proper temporal sequence.
It is easier to account for all data packets in a circuit because the connection information and the path are known. One of the biggest problems in the eyes of Internet service providers is that it is difficult, if not impossible, to account for how packets are delivered, because of the nature of IP packet delivery through Internet routers. In large network businesses, accountability can mean saving millions of dollars. Even though many businesses do not place as much importance on this, the major carriers (local and long-distance) have serious concerns when it comes to carrying such traffic.
You can make a packet-based system act like a virtual circuit by adding some smarts at the end points. If you assign sequence numbers to packets that are agreed upon by both sides, the receiving end can identify which packets are missing and inform the sending end. The sender then attempts to deliver those packets to the destination again. These packets may also establish specific routes for delivery if needed. Basically, what you do in these cases is improve the logic at each end for much finer control of packet delivery. To some degree, we can see this in a TCP connection, although routes are not typically identified. DECnet's packet-based virtual circuits come closer to this idea.
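A bare-bones sketch of that end-point logic might look like the following. It is a simplification for illustration, not TCP or DECnet; the function names and the gap-report mechanism are assumptions made for the example.

    # The sender numbers each packet, the receiver reports gaps in the
    # sequence, and the sender resends just the missing ones.
    sent = {}   # seq -> payload, kept until acknowledged

    def send(seq, payload):
        sent[seq] = payload
        # (actual transmission onto the wire is omitted in this sketch)

    def on_gap_report(missing_seqs):
        # The receiver noticed holes in the sequence numbers; resend those.
        for seq in missing_seqs:
            send(seq, sent[seq])

    def on_ack(seq):
        # Once acknowledged, the saved copy can be discarded.
        sent.pop(seq, None)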
Services and service parameters
Data is delivered across an ATM circuit in small blocks known as cells. The word cell is used rather than packet to prevent confusion between the ATM delivery component and what it is actually carrying (ATM cells carrying IP packets, for example). Each cell is 53 bytes long: five bytes for the cell header and 48 bytes of data payload. Unlike IP packet headers, the cell header does not include source or destination addresses, only a virtual circuit identifier (VCI) and a virtual path identifier (VPI) indicating the circuit over which it should be delivered. Processing such a short header is very quick and requires very little intelligence at each point of delivery. ATM switches these days can process 10 to 20 million cells per second, possibly reaching 50 million cells per second in the near future.
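As a concrete look at how little work that header demands, here is a sketch that pulls the VPI and VCI out of a 53-byte cell. It assumes the UNI header layout (a 4-bit generic flow control field, 8-bit VPI, 16-bit VCI, payload type, cell-loss priority bit, and a header checksum byte); the NNI layout widens the VPI instead.

    CELL_SIZE, HEADER_SIZE, PAYLOAD_SIZE = 53, 5, 48

    def parse_cell(cell):
        assert len(cell) == CELL_SIZE
        h = cell[:HEADER_SIZE]
        vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4)                 # virtual path identifier
        vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4)  # virtual circuit identifier
        return vpi, vci, cell[HEADER_SIZE:]                      # 48-byte payload

    cell = bytes([0x00, 0x12, 0x34, 0x50, 0x00]) + bytes(48)
    vpi, vci, payload = parse_cell(cell)   # vpi == 1, vci == 9029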
The payload of an ATM cell can be of any sort, as long as it can be encoded digitally. You could have digital voice data, IP packets, and even, although redundant, Ethernet packets if you really wanted; the delivery mechanism is independent of the type of data to be delivered. The circuit itself doesn't need to know what is in the cells, just where to get them and where to put them. The payload size is kept fixed and small so that cells can be delivered very quickly. However, IP packets vary in size, and many are significantly bigger than 48 bytes. This means that an IP packet has to be broken down, or "shredded" as a common ATM joke goes, into little 48-byte cells, delivered, and then reassembled at the other end. This is one point of contention for many voices against ATM.
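The shredding and reassembly step can be sketched like this. It is deliberately simplified: real ATM adaptation layers such as AAL5 also add a trailer carrying the original length and a checksum, which is omitted here.

    # Split a variable-size packet into fixed 48-byte payloads (padding the
    # last one) and stitch them back together at the far end.
    PAYLOAD_SIZE = 48

    def shred(packet):
        cells = []
        for i in range(0, len(packet), PAYLOAD_SIZE):
            chunk = packet[i:i + PAYLOAD_SIZE]
            cells.append(chunk.ljust(PAYLOAD_SIZE, b"\x00"))   # pad the final cell
        return cells

    def reassemble(cells, original_length):
        return b"".join(cells)[:original_length]

    pkt = b"x" * 1500            # a typical full-size IP packet
    cells = shred(pkt)           # 1500 bytes -> 32 cells
    assert reassemble(cells, len(pkt)) == pkt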
There are three important parameters for the different service types available over ATM. The Peak Cell Rate (PCR) determines the maximum rate at which cells can be delivered across the circuit at any moment. A circuit cannot surpass the PCR under normal circumstances; if the rate rises beyond the PCR, cells may be dropped along the way and error recovery takes over. The Sustained Cell Rate (SCR) is the continuous average rate at which cells pass through the circuit; it is an advisory parameter used in determining service configurations. Finally, a Minimum Bit Rate (MBR) indicates the minimum amount of bandwidth reserved for the circuit. The delivery rate never falls below the MBR under normal circumstances; if the source has nothing to deliver to the destination, empty cells may be sent instead.
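To make the three parameters a bit more tangible, here is an illustrative and deliberately simplified check of an observed cell rate against such a contract. Real ATM policing uses the generic cell rate algorithm; the class and function names below are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class TrafficContract:
        pcr: float   # Peak Cell Rate: cells/second the circuit may never exceed
        scr: float   # Sustained Cell Rate: the expected long-term average
        mbr: float   # minimum rate reserved for the circuit

    def check(contract, observed_cells_per_s):
        if observed_cells_per_s > contract.pcr:
            return "over peak: excess cells may be dropped"
        if observed_cells_per_s < contract.mbr:
            return "under minimum: empty cells fill the reserved bandwidth"
        return "within contract"

    print(check(TrafficContract(pcr=100_000, scr=40_000, mbr=5_000), 120_000))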
We have only dipped our toes into the ATM pool. In my next column, we will swim into the details of how these service parameters affect communication and the different types or levels of services available. Furthermore, we will examine how ATM networks behave on a wide area scale compared to traditional packet-based networks. Finally we will delve into the politics of ATM networking and the arguments from proponents and opponents.
About the author
Rawn Shah is vice president of RTD Systems & Networking Inc., a Tucson, AZ-based network consultancy and integrator.
Reach Rawn at rawn.shah@sunworld.com.