Tuesday, October 27, 2015

Asynchronous transfer mode (ATM)







How ATM Works

Updated: March 28, 2003
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

Asynchronous transfer mode (ATM) is one of many network transmission protocols included in Windows Server 2003. The most commonly used transmission protocol included in Windows Server 2003 is TCP/IP, which is a connectionless protocol. As such, TCP/IP cannot offer some of the advantages that a connection-oriented, virtual circuit, packet-switching technology, such as ATM, can. Unlike most connectionless networking protocols, ATM is a deterministic networking system — it provides predictable, guaranteed quality of service.
The ideal environment in which to use ATM is one that combines computer, voice, and video networking into a single network, and the combination of existing networks into a single infrastructure.

ATM Architecture

ATM is a combination of hardware and software that can provide either an end-to-end network or form a high-speed backbone. The structure of ATM and its software components comprise the ATM architecture, as the following illustration shows. The primary layers of ATM are the physical layer, the ATM layer, and the ATM Adaptation layer.
ATM Architectural Diagram
Each layer and sublayer is described briefly in the following table, “ATM Layers.”
ATM Layers

Physical layer
Provides for the transmission and reception of ATM cells across a physical medium between two ATM devices. Subdivided into the Physical Medium Dependent (PMD) sublayer and the Transmission Convergence (TC) sublayer.
ATM layer
Provides cell multiplexing, demultiplexing, and VPI/VCI routing, and supervises the cell flow of each connection.
ATM Adaptation layer
Adapts application bit streams and packet streams to and from 48-byte cell payloads. Windows Server 2003 supports AAL5.

Physical Layer

The physical layer provides for the transmission and reception of ATM cells across a physical medium between two ATM devices. This can be a transmission between an ATM endpoint and an ATM switch, or it can be between two ATM switches. The physical layer is subdivided into a Physical Medium Dependent sublayer and Transmission Convergence sublayer.

PMD Sublayer

The Physical Medium Dependent (PMD) sublayer is responsible for the transmission and reception of individual bits on a physical medium. These responsibilities encompass bit timing, signal encoding, interacting with the physical medium, and the cable or wire itself.
ATM does not rely on any specific bit rate, encoding scheme, or medium, and various specifications for ATM exist for coaxial cable, shielded and unshielded twisted pair wire, and optical fiber at speeds ranging from 64 kilobits per second to 9.6 gigabits per second. In addition, the ATM physical medium can extend up to 60 kilometers or more by using single-mode fiber and long-reach lasers. Thus ATM can readily support wide-ranging connectivity, including a private metropolitan area network. The independence of ATM from a particular set of hardware constraints has allowed it to be implemented over radio and satellite links.

Transmission Convergence Sublayer

The Transmission Convergence (TC) sublayer functions as a converter between the bit stream of ATM cells and the PMD sublayer. When transmitting, the TC sublayer maps ATM cells onto the format of the PMD sublayer, such as the DS-3 interface or Synchronous Optical Network (SONET) frames. Because a continuous stream of bytes is required, unused portions of the ATM cell stream are filled by idle cells. These idle cells are identified in the ATM header and are silently discarded by the receiver. They are never passed to the ATM layer for processing.
The TC sublayer also generates and verifies the Header Error Control (HEC) field for each cell. On the transmitting side, it calculates the HEC and places it in the header. On the receiving side, the TC sublayer checks the HEC for verification. If a single bit error can be corrected, the bit is corrected, and the results are passed to the ATM layer. If the error cannot be corrected (as in the case of a multibit error) the cell is silently discarded.
Finally, the TC sublayer delineates the ATM cells, marking where ATM cells begin and where they end. The boundaries of the ATM cells can be determined from the PMD layer formatting or from the incoming byte stream using the HEC field. The TC sublayer performs the HEC validation byte by byte on the preceding 4 bytes. If it finds a match, the next ATM cell boundary is 48 bytes away (corresponding to the ATM payload). The TC sublayer performs this verification several times to ensure that the cell boundaries have been determined correctly.
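The following code is an illustrative sketch, not part of Windows ATM services, showing how an HEC value can be computed. It assumes the convention defined in ITU-T Recommendation I.432: a CRC-8 over the first 4 header bytes using the generator polynomial x^8 + x^2 + x + 1, with the result XORed with the coset value 0x55. The header bytes in the example are sample values only.

    /* Illustrative sketch: computing the ATM Header Error Control (HEC) byte.
       CRC-8 over the first 4 header bytes, polynomial x^8 + x^2 + x + 1,
       XORed with 0x55 (ITU-T I.432 convention). */
    #include <stdio.h>

    static unsigned char atm_hec(const unsigned char header[4])
    {
        unsigned char crc = 0;
        for (int i = 0; i < 4; i++) {
            crc ^= header[i];
            for (int bit = 0; bit < 8; bit++) {
                /* Shift one bit out; apply the polynomial when a 1 falls out. */
                crc = (unsigned char)((crc & 0x80) ? (crc << 1) ^ 0x07 : (crc << 1));
            }
        }
        return (unsigned char)(crc ^ 0x55);   /* coset adjustment */
    }

    int main(void)
    {
        /* Sample header: GFC=0, VPI=0, VCI=5 (signaling channel), PT=0, CLP=0 */
        unsigned char header[4] = { 0x00, 0x00, 0x00, 0x50 };
        printf("HEC = 0x%02X\n", (unsigned)atm_hec(header));
        return 0;
    }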

ATM Layer

The ATM layer provides cell multiplexing, demultiplexing, and VPI/VCI routing functions. The ATM layer also supervises the cell flow to ensure that all connections remain within their negotiated cell throughput limits. If connections operate outside their negotiated parameters, the ATM layer can take corrective action so the misbehaving connections do not affect connections that are obeying their negotiated connection contract. The ATM layer also maintains the cell sequence from any source.
The ATM layer multiplexes and demultiplexes and routes ATM cells, and ensures their sequence from end to end. However, if a cell is dropped by a switch due to congestion or corruption, it is not the responsibility of the ATM layer to correct the dropped cell by means of retransmission or to notify other layers of the dropped cell. Layers above the ATM layer must detect the lost cell and decide whether to correct it or disregard it.
In the case of interactive voice or video, a lost cell is typically disregarded because it takes too long to resend the cell and place it in the proper sequence to reconstruct the audio or video signal. A significant number of dropped cells in time-dependent services, such as voice or video, results in a choppy audio or video playback, but the ATM layer cannot correct the problem unless a higher Quality of Service is specified for the connection.
In the case of data (such as a file transfer), the upper layer application must sense the absence of the cell and retransmit it. A file with randomly missing 48-byte chunks is a corrupted file that is unacceptable to the receiver. Because operations such as file transfers are not time dependent, the contents of the cell can be recovered by incurring a delay in the transmission of the file corresponding to the recovery of the lost cell.

ATM Layer Multiplexing and Demultiplexing

ATM layer multiplexing blends all the different input types so that the connection parameters of each input are preserved. This process is known as traffic shaping.
ATM layer demultiplexing takes each cell from the ATM cell stream and, based on the VPI/VCI, either routes it (for an ATM switch) or passes the cell to the ATM Adaptation Layer (AAL) process that corresponds to the cell (for an ATM endpoint).

ATM Adaptation Layer

The ATM Adaptation Layer (AAL) creates and receives 48-byte payloads through the lower layers of ATM on behalf of different types of applications. Although there are five different types of AALs, each providing a distinct class of service, Windows Server 2003 supports only AAL5. ATM Adaptation is necessary to link the cell-based technology at the ATM Layer to the bit-stream technology of digital devices (such as telephones and video cameras) and the packet-stream technology of modern data networks (such as frame relay, X.25 or LAN protocols such as TCP/IP or Ethernet).

AAL5

AAL5 provides a way for non-isochronous (not time-dependent), variable bit rate, connectionless applications to send and receive data. AAL5 was developed as a way to provide a more efficient transfer of network traffic than AAL3/4. AAL5 merely adds a trailer to the payload to indicate size and provide error detection. AAL5 is the preferred AAL when sending connection-oriented or connectionless LAN protocol traffic over an ATM network. Windows Server 2003 supports AAL5.
AAL5 provides a straightforward framing at the Common Part Convergence Sublayer (CPCS) that behaves more like LAN technologies, such as Ethernet. The following figure, “Breakdown of an AAL5 Cell Header and Payload,” shows a detailed breakdown of an AAL5 Cell Header and Payload, followed by a detailed description of each of the components.
Breakdown of an AAL5 Cell Header and Payload
With AAL5, there is no longer a dual encapsulation. To minimize overhead, AAL5 frames the data at the CPCS but not at the Segmentation and Reassembly (SAR) sublayer. Instead of separate SAR framing, it uses a bit in the Payload Type (PT) field of the ATM header to mark the end of a frame.
AAL5 is the AAL of choice when sending connection-oriented (X.25 or Frame Relay) or connectionless (IP or IPX) LAN protocol traffic over an ATM network.
AAL5 CPCS Sublayer
The preceding figure, “Breakdown of an AAL5 Cell Header and Payload,” shows the framing that occurs at the AAL5 CPCS sublayer. (Note that only a trailer is added.)
CPCS PDU Payload
The block of data that an application sends. The size can vary from 1 byte to 65,535 bytes. The PAD field consists of padding bytes of variable length (0-47 bytes), which create a whole number of cells by making the CPCS PDU payload length a multiple of 48 bytes.
User-to-User Indication
Transfers information between AAL users.
Common Part Indicator
Currently used only for alignment processes so that the AAL5 trailer falls on a 64-bit boundary.
Length of CPCS PDU Payload Field
Indicates the length of the CPCS PDU payload in bytes. The length does not include the PAD.
Cyclic Redundancy Check (CRC)
A 32-bit portion of the trailer that performs error checking on the bits in the CPCS PDU. The AAL5 CRC uses the same CRC-32 algorithm used in 802.x-based networks such as Ethernet and Token Ring.
AAL5 SAR sublayer
The preceding figure, “Breakdown of an AAL5 Cell Header and Payload,” also shows, byte by byte, the framing that occurs at the AAL5 SAR sublayer. There is no SAR header or trailer added. On the transmitting side, the AAL5 SAR sublayer merely segments the CPCS PDU into 48-byte units and passes them to the ATM layer for the final ATM header.
On the receiving side, the sublayer reassembles a series of 48-byte units and passes the result to the CPCS. The AAL5 SAR uses the third bit in the PT field to indicate when the last 48-byte unit in a CPCS PDU is being sent. When the ATM cell is received with the third bit of the PT field set, the ATM layer indicates this fact to the AAL; the AAL then begins a CRC and length-checking analysis of the full CPCS PDU.
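The following example is an illustrative sketch of the CPCS framing and SAR segmentation just described; it is not the Windows implementation. The payload is sample data, and the CRC-32 routine uses the standard generator polynomial 0x04C11DB7 processed most-significant-bit first, which is a simplification of the exact AAL5 bit-ordering rules.

    /* Illustrative sketch of AAL5 CPCS framing and SAR segmentation. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define CELL_PAYLOAD 48

    static uint32_t crc32_msb(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint32_t)data[i] << 24;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : (crc << 1);
        }
        return ~crc;
    }

    int main(void)
    {
        const uint8_t payload[] = "sample CPCS payload";
        size_t plen = sizeof(payload) - 1;

        /* PAD makes payload plus 8-byte trailer a whole multiple of 48 bytes. */
        size_t pad = (CELL_PAYLOAD - (plen + 8) % CELL_PAYLOAD) % CELL_PAYLOAD;
        size_t pdu_len = plen + pad + 8;

        uint8_t pdu[4096] = { 0 };
        memcpy(pdu, payload, plen);

        /* Trailer: UU (1 byte), CPI (1 byte), Length (2 bytes), CRC-32 (4 bytes). */
        pdu[plen + pad + 0] = 0x00;                     /* User-to-User Indication  */
        pdu[plen + pad + 1] = 0x00;                     /* Common Part Indicator    */
        pdu[plen + pad + 2] = (uint8_t)(plen >> 8);     /* Length excludes the PAD  */
        pdu[plen + pad + 3] = (uint8_t)(plen & 0xFF);
        uint32_t crc = crc32_msb(pdu, plen + pad + 4);  /* CRC covers all but itself */
        pdu[plen + pad + 4] = (uint8_t)(crc >> 24);
        pdu[plen + pad + 5] = (uint8_t)(crc >> 16);
        pdu[plen + pad + 6] = (uint8_t)(crc >> 8);
        pdu[plen + pad + 7] = (uint8_t)(crc);

        /* SAR: slice into 48-byte units; only the last cell has the third PT bit set. */
        size_t cells = pdu_len / CELL_PAYLOAD;
        for (size_t i = 0; i < cells; i++)
            printf("cell %zu of %zu%s\n", i + 1, cells,
                   (i == cells - 1) ? "  <- PT end-of-frame bit set" : "");
        return 0;
    }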

ATM Components

To support native Asynchronous Transfer Mode (ATM), NDIS has been updated with native ATM commands. Because many applications do not yet use native ATM services, LANE support was added for LAN applications, such as Ethernet. Similarly, Microsoft has added IP over ATM support, thereby eliminating the additional header cost of LAN packets. Microsoft also added Winsock 2.0 native ATM to support the many applications that use Windows Sockets (Winsock).
Furthermore, circuit connectivity has been added to the Telephony Application Programming Interface (TAPI) to provide complete ATM support. TAPI can now make and receive calls and can redirect them to ATM circuits or from circuits into devices or other network types. Examples include Microsoft DirectShow, as well as Point-to-Point Protocol (PPP) over ATM as the dial-up remote access protocol on ATM. Using the raw channel access (RCA) kernel streaming filter, TAPI can be used to connect a data stream containing video to the RCA filter and send it over an ATM circuit, as shown in the following figure, “Windows ATM Services.”
Windows ATM Services
These enhancements allow applications to exploit ATM services, such as Quality of Service (QoS), and with the use of TAPI, achieve a high level of integration between established multimedia features and network protocols.

ATM Call Manager

The ATM signaling component, also known as the UNI call manager, handles virtual channel creation and management. This section describes how the ATM call manager does its job, specifically the handling of both permanent and switched virtual connections.

How the Call Manager Differentiates PVCs and SVCs

Permanent virtual connections (PVCs) are almost identical to switched virtual connections (SVCs), but each PVC must be manually configured, device by device, by an administrator. In contrast, SVCs are dynamically configured when they are established. Each device — from a starting end station through switches to another end station — independently determines its role in supporting a virtual connection. It also determines what device to forward the request to, and whether or not it can guarantee the requested Quality of Service at that time. PVC resource allocations are set aside the moment they are first configured, whether or not they are used immediately. SVC resource allocations are allocated dynamically.
The SVC and PVC values are both stored in the internal tables of the ATM call manager, the ATM adapter, and the ATM switch, and the kind of values stored in those tables are identical. The difference between the two kinds of connections lies in how the connection values are handled at initialization. At that time, the ATM call manager checks the registry for any PVCs. If it finds one, it stores its VC number, along with other VC information such as Quality of Service, the process ID (or more generically, the service access point), and the source and destination addresses. It uses a single bit to designate that it is a PVC and not an SVC.
During initialization, the ATM adapter does not know about PVCs. Until someone (typically an administrator) configures an application to use a PVC, applications are not aware of PVCs either. When an application wants to use a PVC, it issues an ATM command through its provided interfaces. The request specifies the destination address, the Quality of Service, and the virtual connection number (among other information). Up to this point, the PVC is handled exactly as if it were a request for an SVC. The call manager receives the request and checks the information received against the entries in its internal table of VCs. If it finds a match, and the match is designated as a PVC in the PVC field in its table, the call manager then handles the rest of the process differently from how it handles an SVC request.
A typical SVC request initiates two commands; the first determines whether the adapter can handle another VC, and the second activates the VC along the path of network components. A PVC request, however, works a little differently. When the call manager receives a request specifying a PVC, it behaves as if the PVC has already been established from end to end. It sends the two initiating calls in rapid succession to the ATM adapter. The ATM adapter never detects that it is working with a PVC. It obtains the Quality of Service and other information from the setup commands and determines how to shape the traffic. From that point, the PVC functions identically to an SVC.
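The internal tables described above are not documented in detail here. The following is a hypothetical sketch of how such a table lookup might distinguish a PVC from an SVC; the structure, field names, and entries are illustrative assumptions, not the actual call manager implementation.

    /* Hypothetical sketch of a call-manager VC table. */
    #include <stdio.h>

    struct vc_entry {
        unsigned vpi, vci;
        int is_pvc;               /* single bit distinguishing a PVC from an SVC */
        const char *dest;         /* destination address (simplified)            */
        const char *qos;          /* negotiated service category, e.g. "CBR"     */
    };

    static const struct vc_entry table[] = {
        { 0, 100, 1, "endstation-A", "CBR" },   /* read from the registry at startup */
        { 0, 200, 0, "endstation-B", "UBR" },   /* created dynamically by signaling  */
    };

    static const struct vc_entry *lookup(unsigned vpi, unsigned vci)
    {
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
            if (table[i].vpi == vpi && table[i].vci == vci)
                return &table[i];
        return NULL;
    }

    int main(void)
    {
        const struct vc_entry *vc = lookup(0, 100);
        if (vc && vc->is_pvc)
            /* PVC: behave as if the circuit already exists end to end and send the
               two setup commands to the local adapter in rapid succession. */
            printf("PVC %u/%u to %s (%s): activate locally, no network signaling\n",
                   vc->vpi, vc->vci, vc->dest, vc->qos);
        else if (vc)
            printf("SVC %u/%u: signal the network hop by hop\n", vc->vpi, vc->vci);
        return 0;
    }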

Data Encapsulation over PVC

Support for Ethernet and IP data encapsulation over ATM using VC Multiplexing or LLC/SNAP AAL5 PVCs is new to Windows Server 2003. This support extends to the encapsulation and transport of IP or Ethernet packets over ATM between clients that are connected to a supporting infrastructure using a permanent virtual connection. ATM supports Ethernet and IP data encapsulation by acting as a bridging Ethernet or routing adapter for the TCP/IP protocol.
ATM AAL5 PVC support is similar in concept to ATM LANE, except that LANE uses SVCs instead of PVCs. Windows Server 2003 supports the two encapsulation methods, LLC Encapsulation and VC Multiplexing, as described in RFC 2684. Both Ethernet and IP protocols are supported by using either encapsulation method for bridged and routed protocol data units (PDUs). For example, PPP over Ethernet (PPPoE), Layer 2 Tunneling Protocol (L2TP), Ethernet, or Ethernet encapsulated in IP are some of the protocols supported by using ATM AAL5 PVC support in Windows Server 2003.
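For illustration, the fixed LLC/SNAP header bytes that RFC 2684 places in front of each routed or bridged PDU are sketched below; with VC Multiplexing no such header is sent because the carried protocol is implied by the PVC itself. The byte values follow RFC 2684, while the surrounding program is only a sample.

    /* Illustrative sketch of the RFC 2684 LLC/SNAP headers prepended to each
       AAL5 CPCS-PDU. */
    #include <stdio.h>

    /* Routed IPv4 PDU: LLC AA-AA-03, OUI 00-00-00, EtherType 0x0800 */
    static const unsigned char llc_routed_ipv4[] = {
        0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x08, 0x00
    };

    /* Bridged Ethernet/802.3 PDU without preserved FCS:
       LLC AA-AA-03, OUI 00-80-C2, PID 00-07, then 2 bytes of PAD */
    static const unsigned char llc_bridged_eth[] = {
        0xAA, 0xAA, 0x03, 0x00, 0x80, 0xC2, 0x00, 0x07, 0x00, 0x00
    };

    static void dump(const char *label, const unsigned char *p, size_t n)
    {
        printf("%-26s", label);
        for (size_t i = 0; i < n; i++)
            printf(" %02X", p[i]);
        printf("\n");
    }

    int main(void)
    {
        dump("routed IPv4 header:", llc_routed_ipv4, sizeof llc_routed_ipv4);
        dump("bridged Ethernet header:", llc_bridged_eth, sizeof llc_bridged_eth);
        return 0;
    }
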
ATM AAL5 is implemented as an NDIS intermediate driver, as shown in the following figure, “ATM AAL5 NDIS Intermediate Driver.”
ATM AAL5 NDIS Intermediate Driver
There are several common scenarios for using ATM AAL5 PVCs. One example is using ATM AAL5 PVC support for remote connectivity from a home or small office that uses an internal ADSL modem. In Windows Server 2003 you configure the ADSL modem as an ATM AAL5 PVC connection, listed as Microsoft Ethernet PVC - RFC 2684 in Windows Server 2003. As shown in the following figure, “ADSL Connectivity with ATM AAL5 PVC,” the ADSL modem connects by using the Public Switched Telephone Network (PSTN) to a Digital Subscriber Line Access Multiplexer (DSLAM) located at your service provider, most likely the central office of your local telephony carrier. The DSLAM either bridges the encapsulated data directly to a network or connects to an external bridge, router, or ATM switch located at your service provider. A connection can then be made to the targeted network, such as your corporate office or the Internet.
ADSL Connectivity with ATM AAL5 PVC

ATM LAN Emulation Module

LAN emulation client services are included in the Windows Server 2003 operating system. When Plug and Play detects an ATM adapter and installs the appropriate driver, the LANE client is also installed by default. This permits full LANE connectivity without the need for configuration, provided that the following conditions exist:
  • The switch has LANE services available and turned on.
  • The LANE services configuration has an enabled default Emulated Local Area Network (ELAN).
For centralized administration and ease of configuration, this LANE client implementation allows configuration of only the ELAN name. All other ELAN configuration information — such as the Maximum Transmissible Unit (MTU), ELAN type, and LAN Emulation Server (LES) — is obtained from the LAN Emulation Configuration Server (LECS). If a default ELAN is enabled in the LECS, no configuration is required.

ATMARP and ARP MARS

IP over ATM support is included with Windows Server 2003. IP over ATM exposes many features of ATM so that TCP/IP can make use of them directly. With this support, applications written to use TCP/IP can also make use of ATM.
In addition, the client that runs ATMARP (IP over ATM) in Windows Server 2003 supports multicast address resolution by using Multicast Address Resolution Service (MARS). This client contains an ATM ARP/MARS service that enables Windows to act as both an ATMARP server and a MARS with integrated Multicast Server (MCS). The MARS setup allows configuration of a range of addresses; the service acts as an MCS for all those addresses.

API Support: Winsock 2.0, TAPI, and NDIS 5.0

These Windows Server 2003 ATM enhancements are possible due to extensions to the operating system. The chief extension is a connection-oriented service added to NDIS version 5.0. NDIS 5.0 includes Connection-Oriented NDIS (CoNDIS), a new NDIS API extension for the support of connection-oriented media. These new APIs enable applications and protocols to create VCs and specify Quality of Service for those VCs. CoNDIS supports multiple call managers to enable various media-specific signaling needs, including an ATM-specific call manager. In addition, CoNDIS supports point-to-multipoint connections for efficient multicast services, as shown in the following figure, “Supported CoNDIS Multicast Services.”
Supported CoNDIS Multicast Services
Two components operate on top of NDIS, integrating ATM services with the rest of the operating system and exposing ATM services through well-known APIs. Winsock 2.0 now has direct ATM support by using the Winsock ATM Service Provider. Winsock support provides direct access to ATM services from user-mode applications. With the addition of IP over ATM support, Windows Sockets applications that use TCP/IP as a transport protocol can be run over ATM networks and inter-operate with standard LAN-based IP clients.

NDIS 5.0 ATM Miniport Drivers

Although NDIS 5.0 supports both connectionless and connection-oriented network adapter drivers, only the connection-oriented drivers are of use in an ATM network.
Connection-oriented miniport drivers are always deserialized; that is, they serialize the operation of their own miniport functions and queue all incoming packets internally rather than relying on NDIS to perform the same functions. This improves full-duplex performance if the critical sections of the driver are kept small.
While the NDIS library continues to support legacy NDIS 3.0 network adapter drivers, only NDIS 4.0 and 5.0 miniport drivers can take advantage of the enhanced functionality and performance characteristics of the NDIS library support for network adapter drivers.
NDIS has several connection-oriented features. It also contains additional features such as binary compatibility, improved power management by means of Wake On LAN support (which enables a network adapter to wake a client from a low power state based on packet receipt), and checksum calculation in hardware rather than in software. As a result, driver performance is improved over all network types.

TAPI

Telephony Application Programming Interface (TAPI) is responsible for connection setup and other operating system functions related to telephony. In Windows Server 2003, TAPI has been expanded to support telephony over connection-oriented media such as ATM. While TAPI does not handle data directly, it can create a circuit and connect that circuit to another device.
By redirecting calls, TAPI provides more than just high bandwidth and good connectivity. The TAPI component of CoNDIS maps (or proxies) the TAPI call management functions to NDIS 5.0 call management functions, allowing a connection from another medium to be redirected to or from ATM. For example, TAPI can redirect calls to a data handler such as the raw channel access filter, or DirectShow components.
Note
  • DirectShow allows hardware and software vendors to create individual multimedia modules called filters. Multiple filters can be connected by the use of pins and a filter graph. TAPI connects different components, and DirectShow uses the same approach to enable filters and devices to connect to each other.

PPP over ATM

With the advent of Digital Subscriber Line (DSL) technologies, high-speed network access from the home and small office environment is becoming more of a reality. Several standards are being developed in these areas, including Asymmetric DSL (ADSL) and Universal ADSL (UADSL or DSL Lite). These technologies operate over a local loop, typically copper wire running between the public telephone network and the home. In most areas, this local loop connects directly to an ATM core network run by a telephone company.
Without changing protocols, ATM over the DSL service preserves the high-speed characteristics and QoS guarantees, which are available in the core. These guarantees create the potential for an end-to-end ATM network to the residence or small office. This network model provides several advantages, including:
  • Protocol transparency
  • Support for multiple classes of QoS with service guarantees
  • Bandwidth scalability
  • An evolution path to newer DSL technologies
Adding the Point-to-Point Protocol (PPP) over this end-to-end architecture adds functionality and usefulness. PPP provides the following additional advantages:
  • Authentication
  • Open System Interconnection (OSI) Layer 3 address assignment
  • Multiple concurrent sessions to different destinations
  • OSI Layer 3 protocol transparency
  • Encryption and compression
With an ATM adapter and this new level of integrated ATM support, these enhancements provide high bandwidth even over a telephone line. In addition, with the adoption of PPP over ATM, little change is required at the ISP level, because telephone companies and ISPs typically use PPP.
Finally, if each VC carries only one PPP session, each destination has its own authenticated PPP session, providing per-VC authentication. This provides an extra measure of security. Using Null Encapsulation over AAL5 can further reduce overhead because the protocol multiplexing is provided in PPP.

PPP over ATM and NDISWAN

In earlier versions of Windows operating systems, the Network Driver Interface Specification Wide Area Network (NDISWAN) component both supported operation of standard protocol stacks over WAN media and acted as the PPP engine. As explained earlier, in Windows Server 2003 the NDISWAN component has been extended and the TAPI proxy component added to provide this same support over NDIS 5.0 connection-oriented media, such as ATM.
At initialization, the NDISWAN component, acting as a client to the TAPI proxy, registers itself as the stream handler for PPP data. When the user starts dial-up networking to connect to a network, the dial-up networking module communicates with TAPI to make the phone call. When the request is made on an ATM device, TAPI does two things:
  • Using the TAPI proxy and NDIS 5.0, it uses a call manager to make the telephone call by way of the ATM adapter.
  • When the call connects, it redirects the connection from the adapter by using NDIS 5.0 to NDISWAN.
NDISWAN then handles further network (PPP) negotiation, and ultimately, by means of the LAN and TCP/IP stacks, it connects the user’s computer to the remote network. The important thing to note here is that TAPI makes the call and then gives the resulting connection to another process, in this case NDISWAN.
This connectivity enables several new types of applications, such as DVD-quality streaming video, real-time process control, a common standard for both LAN and WAN, and integrated software that pulls together aspects of TV, telephony, and data streams. Many of these applications make use of the raw channel access filter and DirectShow technology.

Support for Raw Channel Access Filtering: DirectShow

DirectShow technology was developed to better integrate multimedia services and to help multimedia developers more easily customize the operating system to their needs. DirectShow allows hardware and software vendors to create individual multimedia modules called filters. Multiple filters can be connected by the use of pins and a filter graph. TAPI connects different components, and DirectShow uses the same approach to enable filters and devices to connect to each other.
The following figure, “Windows COM-Based DirectStreaming,” shows Windows COM-based DirectStreaming by which a Windows Server 2003-based application can handle many categories of real-time inputs.
Windows COM-Based DirectStreaming
DirectShow has an RCA filter — a simple module that exposes the raw data, whether it is voice, video, or other, to any device that wants to handle it. With NDIS 5.0, the RCA filter can be connected to TAPI. NDIS 5.0 can export ATM VCs as DirectShow pins.

Raw Channel Access Filtering

The Windows Server 2003 support for raw channel access filtering comes by means of the CoNDIS 5.0 driver. The NDIS proxy sets up a call at the ATM layer, running from the proxy to the call manager and on to the client. Unlike many voice or video feeds, this connection is made by using AAL5 rather than AAL1. The analog data transferred through the filter becomes digital data, which is packaged and handled as any other data over an AAL5 connection.

DirectShow Application

An example of using DirectShow and raw channel access support in Windows ATM services is a video-streaming application that delivers current weather information over the telephone. Customers can simply dial a number for the recorded information.
The following steps outline this process:
  1. At initialization, the raw channel access filter registers as the stream handler for voice data.
  2. A user calls a number to get the current weather information.
  3. TAPI receives the call. TAPI redirects the incoming call to the raw channel access filter because it is a voice call.
  4. NDIS 5.0 maps the DirectShow pin to the VC number.
  5. DirectShow searches the filter graph, and the stream starts.
The following figure, “Weather Report Application Using DirectShow RCA Filter,” shows how a weather report application uses the DirectShow raw channel access filter. The path shows how the data is routed through the various protocols from telephony data to the application layer.
Weather Report Application Using DirectShow RCA Filter

IP Phone Access

Similarly, a user can make a telephone call that is routed across a traditional LAN. This can enable such things as IP-based telephones. Again, TAPI handles the incoming call and uses NDIS 5.0 to connect it to a pin. DirectShow then reformats the data using a real-time protocol filter that goes through UDP/IP to reach an Ethernet card. The resulting connection allows a telephone user to talk to a computer user. This ATM-based network integration crosses previous boundaries between telephone and computer networks.

ATM Cell Structure

At either a private or a public User-Network Interface (UNI), an ATM cell always consists of a 5-byte header followed by a 48-byte payload. The header is composed of six elements, each detailed in the following figure, “Cell Header Structure.”
Cell Header Structure
Generic Flow Control
The Generic Flow Control (GFC) field is a 4-bit field that was originally added to support the connection of ATM networks to shared access networks such as a Distributed Queue Dual Bus (DQDB) ring. The GFC field was designed to give the User-Network Interface (UNI) 4 bits in which to negotiate multiplexing and flow control among the cells of various ATM connections. However, the use and exact values of the GFC field have not been standardized, and the field is always set to 0000.
Virtual Path Identifier
The Virtual Path Identifier (VPI) defines the virtual path for this particular cell. VPIs for a particular virtual channel connection are discovered during the connection setup process for switched virtual connection (SVC) connections and manually configured for permanent virtual connection (PVC) connections. At the UNI, the VPI length of 8 bits allows up to 256 virtual paths. VPI 0 exists by default on all ATM equipment and is used for administrative purposes such as signaling to create and delete dynamic ATM connections.
Virtual Channel Identifier
The Virtual Channel Identifier (VCI) defines the virtual channel within the specified virtual path for this particular cell. Just as with VPIs, VCIs are also discovered during the connection setup process for switched virtual connection (SVC) connections and manually configured for permanent virtual connection (PVC) connections. The VCI length of 16 bits allows up to 65,536 virtual channels for each virtual path. VCIs from 0 to 15 are reserved by the International Telecommunication Union (ITU), and VCIs from 16 to 32 are reserved by the ATM Forum for each virtual path. These reserved VCIs are used for signaling, operation and maintenance, and resource management.
The combination of VPI and VCI values identifies the virtual channel for a specified ATM cell. The VPI/VCI combination provides the ATM forwarding information that the ATM switch uses to forward the cell to its destination. The VPI/VCI combination is not a network layer address such as an IP or IPX network address.
The VPI/VCI combination acts as a local identifier of a virtual channel and is similar to the Logical Channel Number in X.25 and the Data Link Connection Identifier (DLCI) in Frame Relay. At any particular ATM endpoint or switch, the VPI/VCI uniquely identifies a virtual connection to the next ATM endpoint or switch. The VPI/VCI pair need not match the VCI/VPI used by the final destination ATM endpoint.
The VPI/VCI combination is unique for each transmission path, that is for each cable or connection to the ATM switch. However, two different virtual channels on two different ports on an ATM switch can have the same VPI/VCI without conflict.
Payload Type Indicator
The Payload Type Indicator is a 3-bit field. Its bits are used as follows:
  • The first bit indicates the type of ATM cell that follows. A first bit set to 0 indicates user data; a bit set to 1 indicates operations, administration and management (OA&M) data.
  • The second bit indicates whether the cell experienced congestion in its journey from source to destination. This bit is also called the Explicit Forward Congestion Indication (EFCI) bit. The second bit is set to 0 by the source. If an interim switch experiences congestion while routing the cell, it sets the bit to 1. After it is set to 1, all other switches in the path leave this bit value at 1.
  • Destination ATM endpoints can use the EFCI bit to implement flow control mechanisms to throttle back on the transmission rate until cells with an EFCI bit set to 0 are received.
  • The third bit indicates the last cell in a block for AAL5 in user ATM cells. For non-user ATM cells, the third bit is used for OA&M functions.
Cell Loss Priority
The Cell Loss Priority (CLP) field is a 1-bit field used as a priority indicator. When it is set to 0, the cell is high priority and interim switches must make every effort to forward the cell successfully. When the CLP bit is set to 1, the interim switches sometimes discard the cell in congestion situations. The CLP bit is very similar to the Discard Eligibility bit in Frame Relay.
An ATM endpoint sets the CLP bit to 1 when a cell is created to indicate a lower priority cell. The ATM switch can set the CLP to 1 if the cell exceeds the negotiated parameters of the virtual channel connection. This is similar to bursting above the Committed Information Rate in Frame Relay.
Header Error Control
The Header Error Control (HEC) field is an 8-bit field that allows an ATM switch or ATM endpoint to correct a single-bit error or to detect multi-bit errors in the first 4 bytes of the ATM header. Multi-bit error cells are silently discarded. The HEC only checks the ATM header and not the ATM payload. Checking the payload for errors is the responsibility of upper layer protocols.
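As an illustrative sketch (not part of Windows ATM services), the following program unpacks the header fields described above from a 5-byte UNI cell header. The header bytes used are sample values.

    /* Illustrative sketch: unpacking the fields of a 5-byte ATM UNI cell header. */
    #include <stdio.h>

    struct atm_header {
        unsigned gfc, vpi, vci, pt, clp, hec;
    };

    static struct atm_header parse_uni_header(const unsigned char h[5])
    {
        struct atm_header a;
        a.gfc = h[0] >> 4;                                          /* 4 bits, 0000 at the UNI        */
        a.vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4);                 /* 8 bits                          */
        a.vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4);  /* 16 bits                         */
        a.pt  = (h[3] >> 1) & 0x07;                                 /* 3 bits: user/OA&M, EFCI, AAL5 end */
        a.clp = h[3] & 0x01;                                        /* 1-bit cell loss priority        */
        a.hec = h[4];                                               /* 8-bit header error control      */
        return a;
    }

    int main(void)
    {
        /* Sample header: VPI 1, VCI 42, last cell of an AAL5 frame (third PT bit set) */
        unsigned char h[5] = { 0x00, 0x10, 0x02, 0xA2, 0x00 };
        struct atm_header a = parse_uni_header(h);
        printf("GFC=%u VPI=%u VCI=%u PT=%u CLP=%u HEC=0x%02X\n",
               a.gfc, a.vpi, a.vci, a.pt, a.clp, a.hec);
        return 0;
    }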

Virtual Paths and Virtual Channels

ATM uses virtual paths and channels to logically divide the bandwidth of the transmission path.
The following figure, “Channels within a Path inside the Transmission Medium,” shows how virtual paths and channels are divided within the transmission medium.
Channels Within a Path Inside the Transmission Medium
Transmission Path
The transmission path consists of the physical cable connected to a particular port of an ATM switch. The cable has a defined bandwidth, such as 155 megabits per second for an Optical Carrier-3 (OC-3) optical fiber link.
Virtual Path
The bandwidth of the transmission path is logically divided into separate virtual paths and identified using the VPI in the ATM header. Each virtual path is allocated a fixed amount of bandwidth. Virtual paths do not dynamically vary their bandwidths beyond what has been allocated.
Virtual Channel
The bandwidth of a virtual path is logically divided into separate virtual channels using a virtual channel identifier in the ATM header. Unlike virtual paths, virtual channels share the bandwidth dynamically within a virtual path.

Switching Hierarchy

The transmission path to virtual path to virtual channel hierarchy is the basis for ATM switching. ATM can switch cells at the transmission path, virtual path and virtual channel level.

Switching at the transmission path level

Switching at the transmission path level allows an ATM switch to determine which output port to use to forward the cell.

Switching at the virtual path level

Switching at the virtual path level allows entire groups of virtual channels to be switched at the same time. Virtual path switching is similar to the telephone system cross-connect switching of entire groups of telephone calls based on the area code of the phone number. The switching occurs based on the area code, not the 7-digit individual phone number.
When performing virtual path switching, an ATM switch acts only on the virtual path identifier in the ATM cell header. This ability to bypass the rest of the header makes virtual path switching faster than virtual channel switching.
ATM virtual path switching most often occurs within the public networks of ATM service providers because this virtual path switching allows ATM service providers to aggregate bundles of virtual channels along high-speed backbone links. These aggregate channels create trunk line structures very similar to those used in telephone networks.

Switching at the virtual channel level

Switching at the virtual channel level allows for precise switching and bandwidth allocation. Virtual channel switching resembles switching a phone call to its final 7-digit location; that is, ATM switching is based on the entire VPI/VCI, just as the final phone switching is based on the entire 10-digit phone number (3-digit area code plus 7-digit individual phone number).
ATM virtual channel switching occurs within both private and public networks. A switch must analyze both the virtual path identifier and the virtual channel identifier to make a switching decision.
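The following is a hypothetical sketch of a switch forwarding table that illustrates the difference: a virtual path entry matches on the incoming VPI only and carries the VCI through unchanged, while a virtual channel entry matches and rewrites the full VPI/VCI pair. The table entries, the VCI-0 wildcard convention, and the port numbers are illustrative assumptions.

    /* Hypothetical sketch of an ATM switch forwarding table. */
    #include <stdio.h>

    struct vc_route {
        int in_port;  unsigned in_vpi, in_vci;   /* incoming transmission path and VPI/VCI */
        int out_port; unsigned out_vpi, out_vci; /* rewritten values on the outgoing port  */
    };

    static const struct vc_route routes[] = {
        { 1, 2, 0,  3, 7, 0 },     /* VP switch: VCI 0 used here as a wildcard, whole path moves */
        { 1, 5, 32, 2, 9, 101 },   /* VC switch: full VPI/VCI lookup and rewrite                 */
    };

    int main(void)
    {
        int in_port = 1; unsigned vpi = 5, vci = 32;   /* sample incoming cell */
        for (size_t i = 0; i < sizeof(routes) / sizeof(routes[0]); i++) {
            const struct vc_route *r = &routes[i];
            int vp_match = (r->in_vci == 0);           /* treat VCI 0 as "any" in this sketch */
            if (r->in_port == in_port && r->in_vpi == vpi &&
                (vp_match || r->in_vci == vci)) {
                printf("forward to port %d as VPI %u / VCI %u\n",
                       r->out_port, r->out_vpi, vp_match ? vci : r->out_vci);
                return 0;
            }
        }
        printf("no route: cell discarded\n");
        return 0;
    }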

Quality of Service

As part of the negotiated connection, ATM endpoints establish a service contract that guarantees a specific quality of service. These Quality of Service (QoS) guarantees are not offered by traditional LAN technologies.
With a traditional LAN, any notion of service guarantee is based on priority, where one transmission receives delivery preference over others. Because the sending end station does not know the condition of the network or the recipient of the data prior to transmission (traditional LANs are connectionless), traffic is subject to delay at routers and elsewhere. These unforeseen delays make bandwidth availability and delivery times difficult to predict. While higher-priority traffic typically reaches its destination prior to lower-priority traffic, it is possible that the higher priority traffic arrives too late for isochronous traffic.
Note
  • The term “QoS” applies to several forms of quality guarantees, including ATM QoS and generic quality of service. Of these, only ATM QoS is implemented at the hardware level.
ATM offers precise and explicit service guarantees that are not based on a relative structure (such as priority). With ATM, a data supplier can request a specific bandwidth, maximum delay, and delay variation tolerance. Each ATM switch then determines whether or not it can meet the request after taking current allocations into consideration. If it can accommodate the transmission, it guarantees the service level and allocates the necessary resources. With ATM, the service contract is enforced, and the bandwidth is allocated at the hardware level. All switches between the sender and receiver agree to the service level before the contract is granted. The source station hardware, also having agreed to the contract, is responsible for shaping the traffic to fit the connection contract before it enters the network.
ATM offers five service categories.
Constant Bit Rate (CBR)
Specifies a fixed bit rate. Data is sent in a steady stream with low cell loss. This is an expensive service because the granted bandwidth must be allocated, whether or not it is actually used. CBR is typically used for circuit emulation. This category is supported in Windows Server 2003. 
Variable Bit Rate (VBR)
Specifies a throughput capacity over time, but data is not sent at a constant rate. VBR also specifies low cell loss. It is available in two varieties, real-time VBR for isochronous applications and non-real-time VBR for all others.
Available Bit Rate (ABR)
Ensures a guaranteed minimum capacity but allows data to be sent at higher capacities when the network is free. ABR adjusts the rate of transmission based on feedback and also specifies low cell loss. ABR provides better throughput than VBR, but is less expensive than CBR. It is important to note that ABR has only recently been fully defined, and not all hardware and software support this service category. It is part of the UNI 4.0 specification.
Unspecified Bit Rate (UBR)
Does not guarantee bandwidth or throughput, and cells can be dropped. A UBR connection does not have a contract with the ATM network. This category is supported in Windows Server 2003.
Weighted Unspecified Bit Rate (WUBR)
The newest service category put forward by the ATM Forum. It functions by assigning different processing priorities to different types of traffic, similar to a traditional connectionless LAN. Each such type of traffic is carried across a different connection; cells in connections with lower priority are dropped before those with higher priority.
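The service categories above are requested as part of a connection's traffic contract. The following sketch shows one illustrative way such a contract might be represented, mapping a category to the descriptor parameters commonly associated with it (peak cell rate, sustainable cell rate, maximum burst size, minimum cell rate). The structure and field names are assumptions for this example, not a Windows or ATM Forum API.

    /* Illustrative sketch only: a traffic contract structure. */
    #include <stdio.h>

    enum service_category { CBR, VBR_RT, VBR_NRT, ABR, UBR, WUBR };

    struct traffic_contract {
        enum service_category category;
        unsigned pcr;   /* peak cell rate, cells per second  */
        unsigned scr;   /* sustainable cell rate (VBR)       */
        unsigned mbs;   /* maximum burst size in cells (VBR) */
        unsigned mcr;   /* minimum cell rate (ABR)           */
    };

    int main(void)
    {
        /* Roughly 150,000 cells/s of 48-byte payloads is about 57 megabits/s. */
        struct traffic_contract video = { VBR_RT, 150000, 90000, 200, 0 };
        struct traffic_contract bulk  = { UBR, 0, 0, 0, 0 };

        printf("video: PCR=%u SCR=%u MBS=%u (guaranteed)\n",
               video.pcr, video.scr, video.mbs);
        printf("bulk:  no guarantees, cells may be dropped (category %d)\n",
               bulk.category);
        return 0;
    }
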
Guaranteed QoS allows ATM to support time-sensitive (isochronous) applications, such as video and voice, as well as more conventional network traffic. While 100-megabit Ethernet and other high-speed networks can provide comparable bandwidth, only ATM can provide the QoS guarantees required for real-time telephony, VCR-quality video streaming, CD-quality sound, smooth videoconferencing, and other delay-sensitive voice and video applications.
QoS is so vital to the industry that several initiatives are underway to provide QoS support for connectionless TCP/IP–based networks. While these solutions are useful, they require that all nodes on the network participate, which can be difficult to guarantee on heterogeneous networks. Because these solutions shape the traffic in software, latency and variations in delay are sometimes introduced. This is not the case with ATM.
Most importantly, the acceptance of ATM as a common standard for both LANs and WANs enables an enterprise to deploy QoS applications and integrated services. The deployment of ATM over Asymmetric Digital Subscriber Line (ADSL) to the home enables residential access to these services. ADSL uses existing copper twisted-pair telephone lines to transmit broadband data to the home, without requiring recabling or a new telephone infrastructure. This extends the reach of ATM networks from the home desktop to the business desktop and everywhere in between.

ATM Addresses

ATM addresses are needed to support the use of virtual connections through an ATM network. At the simplest level, ATM addresses are 20 bytes in length and composed of three distinct parts. The following figure, “Simplified View of ATM Addressing,” shows the three parts of the 20-byte ATM address.
Simplified View of ATM Addressing
This ATM address breaks down into the following three basic parts:
ATM switch identifier
The first 13 bytes identify a particular switch in the ATM network. The use of this portion of the address can vary considerably depending on which address format is in use. Each of the three major ATM addressing schemes in use provides information about ATM switch location differently. The three formats are the data country/region code (DCC) format, international code designator (ICD) format, and the E.164 format proposed by the ITU Telecommunication Standardization Sector (ITU-T) for international telephone numbering use in broadband Integrated Services Digital Network (ISDN) networks.
Adapter MAC address
The next 6 bytes identify a physical endpoint, such as a specific ATM adapter, using a media access control (MAC) layer address that is physically assigned to the ATM hardware by the manufacturer. The use and assignment of MAC addresses for ATM hardware is identical to MAC addressing for other Institute of Electrical and Electronic Engineers (IEEE) 802.x technologies, such as Ethernet and Token Ring.
Selector
The last byte is used to select a logical connection endpoint on the physical ATM adapter.
Although all ATM addresses fit this basic three-part structure, there are significant differences in the exact format of the first 13 bytes of any given address, depending on the addressing format that is being used or whether the ATM network is for public or private use.
In summary, the 20-byte ATM address is in the hierarchical format starting with the switch at the highest level, down to the adapter, and then down to the logical endpoint.
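The following sketch simply splits a 20-byte ATM address into the three parts described above. The address bytes are sample values (the prefix shown uses the ICD format identifier 0x47), not a real network address.

    /* Illustrative sketch: splitting a 20-byte ATM (NSAP-format) address. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned char addr[20] = {
            0x47, 0x00, 0x91, 0x81, 0x00, 0x00, 0x00,      /* 13-byte network   */
            0x00, 0x00, 0x00, 0x00, 0x00, 0x01,             /* prefix (switch)   */
            0x00, 0xA0, 0xC9, 0x12, 0x34, 0x56,             /* 6-byte ESI (MAC)  */
            0x00                                             /* 1-byte selector   */
        };

        printf("network prefix (switch): ");
        for (int i = 0; i < 13; i++) printf("%02X", addr[i]);
        printf("\nend-system identifier (adapter MAC): ");
        for (int i = 13; i < 19; i++) printf("%02X", addr[i]);
        printf("\nselector (logical endpoint): %02X\n", addr[19]);
        return 0;
    }
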
All of the three ATM address formats that are currently in widespread use (DCC, ICD, and E.164) include the following characteristics:
  • Compliance with the Network Service Access Point addressing plan as proposed by the Open Systems Interconnection (OSI) protocol suite of the International Organization for Standardization (ISO).
  • Each can be used to establish and interconnect privately-built ATM networks that support switched virtual connections (SVCs).

Addressing in Detail

The type of ATM address used depends on whether the addresses are for a public or private ATM network. ATM addresses are used to establish virtual channels between ATM endpoints. The following figure, “Primary ATM Address Formats,” shows the three primary address formats.
Primary ATM Address Formats
The most important fields from these address formats are listed in the following table, “Primary ATM Address Format Fields.”
Primary ATM Address Format Fields

AFI
The single-byte authority and format identifier (AFI) identifies the type of address. The defined values are 45 for E.164, 47 for ICD, and 39 for DCC addresses.
DCC
2 bytes of data country code.
AA
Identifies the administrative authority within the domain-specific part (DSP) of the address.
Reserved
Reserved for future use.
RD
2 bytes of routing domain information.
Area
2 bytes of area identifier.
ESI
6 bytes of end-system identifier, which is an IEEE 802.x MAC address.
SEL
1 byte of NSAP selector.
ICD
2 bytes of international code designator (ICD).
E.164
8 bytes (16 digits) of the ISDN telephone number.

ATM Connection Types

ATM connections between endpoints are not distinguished only by their various Quality of Service parameters and the formats of their addressing schemes. They also fall into one of two larger categories: point-to-point connections and point-to-multipoint connections. The connection types that any particular ATM connection uses depend on how ATM signaling builds its connection.

Signaling

Signaling components exist at the end station and at the ATM switch. The signaling layer of ATM software is responsible for creating, managing, and terminating switched virtual connections (SVCs). There are two signaling standards:
User Network Interface
The ATM standard wire protocol implemented by the signaling software
Network Network Interface
The way one ATM switch signals another ATM switch
ATM Signaling

Point-to-Point Connection

When a process that uses ATM seeks to connect to another process elsewhere on the network, it asks the signaling software to establish an SVC. To do this, the signaling software sends an SVC creation request to the ATM switch using the ATM adapter and the reserved signaling VC. Each ATM switch forwards the request to another switch until the request reaches its destination. An ATM switch determines which switch to send the request to next based on the ATM address for the connection and the internal network database (routing tables) of the switch. Each switch also determines whether it can meet the requested service category and Quality of Service. At any point in this process, a switch can refuse the request.
If all the switches along the path can support the virtual channel as requested, the destination end station receives a packet that contains the VC number. From that point on, the process that uses ATM can interact with the destination process directly by sending packets to the VPI/VCI that identify the specified VC.
The ATM adapter shapes data traffic for each VC to match the contract made with the ATM network. If too much data is sent for any reason, the ATM switch can ignore — and lose — the data in favor of providing bandwidth to another contract or set of contracts. This is true for the entire breadth of the network; if bandwidth or speed exceeds the limits established by the contract, any device, including the ATM adapter, can simply drop the data. If this happens, the end stations concerned are not notified of the cell loss.
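The section above does not specify how a device decides that a connection has exceeded its contract. One commonly used mechanism is the Generic Cell Rate Algorithm (GCRA), a leaky-bucket test defined for ATM traffic management; the following sketch shows the idea with illustrative parameters and arrival times.

    /* Sketch of leaky-bucket policing in the style of the Generic Cell Rate
       Algorithm (GCRA).  Parameters and arrival times are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        double increment = 10.0;   /* T: expected spacing between conforming cells */
        double limit     = 5.0;    /* tau: tolerated burstiness                    */
        double tat       = 0.0;    /* theoretical arrival time of the next cell    */

        double arrivals[] = { 0, 2, 4, 30, 31, 32, 60 };
        for (int i = 0; i < 7; i++) {
            double t = arrivals[i];
            if (t < tat - limit) {
                /* Cell arrived too early: non-conforming; tag (CLP=1) or discard. */
                printf("t=%5.1f  non-conforming\n", t);
            } else {
                /* Conforming: advance the theoretical arrival time. */
                tat = (t > tat ? t : tat) + increment;
                printf("t=%5.1f  conforming (next TAT %.1f)\n", t, tat);
            }
        }
        return 0;
    }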

Point-to-Multipoint Connection

Unlike a standard LAN environment, ATM is a connection-oriented medium that has no inherent capabilities for broadcasting or multicasting packets. To provide this ability, the sending node can create a virtual channel to all destinations and send a copy of the data on each virtual channel. However, this is highly inefficient. A more efficient way to do this is through point-to-multipoint connections. Point-to-multipoint connects a single source endpoint, known as the root node, to multiple destination endpoints, known as leaves. Wherever the connection splits into two or more branches, the ATM switches copy cells to the multiple destinations.
Point-to-multipoint connections are unidirectional; the root can transmit to the leaves, but the leaves cannot transmit to the root or to each other on the same connection. Leaf-to-root and leaf-to-leaf transmission requires a separate connection. One reason for this limitation is the simplicity of AAL5 and the inability to interleave cells from multiple payloads on a single connection.

LAN Emulation

LAN Emulation (LANE) is a group of software components that allows ATM to work with legacy networks and applications. With LAN emulation, you can run traditional LAN applications and protocols on an ATM network without modification.
LANE makes the ATM protocol layers appear to be an Ethernet or Token Ring LAN to overlying protocols and applications. LAN emulation provides an intermediate step between fully exploiting ATM and not using ATM at all. LANE can increase the speed of data transmission for current applications and protocols when ATM is used over high speed media; unfortunately, LANE does not take advantage of native ATM features such as QoS. However, LANE does allow your current system and software to run on ATM, and it facilitates communication with nodes attached to legacy networks.

LANE Architecture

LANE consists of two primary components: the LANE client and the LANE services. The LANE client allows LAN protocols and the applications that use them to function as if they were communicating with a traditional LAN. It exposes LAN functionality at its top edge (to users) and native ATM functionality at its bottom (to the ATM protocol layers).
The LANE services are a group of native ATM applications that hide the connection-oriented nature of ATM from connectionless legacy protocols. These services maintain the databases necessary to map LAN addresses to ATM addresses, thus allowing the LANE clients to create connections and send data.
The LANE services components can reside anywhere on an ATM network, but most ATM switches come with LANE services components installed. Therefore, for practical purposes, LANE services reside on an ATM switch or group of switches.
The three primary LANE services are the LAN emulation configuration server (LECS), the LANE server (LES), and the Broadcast and Unknown server (BUS). The LECS distributes configuration information to clients, allowing them to register on the network. The LES manages one or more Emulated LANs (ELANs). It is responsible for adding members to the ELAN, maintaining a list of all the members of the ELAN, and handling address resolution requests for the LANE clients. The BUS handles broadcast and multicast services, as shown in the following figure, “LANE Client, LECS, LES, and BUS.”
LANE Client, LECS, LES, and BUS

LANE Client Interaction with ATM Network

When the LANE client seeks to join the network, the first thing it must do is find the LECS, because the LECS gives the client the address of the LES managing the ELAN that it seeks to join. Without the LES address, the client cannot communicate with other members of the ELAN. At initialization, the client has not yet established a connection to any ATM switch, much less to the switch or other entity containing the LECS. The client must first establish an ATM connection, preferably a connection directly to the configuration server.
If the ATM network has only a single ATM switch, and the switch contains all the LANE services, then finding the LECS is easy. However, if the network has multiple switches, the local switch to which the LANE client has immediate access might not have LANE services running on it. Fortunately, LANE includes several established mechanisms for a LANE client to discover the LECS.

LECS Discovery

The LANE client can use any of the following techniques when attempting to connect to the LECS:
  • It can use a well-known ATM address, defined in the ATM protocol.
  • It can use a well-known VC.
  • It can query using the Integrated Local Management Interface (ILMI).
Both the well-known ATM address and the well-known VC are standardized. Most switches and clients are preconfigured with this information. In most cases, the LANE client can find the LECS using one of these methods. However, if the well-known values have been changed at the end station or at the switch, neither type of discovery succeeds.
If this happens, the LANE client can fall back on ILMI, a protocol standard (similar to Simple Network Management Protocol) designed for ATM administrative and configuration purposes. ILMI provides a query function that the LANE client can use to find the LECS address, and then set up a VC to it.
After the client has discovered the LECS and connected to it, the client asks the LECS to provide configuration information to allow it to connect to a particular ELAN. It does this by sending one or more pieces of information about the desired ELAN, such as the LAN type (Ethernet or Token Ring), the maximum packet size, and the name of the LAN.
The LECS takes the information from the LANE client and reads its table of ELANs to find a match. When it locates the correct ELAN, it returns that address to the LANE client.

LES Address Matching

With the information provided by the LECS, the LANE client can join the ELAN. To do this, it sends an emulated LAN address and its true ATM address to the LES. The LES registers this information. From then on, the LANE client can send and receive data over the ATM network as if it were using a normal LAN.
When the LANE client receives a request from a protocol (such as TCP/IP, IPX, or NetBEUI) to send information to another point in the ELAN, it sends the destination LAN address to the LES. The LES looks for a match in its database, and then returns the true ATM address to the LANE client. The client then sets up a normal VC between itself and the destination, and subsequent data traffic is sent directly on this VC without any further intervention by the LES or the other LANE services. While this address resolution request is being processed, interim traffic is sent to the BUS and copied from there to all stations in the ELAN.
If the LES does not find a match for the destination address, the data is sent to the Broadcast and Unknown Server (BUS). The BUS attempts to deliver the data to the unknown client.
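The following is a hypothetical sketch of the client-side decision just described: look up the destination LAN (MAC) address in a local cache of MAC-to-ATM mappings (built from earlier LES replies), and if no mapping is known, hand the frame to the BUS. The structures, names, and addresses are illustrative assumptions, not the Windows implementation.

    /* Hypothetical sketch of the LANE client address-resolution decision. */
    #include <stdio.h>
    #include <string.h>

    struct arp_entry { const char *mac; const char *atm; };

    static const struct arp_entry cache[] = {
        { "00-A0-C9-11-22-33", "atm-address-of-endstation-A" },
    };

    static const char *resolve(const char *mac)
    {
        for (size_t i = 0; i < sizeof(cache) / sizeof(cache[0]); i++)
            if (strcmp(cache[i].mac, mac) == 0)
                return cache[i].atm;     /* already known, or just returned by the LES */
        return NULL;                     /* the LES has no match either                */
    }

    int main(void)
    {
        const char *dst = "00-A0-C9-44-55-66";
        const char *atm = resolve(dst);
        if (atm)
            printf("set up a direct VC to %s\n", atm);
        else
            printf("destination %s unknown: send via the BUS for flooding\n", dst);
        return 0;
    }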

BUS Distribution

The BUS does two things: it handles distribution of data to unknown clients and it emulates LAN broadcast services. If the LES cannot find a particular ELAN client, the data is sent to the BUS for distribution, and the BUS forwards it to all the clients of the ELAN.
The BUS also handles broadcasts. It registers its address with the LES just as any other client does, registering under the all-ones broadcast address (FF-FF-FF-FF-FF-FF), which is the normal LAN address for a broadcast message. When a LANE client protocol wants to broadcast a message to the entire LAN, it addresses the message to the broadcast address and passes it on. The LEC sends this address to the LES for resolution, and the LES returns the ATM address of the BUS. The LEC can then send the message to the BUS. The BUS maintains a list of all clients on the ATM network and sends the message to all clients. The BUS service is typically located in the same piece of equipment with the LES.

Integrated Local Management Interface

The ILMI resides on an ATM switch and provides diagnostic, monitoring, and configuration services to the User-Network Interface. The ILMI is defined by the ATM Forum and uses the Simple Network Management Protocol (SNMP) and a Management Information Base (MIB). It runs over AAL3/4 or AAL 5 with a default VPI/VCI of 0/16.
The ILMI MIB contains data describing the physical layer, the local VPCs and VCCs, network prefixes, administrative and configuration addresses, ATM layer statistics, and the ATM layer itself. The most common client-oriented function of the ILMI is to assist a client during LECS discovery.

TCP/IP over ATM

The protocol for classical IP over ATM (sometimes abbreviated as CLIP/ATM) is a well-established standard spelled out in RFC 1577 and subsequent documents. Windows Server 2003 provides a full implementation of this standard.
The IP over ATM approach provides several advantages over ELAN solutions. The most obvious advantages are its ability to support QoS interfaces, its lower overhead (as it requires no MAC header), and its lack of a frame size limit.

IP over ATM Architecture

IP over ATM is a group of components that do not necessarily reside in one place; when they are distributed, the server services do not typically reside on an ATM switch. For the purposes of this discussion, it is assumed that the IP over ATM server services reside on a server running Windows Server 2003.
The core components required for IP over ATM are roughly the same as those required for LANE, as both approaches require the mapping of a connectionless medium to a connection-oriented medium, and vice versa. In IP over ATM, these services are provided by an IP ATMARP server for each IP subnet. This server maintains a database of IP and ATM address mappings, and provides configuration and broadcast services.

IP over ATM Components

IP over ATM is a very small layer between the ATM protocol and the TCP/IP protocol. As with LANE, the client emulates standard IP to the TCP/IP protocol at its top edge while simultaneously issuing native ATM commands to the ATM protocol layers underneath.
IP over ATM is often preferred to LANE because it is faster. A primary reason for this performance advantage is that IP over ATM adds almost no additional header information to packets as they are handed down the stack. After it has established a connection, the IP over ATM client can typically transfer data without modification.
As with LANE, IP over ATM is handled by two main components: the IP over ATM server and the IP over ATM client. The IP over ATM server is composed of an ATMARP server and Multicast Address Resolution Service (MARS). The ATMARP server provides services to map network layer IP unicast addresses to ATM addresses, while MARS provides similar services for broadcast and multicast addresses. Both services maintain IP address databases just as LANE services do.
The IP over ATM server can reside on more than one computer, but the ATMARP and MARS databases cannot be distributed. You can have one IP over ATM server handle ATMARP traffic and another handle MARS. If, however, you divide the ATMARP server database between servers, you effectively create two different IP networks. All IP over ATM clients in the same logical IP subnet (LIS) need to be configured to use the same ATMARP server. Traditional routing methods are used to route between logical IP subnets, even if they are on the same physical network.
Windows Server 2003 includes fully integrated ATMARP and MARS servers.

ATMARP Server

The IP over ATM client and ATMARP server go through a process similar to the LANE client and the LECS when a client joins the network and discovers other network members. As with LANE, after an address is found, native ATM takes over and TCP/IP packets are sent across a VC from end station to end station. There is, however, a major difference in how the IP over ATM client discovers the ATMARP server.

ATMARP Server Discovery

Because the ATMARP server typically resides on a server rather than on an ATM switch, it is not possible to use ILMI or a well-known VC to discover its address. In fact, there is no default IP over ATM mechanism for server discovery. To start using IP over ATM, an administrator must find the ATM address of the appropriate ATMARP server and manually configure each IP over ATM client with this address. In a single ATM switch network, this is not much of a problem, but in larger networks it can become a demanding job. To ease configuration in smaller networks, ATM ARP/MARS servers and ATM ARP/MARS clients running Windows Server 2003 use a default address.
After the ATMARP server has been discovered, the IP over ATM client can use this server to resolve IP to ATM address mappings and communicate with other computers. The ATMARP server supports only unicast traffic. To send packets to a broadcast address or multicast list, the IP over ATM client uses MARS.
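Because discovery is manual, configuration amounts to giving every client in a logical IP subnet the same ATMARP server address. The fragment below merely illustrates that constraint; the record layout, subnet, and placeholder ATM address are invented for the example and are not a Windows configuration interface.

    # Hypothetical per-subnet configuration for IP over ATM clients.
    ARP_SERVER = {
        # logical IP subnet   ATM address of its ATMARP/MARS server (placeholder)
        "192.168.10.0/24": "atm:arp-mars-1",
    }

    def client_config(subnet: str) -> dict:
        """Every client in the same LIS must point at the same ATMARP server."""
        return {"subnet": subnet, "atmarp_server": ARP_SERVER[subnet]}

    print(client_config("192.168.10.0/24"))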

MARS

Mimicking the role of the BUS in LAN emulation, MARS handles distribution of broadcast and multicast messages to all the members of the network or to all members of a multicast group. Because of the potential for bottlenecks, MARS provides two modes of operation. When an ATMARP client receives a request to send a packet to a multicast or broadcast IP address, it sends a request to the MARS to resolve this address to a list of clients that are members of that group. In one mode of operation, the MARS returns a list of all ATM addresses to which the group address resolves. The client then creates a Point-to-Multipoint (PMP) ATM connection to all of these addresses and forwards the packet on that connection.
The other mode of operation involves a multicast server (MCS). The MCS registers interest in one or more multicast groups with the MARS. The MCS receives information describing the membership of that group, as well as updates when clients join or leave that group. When a client requests a group address resolution from the MARS, the MARS simply returns the single address of the MCS. The packet is then sent to the MCS, which creates the PMP connection and distributes the packet to all members of the group.
The following figures are examples of the VCs that are created in each of these modes.
IP Multicast over ATM Connections Without MCS
IP Multicast over ATM Connections with MCS
The disadvantage of the first method, in which each client sending packets to the group creates its own PMP connection to all other members of the group, is the large number of virtual channels required. The disadvantage of the second method, which uses the MCS, is that the MCS becomes both a central point of failure and a potential bottleneck because it distributes all multicast packets for all of the groups it serves.
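The difference between the two modes is simply who terminates the point-to-multipoint connection: every sender (a mesh of PMP VCs) or a single MCS. The following sketch contrasts the two resolutions returned by MARS; the Mars class and the placeholder addresses are invented for illustration and do not reflect the RFC 2022 message formats.

    class Mars:
        """Toy model of the two MARS operating modes described above."""

        def __init__(self, mcs_address=None):
            self.groups = {}                # multicast group -> set of member ATM addresses
            self.mcs_address = mcs_address  # set when an MCS serves the groups

        def join(self, group, atm_address):
            self.groups.setdefault(group, set()).add(atm_address)

        def resolve(self, group):
            if self.mcs_address is not None:
                # MCS mode: every sender gets the single MCS address; the MCS
                # owns the PMP connection to the group members.
                return [self.mcs_address]
            # Mesh mode: each sender builds its own PMP VC to all members.
            return sorted(self.groups.get(group, ()))

    mesh = Mars()
    mesh.join("224.0.1.1", "atm:host-a")
    mesh.join("224.0.1.1", "atm:host-b")
    print(mesh.resolve("224.0.1.1"))        # both members: the sender builds the PMP VC

    with_mcs = Mars(mcs_address="atm:mcs")
    with_mcs.join("224.0.1.1", "atm:host-a")
    print(with_mcs.resolve("224.0.1.1"))    # only the MCS: packets are relayed through it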

IP over ATM Operation

IP over ATM faces the same problems, and relies on the same basic tools and fixes as LANE. In particular, it faces the issues of address resolution and broadcasting.
In normal ATM, SVC connections are established by sending a connection request containing the ATM address of the destination endpoint to the ATM switch. Before an IP endpoint can create an SVC in this manner, the endpoint must resolve the IP address of the destination to an ATM address.
Typically, when an Ethernet host needs to resolve an IP address to an Ethernet MAC address, it uses an ARP broadcast query frame. Hardware broadcasting is not done in ATM. The Address Resolution Protocol (ARP) of the ATMARP server resolves IP addresses to ATM addresses.
An ATM endpoint that needs to resolve an IP address sends an ATMARP request to the ATMARP server for its LIS. The ATMARP request contains the sender's ATM address and IP address, along with the IP address to be resolved. If the ATMARP server can resolve the requested IP address, it sends back an ATMARP response containing the corresponding ATM address. If the requested IP address is not found, the ATMARP server sends back a negative ATMARP reply, unlike the procedure in an ELAN, where unresolved traffic is sent to the LANE BUS. This behavior allows an ARP requestor to distinguish between an unknown address and a non-functioning ATMARP server.
The end result is a three-way mapping from the IP address to an ATM address to a VPI/VCI pair. The IP address and ATM address are required to create a VC; the VPI/VCI pair is then used to send the subsequent cells containing data across the VC.
An ATM endpoint creates SVCs to other ATM endpoints within its LIS. For an ATM endpoint to resolve an arbitrary IP address, it must be configured with the ATM address of the ATMARP server in its LIS.
When it starts, an ATM endpoint establishes a VC with the ATMARP server using ATM signaling. As soon as the VC is opened with the server, the server sends the ATM endpoint an ATMARP request. When the ATM endpoint sends the response, the ATMARP server has the ATM and IP addresses of the new ATM endpoint. In this way, the ATMARP server builds its table of ATM-to-IP address mappings.
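Conceptually, the ATMARP server keeps one IP-to-ATM table per LIS that is populated when endpoints connect and consulted when they ask for resolutions, with a negative reply returned for unknown addresses rather than any flooding. The Python sketch below shows that bookkeeping only; the class and method names are invented and the RFC 1577 message formats are not modeled.

    class AtmArpServer:
        """Illustrative IP-to-ATM mapping table for one logical IP subnet."""

        def __init__(self):
            self.table = {}     # IP address -> ATM address

        def on_vc_opened(self, endpoint):
            # When an endpoint opens a VC to the server, the server asks it for
            # its addresses and records the mapping (table building).
            ip, atm = endpoint.report_addresses()
            self.table[ip] = atm

        def resolve(self, ip):
            # Positive reply with the ATM address, or a negative (None) reply so
            # the requester can tell "unknown address" from "server down".
            return self.table.get(ip)

    class Endpoint:
        def __init__(self, ip, atm):
            self.ip, self.atm = ip, atm
        def report_addresses(self):
            return self.ip, self.atm

    server = AtmArpServer()
    server.on_vc_opened(Endpoint("10.1.1.5", "atm:host-5"))
    print(server.resolve("10.1.1.5"))   # ATM address -> caller can now signal an SVC
    print(server.resolve("10.1.1.9"))   # None -> negative ATMARP reply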

IP over ATM Client Initialization

In Windows Server 2003, IP over ATM does not require the use of an Inverse ARP. Instead, the client goes directly to the server to register itself on the network. Because the process is automatic, no human intervention is required to initialize the client. Depending on whether the client uses a static IP address or an address assigned dynamically, the procedure follows one of the two approaches described below.
With a static IP address
The following example details each step in establishing an IP over ATM connection for a single IP over ATM client with a static IP address. First, the client initializes and gets an ATM address from the ATM switch. The client then connects to the ATM ARP/MARS server and joins the broadcast group. The IP-to-ATM address mapping of the client is also added to the ATMARP server database. The client is now ready to contact other hosts and begin data transfer.
With DHCP
Establishing an IP over ATM connection for a single IP over ATM client using Dynamic Host Configuration Protocol (DHCP) is similar but not identical. First the client initializes and gets an ATM address from the ATM switch. Then the client connects to the ATM ARP/MARS server and joins the broadcast group. The client connects to the multicast server (MCS) and sends a DHCP request. The MCS broadcasts the DHCP request to all members of the broadcast group.
When the DHCP server receives the request, it sends a DHCP reply to the MCS. The MCS then broadcasts the reply to the broadcast group. The client receives the DHCP reply and then registers its IP and ATM addresses with the ATM ARP/MARS server. The client is now ready to contact other hosts and begin data transfer.
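The two start-up sequences differ only in the DHCP exchange that is relayed through the MCS. The sketch below condenses the DHCP variant into a few steps; every class and method name is hypothetical, and the objects merely stand in for the switch, ARP/MARS, MCS, and DHCP roles described above.

    class Switch:
        def assign_atm_address(self):
            return "atm:client-1"                      # placeholder ATM address

    class ArpMarsServer:
        def __init__(self):
            self.broadcast_group = []                  # clients that joined the broadcast group
            self.arp_table = {}                        # IP address -> ATM address
        def join(self, client_name):
            self.broadcast_group.append(client_name)
        def register(self, ip, atm):
            self.arp_table[ip] = atm

    class DhcpServer:
        def offer(self, client_name):
            return "10.1.1.42"                         # leased IP address (example)

    class Mcs:
        """Stands in for the multicast server that relays the DHCP exchange."""
        def __init__(self, dhcp_server):
            self.dhcp_server = dhcp_server
        def broadcast_dhcp_request(self, client_name):
            # In reality the request is broadcast to the group and the reply
            # comes back through the MCS; here the exchange is shortcut.
            return self.dhcp_server.offer(client_name)

    def init_with_dhcp(switch, arp_mars, mcs, client_name="client-1"):
        atm = switch.assign_atm_address()              # 1. ATM address from the ATM switch
        arp_mars.join(client_name)                     # 2. join the ARP/MARS broadcast group
        ip = mcs.broadcast_dhcp_request(client_name)   # 3. DHCP request/reply relayed by the MCS
        arp_mars.register(ip, atm)                     # 4. register IP/ATM with the ARP/MARS server
        return ip, atm

    arp_mars = ArpMarsServer()
    print(init_with_dhcp(Switch(), arp_mars, Mcs(DhcpServer())))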

Logical IP Subnets

A LAN-based IP internetwork consists of a series of cabling plants separated by IP routers. The cabling plants connect the hosts of an IP network or subnet together and the routers connect the networks and subnets to each other. An IP host on a particular network can send IP packets directly to a host on the same network by addressing the packet to the media access control (MAC) address of the destination host. An IP host can send IP packets to hosts on other networks by addressing the packet to the MAC address of the router. This paradigm can connect hundreds (or thousands) of hosts on the same network to hundreds (or thousands) more on other networks. While this configuration works well for connectionless, broadcast-based technologies such as Ethernet and Token Ring, it is important to use care when attempting to create the same situation with ATM. The following figure, “Two LISs Running on a Single Switch,” shows two LISs running on a single switch.
Two LISs Running on a Single Switch
Before a single IP packet can be sent, a connection must be created between the source and destination at the ATM layer. On an ATM network using SVCs, the path must be negotiated between switches so that the sender has a valid VPI/VCI to which to send the ATM cells. While possible, it is not very practical to have hundreds (or thousands) of ATM endpoints on the same IP network. A host, such as a network server, cannot maintain an arbitrarily large number of VCs to other hosts in the network. More VCs mean more overhead and resources for both the operating systems and the hardware (ATM adapters and switches) of the IP network. If connecting across an ATM service provider, more VCs also mean more cost.
The logical IP subnet (LIS) is a way of constraining the number of ATM endpoints in an IP network or subnet. The LIS is a group of IP hosts that share a common IP network number; these hosts communicate with each other directly using ATM virtual channels. Different logical IP subnets can be created on the same ATM switch to create a virtual IP internetwork.
Note the example in the previous figure, “Two LISs Running on a Single Switch.” When hosts in LIS 131.107.56.0/24 are ready to communicate among each other, they establish a VC with each other (direct delivery). When hosts in LIS 131.107.56.0/24 are ready to communicate with hosts in LIS 131.107.68.0/24, they establish a VC with the router and send an IP packet to the router (indirect delivery). The router then establishes its own VC with the destination host and forwards the IP packet.
An IP router belongs to multiple LISs and is configured with multiple IP addresses and subnet masks. If the router has a single ATM interface (and therefore a single ATM address), it can either use the single ATM address (with the unique End System Identifier) or use multiple ATM addresses by varying the last byte in the 20-byte ATM address (the SEL field). A router uses multiple addresses primarily to give clients a secondary point of access in the case of a server failure, when a route is no longer functional.
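Whether a host opens a VC directly to the destination or to the router is just a subnet comparison on the destination IP address. The snippet below illustrates that decision with Python's ipaddress module; the subnets come from the figure above, while the resolver callable and the placeholder ATM addresses are invented for the example.

    import ipaddress

    LIS = ipaddress.ip_network("131.107.56.0/24")   # this host's logical IP subnet
    ROUTER_ATM_ADDR = "atm:router"                  # placeholder router ATM address

    def next_hop(dest_ip: str, resolve):
        """Return the ATM address to signal a VC to for a given destination IP.

        resolve() stands in for an ATMARP lookup within this LIS."""
        if ipaddress.ip_address(dest_ip) in LIS:
            return resolve(dest_ip)     # same LIS: direct delivery over a VC to the host
        return ROUTER_ATM_ADDR          # different LIS: indirect delivery via the router

    arp = {"131.107.56.25": "atm:host-25"}.get
    print(next_hop("131.107.56.25", arp))    # direct: the destination host's ATM address
    print(next_hop("131.107.68.10", arp))    # indirect: the router's ATM address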

Services at an ATM Switch

The overall scheme of ATM protocols, hardware, and interconnections comes together as shown in the following figure, “Overview of ATM Architecture from Desktop to WAN.” The hardware includes ATM clients, ATM switches, and the blades that function as edge devices between a device or application that uses ATM and another protocol or environment. The connections include SVCs and permanent virtual connections (PVCs), all handled by the local switch shown at the center of the diagram.
In addition, the following figure, “Overview of ATM Architecture from Desktop to WAN,” shows LANE clients joining the local switch, as well as a client being redirected from Local Switch 2 by the LECS to the BUS and LES services on Local Switch 1. The bottom layer is the client section, showing all the ARP, ATM, and other clients of the local switch (with the exception of a remote access client, which is shown dialing into the switch at the top of the diagram). The various forms of edge devices are shown in the central section, and remote connections are shown at the top.
Overview of ATM Architecture from Desktop to WAN

Additional Resources

The following resources contain additional information that is relevant to this section.
  • ITU-T Q.2610 Broadband Integrated Services Digital Network Signaling on the ITU Web site.
  • ITU-T Q.2931 Broadband Integrated Services Digital Subscriber Signaling System on the ITU Web site.
  • RFC 1577 in the RFC Database for information about Classical IP and ARP over ATM.
  • RFC 2225 in the RFC Database for information about Classical IP and ARP over ATM (the update that obsoletes RFC 1577).
  • RFC 1661 in the RFC Database for information about the Point-to-Point Protocol (PPP).
  • RFC 2022 in the RFC Database for information about support for multicast over UNI 3.0/3.1 based ATM networks (MARS).
