
White Paper

NEXT GENERATION SONET/SDH NETWORKING TECHNOLOGIES
Brian Lavalle, P.Eng., M.B.A.
Senior Manager, Nortel Networks, Photonic Systems Engineering
email: brianlav@nortelnetworks.com

Nortel Networks is an industry leader and innovator focused on transforming how the world communicates and exchanges
information. The company is supplying its service provider and enterprise customers with communications technology and
infrastructure to enable value-added IP data, voice and multimedia services spanning Wireline, Wireless Networks,
Enterprise Networks, and Optical Networks. As a global company, Nortel Networks does business in more than 150
countries. More information about Nortel Networks can be found on the Web at:

www.nortelnetworks.com

For more information, contact your Nortel Networks representative, or


call 1-800-4 NORTEL or 1-800-466-7835 from anywhere in North America.
*Nortel Networks, the Nortel Networks logo, the globemark design, OPTera, and Preside are trademarks of Nortel Networks.
All other trademarks are the property of their owners.
Copyright 2003 Nortel Networks. All rights reserved. Information in this document is subject to change without notice.
Nortel Networks assumes no responsibility for any errors that may appear in this document.

August 2003


Introduction
Given the current economic climate of the embattled telecom industry, service providers are
diligently seeking out new methods to significantly reduce their operational costs, while
simultaneously offering new revenue generating data services. These emerging data services
must fully leverage, and not strand, the huge installed base of SONET/SDH networking
equipment to facilitate a cost-effective migration path. Although numerous technologies to carry
data services exist, the use of certain interrelated technologies is gaining in popularity within the
metro networking market space. In particular, this paper discusses the suite of technologies listed
below as one possible method of meeting the often contradictory goals of introducing new
revenue-generating services at a reduced cost.

GFP - Generic Framing Procedure [1]
VCAT - Virtual Concatenation [2]
LCAS - Link Capacity Adjustment Scheme [3]
RPR - Resilient Packet Ring [4]
MPLS - Multi-Protocol Label Switching [5]
CWDM - Coarse Wavelength Division Multiplexing [6]

These technologies complement one another and together facilitate such data services as GbE

(Gigabit Ethernet), FC (Fibre Channel), FICON (Fiber Connection), ESCON (Enterprise System
Connection), IP (Internet Protocol), and PPP (Point-to-Point Protocol) in the MAN (Metropolitan Area
Network) at a reduced complexity and lower operational cost when compared to existing methods
of transporting these increasingly popular data services over SONET/SDH.

Two Islands of Technology: LAN (Ethernet) versus MAN (SONET/SDH)


The primary driver of this technology suite is the need to optimize the transport of LAN (Local Area
Network) data services over existing MAN infrastructures. Ethernet dominates the LAN while
SONET/SDH dominates the MAN thus creating two islands of disparate technologies. Ethernet
was designed for data traffic (bursty and high bandwidth) while SONET/SDH was designed for
voice traffic (circuit-based and deterministic). Each technology serves its respective applications
extremely well, which explains their near ubiquity. Until recently, Ethernet over SONET/SDH has
been possible only through complex, costly, and/or proprietary mapping schemes. The GFP,
VCAT, LCAS, RPR, MPLS, and CWDM technologies will simplify this interworking application and
reduce its cost.

Generic Framing Procedure (GFP)


GFP was created to replace the different and often proprietary methods currently used to
transport data services over SONET/SDH, simultaneously reducing overall cost and
complexity. The benefits of a standardized demarcation interface and wrapping technique are quite
evident: multi-vendor interoperability, new revenue-generating services, reduced networking
costs, lower-risk new deployments, and multi-vendor sourcing. GFP may be mapped into both
SONET/SDH and OTN (Optical Transport Network) [7], which represent the present and future of
optical networks.
GFP is L1 (Layer 1) and L2 (Layer 2) independent and consequently supports the transport of
many data services such as GbE, FC, FICON, ESCON, IP, and PPP with the possibility of future
support for Infiniband [8] and DVB-ASI (Digital Video Broadcasting Asynchronous Serial
Interface) [9] services. There are two flavors of GFP, framed and transparent, each of which
addresses specific target market applications. Each GFP method implements a very different
data service mapping scheme. Their common generic frame construct is illustrated in Figure 1.
ESCON and FICON are registered trademarks of the IBM Corporation.

Figure 1 GFP Frame Structure [1]
(Core Header: 16-bit Payload Length Indicator plus cHEC (CRC-16); Payload Headers of 4 to 64 bytes,
comprising a 4-byte Type field and a 0- to 60-byte Extension field; Payload Area carrying the data
service payload, 4 to 65,535 bytes for GFP-F; optional Payload FCS (CRC-32).)


GFP-F (Framed Generic Framing Procedure)
GFP-F is PDU (Protocol Data Unit) oriented and is thus targeted towards the mapping of such
client signals as IP, PPP [10], and Ethernet MAC (Media Access Control) frames where one client
frame (e.g. Ethernet MAC frame) is mapped into one GFP-F frame. Figure 2 shows how one
Ethernet MAC frame is encapsulated into one GFP frame.

Figure 2 GFP Mapping of an Ethernet MAC Frame
(PLI: 2 bytes; cHEC: 2 bytes; Payload Header: 4 bytes; Ethernet frame: 0 to 65,531 bytes; FCS: 4 bytes.)


Ethernet frames are inherently variable in length, resulting in GFP-F frames that are also variable
in length. Since an Ethernet frame must be fully buffered to determine its length, a variable
latency results, which makes GFP-F less attractive for latency-sensitive applications such as
FC-based SANs (Storage Area Networks). However, the one-to-one GFP-F mapping mechanism is
much simpler than more complex mapping schemes such as ATM (Asynchronous Transfer
Mode). GFP-F is also less complex to employ than GFP-T, as will be discussed.
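
To make the one-to-one mapping concrete, the following Python sketch assembles a GFP-F frame around a
single client Ethernet frame, following the field layout of Figures 1 and 2. It is an illustrative sketch
rather than a normative G.7041 implementation: the cHEC/tHEC are computed with a plain CRC-16 (generator
x^16 + x^12 + x^5 + 1), extension headers are omitted, the payload Type value is a placeholder rather than
a registered identifier, and the core-header and payload scrambling steps of the standard are left out.

import zlib

def crc16_ccitt(data: bytes) -> int:
    # Bitwise CRC-16 using the generator x^16 + x^12 + x^5 + 1 (0x1021).
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def gfp_f_encapsulate(client_frame: bytes, payload_type: int = 0x0001,
                      with_fcs: bool = True) -> bytes:
    # One client frame in, one GFP-F frame out (the one-to-one mapping above).
    # Payload header: 2-byte Type field plus 2-byte tHEC; extension headers omitted.
    type_field = payload_type.to_bytes(2, "big")
    payload_header = type_field + crc16_ccitt(type_field).to_bytes(2, "big")

    payload_area = payload_header + client_frame
    if with_fcs:
        # Optional payload FCS: CRC-32 over the client payload.
        payload_area += zlib.crc32(client_frame).to_bytes(4, "big")

    # Core header: 16-bit PLI (length of the payload area) protected by the cHEC.
    pli = len(payload_area).to_bytes(2, "big")
    core_header = pli + crc16_ccitt(pli).to_bytes(2, "big")
    return core_header + payload_area

eth_frame = bytes(64)                 # stand-in for a minimum-size Ethernet frame
gfp_frame = gfp_f_encapsulate(eth_frame)
print(len(gfp_frame))                 # 76 = 64 + 4 (core hdr) + 4 (payload hdr) + 4 (FCS)
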
GFP-T (Transparent Generic Framing Procedure)
Unlike the frame-mapping nature of GFP-F, GFP-T is block-code oriented and was developed for
mapping such client signals as GbE, FC, FICON, and ESCON, all of which implement 8B/10B
encoding techniques whereby blocks of 8 bits are encoded into 10-bit codes. The 8B/10B
encoding scheme maps 256 (2^8) possible data values into 1024 (2^10) codes, which are also
referred to as characters. Character assignment is specifically performed to ensure that a
balanced line sequence of ones and zeros exists to create an adequate number of transitions that
will facilitate CDR (Clock and Data Recovery). There are also twelve 10-bit characters (codes)
that are reserved as control codes that may be used by the data source (ingress) node to signal
control information to the data sink (egress) node.
To properly transport data signals employing 8B/10B encoding, the transport vehicle
(SONET/SDH or OTN) must be able to transport both data and control code characters. Since
8B/10B encoding incurs a 25% overhead penalty, it is thus inappropriate (costly) for MAN
transmission. Consequently, the 8B/10B codes are first decoded from 10 bits to the original 8 bits
and then encoded once again into a more efficient 64B/65B fixed-length super block. These
64B/65B super blocks are better suited for MAN transport, because they map more efficiently into

the available virtual container bandwidth within SONET/SDH and OTN. Since at least eight
8B/10B characters must be received, decoded, and then encoded once again into a 64B/65B
super block, this processing sequence becomes the minimal latency of GFP-T, which is far less
than that of GFP-F. Framed GFP waits for an entire Ethernet frame to be received, which is
variable in size from 64 to 1518 bytes, resulting in increased latency that is unsuitable for
FC-based networks. As its name implies, GFP-T transparently transports 8B/10B control characters
and data characters in an agnostic manner, which is another notable advantage of GFP-T over
GFP-F for 8B/10B encoded data services.
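
The efficiency argument can be made concrete with simple arithmetic. The sketch below uses only the code
sizes quoted above (8 client bits carried in 10 line bits, 64 carried in 65) and deliberately ignores the
additional superblock CRC and GFP framing overheads, so the figures are indicative rather than exact.

overhead_8b10b = (10 / 8 - 1) * 100      # 8 client bits carried in 10 line bits
overhead_64b65b = (65 / 64 - 1) * 100    # 64 client bits carried in 65 line bits

print(f"8B/10B  overhead: {overhead_8b10b:.1f}%")    # 25.0%
print(f"64B/65B overhead: {overhead_64b65b:.1f}%")   # about 1.6%

# Example: a GbE client delivers 1.0 Gb/s of data on a 1.25 Gbaud 8B/10B line.
# After transcoding to 64B/65B, only about 1.0 Gb/s * 65/64 = 1.016 Gb/s of
# transport capacity is needed, before GFP and SONET/SDH framing are added.
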
Alternative Data Services Mapping Techniques
There are competing alternatives to GFP mapping for carrying data services over SONET/SDH
based networks such as PoS (Packet over SONET/SDH) [11] and X.86 [12] over SDH. However,
when compared to GFP, these alternative mapping schemes lack the flexibility and simplicity of
GFP and are thus not as desirable a choice for wide-scale deployment. Nevertheless, they are
functional, albeit less efficient, alternatives available today.
PoS (Packet over SONET/SDH) Data Services Mapping
PoS is currently the most common method of mapping Ethernet/IP data services over
SONET/SDH networks via router interfaces that support HDLC (High Level Data Link Control)
[13], which are widely available. However, unlike PoS, GFP does not truncate the L2 MAC
information (e.g. destination/source addresses) of an Ethernet frame thus enabling transparent
connections of L2 edge devices such as Ethernet switches. PoS terminates this L2 MAC
information, which prohibits numerous desirable L2 features such as multicasting, traffic
prioritization (802.1p), Ethernet protection switching, and VLAN (Virtual LAN) filtering (802.1q).
PoS will strip this valuable L2 MAC information, and then subsequently remap it into PPP over
HDLC which creates, in effect, a dumb point-to-point pipe with some significant disadvantages
that are discussed below.
HDLC relies on specific bit patterns (characters) that are used for frame delineation and control
information. Thus, these specific bit patterns, if found within the data payload, would inadvertently
mimic reserved characters and subsequently interfere with the proper operation of HDLC. To
overcome this issue, HDLC injects additional control escape characters (0x7d) adjacent to the
payload bit patterns that mimic reserved characters. While addressing one issue, however, the
addition of control characters introduces another: increased bandwidth is required to carry the
original client information. This is commonly (and rather unflatteringly) referred to as bandwidth
inflation, and it is non-deterministic. In the worst case, if the client signal is composed solely of
data characters that inadvertently mimic reserved control characters, HDLC as used in PoS would
inject escape characters on a one-to-one basis, thereby doubling the bandwidth required to
carry the original client signal, an undesirable (yet quite unlikely) scenario.
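
A small sketch illustrates the inflation mechanism. It implements only the basic octet-stuffing rule of
HDLC-like framing (the flag and escape octets are escaped; the optional control-character map of RFC 1662
[13] is ignored), so it is a simplification of real PoS behaviour rather than a faithful implementation.

FLAG, ESC = 0x7E, 0x7D   # HDLC flag and control-escape octets (RFC 1662)

def hdlc_stuff(payload: bytes) -> bytes:
    # Each reserved octet in the payload is replaced by ESC followed by the
    # original octet XOR 0x20, adding one extra octet per occurrence.
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

typical = bytes(range(256)) * 4      # reserved values appear rarely
worst = bytes([FLAG]) * 1024         # pathological payload of nothing but flag octets

print(len(hdlc_stuff(typical)) / len(typical))   # about 1.008 (under 1% inflation)
print(len(hdlc_stuff(worst)) / len(worst))       # 2.0, the worst-case doubling noted above
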
To avoid bandwidth inflation that is inherent to HDLC and thus PoS, GFP uses its cHEC (core
Header Error Control) bytes for frame delineation instead. This use of cHEC bytes is actually an
adaptation of the proven ATM cell delineation algorithm and is illustrated in Figure 3. The state
machine illustrates how GFP starts off in the Hunt state and scans bit by bit until a calculated
cHEC matches a subsequent sequence of received bits and results in the delineation of a GFP
frame, whereby the Pre-Sync state is reached. Once one GFP frame is successfully delineated,
the next GFP frame starting point is known. If a second GFP frame has a valid cHEC and
subsequent sequence of bits, the Sync state is entered. The Sync state allows for cHEC error
correction but if the received error is not correctable, the Hunt state is reentered and the process
repeats itself. As no special characters are required for GFP to delineate received frames, the
adverse bandwidth inflation issue that is apparent in PoS/HDLC is thus eliminated.


Figure 3 GFP Frame Delineation State Machine [1]
(States: Hunt, Pre-Sync, and Sync. A cHEC match moves the receiver from Hunt to Pre-Sync; a second
cHEC match moves it from Pre-Sync to Sync, while no second match returns it to Hunt. In the Sync state,
correctable cHEC errors are tolerated; a non-correctable cHEC error returns the receiver to Hunt.)
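
The state machine of Figure 3 can be paraphrased in a few lines of Python. The sketch below is
byte-aligned and leaves out the bit-by-bit hunt and the single-bit error correction performed in the Sync
state, so it only conveys the structure of the Hunt/Pre-Sync/Sync logic, not the exact standardized
behaviour.

def crc16_ccitt(data: bytes) -> int:
    # Same CRC-16 (x^16 + x^12 + x^5 + 1) as used for the cHEC in the GFP-F sketch.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def chec_ok(core_header: bytes) -> bool:
    # The cHEC doubles as the frame delimiter: a header is plausible when the
    # CRC over the 2-byte PLI matches the received 2-byte cHEC.
    return crc16_ccitt(core_header[:2]) == int.from_bytes(core_header[2:4], "big")

def delineate(stream: bytes):
    # Yields (offset, frame_length) for each frame found once Sync is reached.
    state, pos = "HUNT", 0
    while pos + 4 <= len(stream):
        header = stream[pos:pos + 4]
        if chec_ok(header):
            if state == "HUNT":
                state = "PRE_SYNC"               # first cHEC match
            elif state == "PRE_SYNC":
                state = "SYNC"                   # second consecutive cHEC match
            frame_len = 4 + int.from_bytes(header[:2], "big")
            if state == "SYNC":
                yield pos, frame_len
            pos += frame_len                     # next core header expected here
        else:
            state, pos = "HUNT", pos + 1         # no match: fall back and keep hunting
    # The real algorithm hunts bit by bit and corrects single-bit cHEC errors
    # while in Sync; both refinements are omitted here for brevity.
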


X.86 Ethernet over SONET/SDH Data Services Mapping
Another alternative mapping technique is X.86 which provides an Ethernet-only link access for
SDH (not SONET) based networks, and also uses an HDLC-like coding scheme. X.86 supports
concatenated and non-concatenated SDH containers yet lacks the flexibility of GFP which
supports the mapping of other protocols such as FC, FICON, and ESCON. Although X.86 is
somewhat established in certain geographic regions (Asia), its lack of simplicity and flexibility
when compared to GFP makes it a less attractive alternative. GFP also holds other advantages
over X.86 mapping such as GFP header error correction/detection capabilities, as well as channel
identifiers that enable port/service multiplexing on the same given path. Conversely, X.86 is
similar to PoS in that it supports a less flexible, and hence less desirable, one-to-one mapping of
Ethernet ports on a given path.
X.86 is rather similar to PoS encoding and thus carries the same undesirable implementation
issues as well. It does, however, include a rate adaptation feature that adjusts the Ethernet MAC
rate to the different available SDH VC rates. Even so, its point-to-point nature, coupled with the inherent
disadvantages of HDLC, makes it undesirable when compared to GFP. Its acceptance should
diminish over time as the advantages of GFP mapping are realized and products that implement
the GFP mapping scheme become increasingly commercialized.
Primary Benefits of GFP (Framed & Transparent Modes of Operation)
There are numerous benefits associated with GFP when it is used to map different types of data
services into the existing SONET/SDH-based networks as well as the future OTN-based MAN
infrastructures. The primary benefits of GFP are summarized below.

Standardization - this will accelerate global acceptance for vendor interoperability and lower
component costs.
Versatility - GFP enables the transport of many popular data service protocols at L1 and L2.
o L1 - GbE, FC, FICON, ESCON, Infiniband (future), and DVB-ASI (future).
o L2 - Ethernet (e.g. services such as IP/PPP/MPLS encapsulation) and HDLC.
Quality of Service (QoS) - strict (GFP-T) or loose (GFP-F) QoS is possible depending upon
the application.
Scalability - GFP currently supports data services from 10 Mb/s (Ethernet) to 10 Gb/s
(OC-192/STM-64).
Acceptance - GFP is endorsed by the IEEE 802.17 RPR (Resilient Packet Ring) Working
Group and the IETF.
Simplicity - GFP is a simpler mapping technique than ATM and HDLC, resulting in less costly
network designs.


Header Error Control (HEC) - it is based upon a scheme similar to the proven ATM algorithm
and supports fixed (Transparent) and variable (Framed) client frame sizes for added
flexibility. The scheme is not process intensive and is thus conducive to hardware integration
for high-speed operation at a reduced cost.

Although GFP has many inherent benefits, it is not sufficient for enabling the end-to-end transport
of data services that are increasingly migrating into the MAN. Other technologies are required to
size the needed SONET/SDH payload carrying capacity, and to dynamically and hitlessly resize
bandwidth. These requirements are addressed by Virtual Concatenation and the Link Capacity
Adjustment Scheme, respectively. Another layer of optimization is addressed by Resilient Packet
Ring with Coarse Wavelength Division Multiplexing determining the overall bandwidth capacity.
These technologies and their interrelation are discussed in subsequent sections of this paper.

Virtual Concatenation (VCAT)


Contiguous concatenation is an integral part of SONET/SDH whereby multiple SPEs
(Synchronous Payload Envelopes) are transported and switched across a SONET/SDH network
as a single connection. The first SONET/SDH container payload pointer is set to normal mode
with the subsequent payload pointers set to concatenation mode, which effectively links the
containers thus creating a single concatenated payload. This rigid scheme dictates that all nodes
in the connection path must support contiguous concatenation. VCAT enhances existing
SONET/SDH networks by decoupling the path establishment from the network resource itself.
The VCAT adaptation function also compensates for the incurred delay variations as VCAT
Group (VCG) members traverse diverse SONET/SDH paths. Allowing each VCG member to
traverse a different physical path is a significant benefit.
Unlike contiguous concatenation, constituent SPEs comprising a VCAT connection need not be
time-slot contiguous and can be constructed using multiples of low-order VT-1.5 members,
suitable for network edge applications, or high-order STS-1 VCG members, suitable for network
core applications. This increased flexibility allows carriers to group multiple VCG members (high
or low order) based upon the actual data service to be transported resulting in vastly improved
bandwidth utilization. For instance, to map a 1 Gb/s Ethernet service into SONET, it was
necessary to use the nearest available bandwidth increment, namely an OC-48c (2.488 Gb/s),
resulting in a meager 42% bandwidth utilization as shown in Table 1. Using VCAT, this same
1 Gb/s Ethernet service can instead be mapped into 21 STS-1 non-contiguous time slots
(denoted STS-1-21v) for an improved utilization of 95%! In essence, stranded bandwidth, utterly
blasphemous in today's economic environment, is vastly reduced.
Data Service Rate            SONET/SDH Contiguous Concatenation    SONET/SDH Virtual Concatenation
100 Mb/s Ethernet            STS-3c (67%)                          STS-1-2v (100%)
1,000 Mb/s Ethernet          STS-48c (42%)                         STS-1-21v (95%)
ESCON (160 Mb/s)             STS-12c (27%)                         STS-1-4v (82%)
Fibre Channel (850 Mb/s)     STS-48c (35%)                         STS-3c-6v (95%)

Table 1 Comparison of Contiguous vs. Virtual Concatenation (VCAT) Data Services Mapping
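
The utilization percentages in Table 1 can be approximately reproduced from the payload capacities of the
containers involved. The payload figures below are the commonly quoted approximate SPE capacities, so the
results differ from the table by a percentage point here and there due to rounding and mapping overheads.

# Approximate SONET payload capacities in Mb/s (commonly quoted SPE values).
STS1 = 49.536        # one STS-1 SPE payload
STS3C = 149.76       # one STS-3c SPE payload
STS48C = 2396.16     # one STS-48c SPE payload

def utilization(client_mbps: float, container_mbps: float) -> str:
    return f"{client_mbps / container_mbps:.0%}"

print("GbE   in STS-48c    :", utilization(1000, STS48C))      # ~42%
print("GbE   in STS-1-21v  :", utilization(1000, 21 * STS1))   # ~96% (Table 1 quotes 95%)
print("FC    in STS-3c-6v  :", utilization(850, 6 * STS3C))    # ~95%
print("ESCON in STS-1-4v   :", utilization(160, 4 * STS1))     # ~81-82%
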
Only the end points (network ingress and egress points) of a VCAT connection must be aware of the
relationship between the constituent VCG members. Intermediary nodes need not be aware of
the relationship between VCG members, which greatly facilitates the cost-effective introduction of
VCAT into existing SONET/SDH networks. Only the ingress/egress network nodes need be
VCAT-enabled. Since VCAT members do not have to follow the same multiplex section,
propagation delays between VCG members can result, especially over long distances. However,
VCAT is able to identify these inevitable propagation delays and perform the necessary
realignment process.

All new technologies have associated shortcomings and VCAT is no exception. For instance, if
the path of one of the constituent VCG members fails, the entire VCG fails. VCAT also does not
solve the point-to-point nature of SONET/SDH, which is not conducive to L2 protocols such as
Ethernet. Although one can provision multiple SONET/SDH connections in the form of a mesh
network, this is costly, time-consuming, and results in significant operational and capital
expenditures. Although VCAT improves the rigid nature of a SONET/SDH circuit, it still results in
a non-optimal solution since carriers must provision circuits based on the peak rates of
customers' bursty data services regardless of whether the connections are contiguous or
non-contiguous. The average utilization rate may actually be much lower, resulting in stranded
bandwidth, a rather undesirable suboptimal solution.
LCAS complements VCAT to overcome the loss of an entire VCG when one of the group member
paths fails by enabling dynamic, hitless addition/deletion of individual group members. Since
VCAT is essentially transparent to L2 data services, RPR can be used to overcome the VCAT
deficiencies due to its point-to-point nature. RPR enables a logical mesh L2 network topology
within a physically constructed SONET/SDH ring architecture, thus enabling the leveraging of
already deployed SONET/SDH infrastructure.

Link Capacity Adjustment Scheme (LCAS)


LCAS complements VCAT to enable the dynamic resizing of VCAT connections to gracefully
handle VCG member failures (e.g. due to a fiber cut). LCAS handles the synchronization between
the source (initiating node) and the sink (terminating node) such that the size of a VCAT circuit is
dynamically and hitlessly increased or decreased as required by the application. LCAS can also
exploit protection bandwidth that is normally unused and reserved on a 1+1 redundant basis.
Reclaiming this available, but unused, protection bandwidth can lead to increased revenue
generation since lower-priority bandwidth can now be sold whenever a protection switch has not
occurred.
Fortunately, LCAS is functionally located within the VCAT source and sink adaptation functions
only and is not required throughout the intermediary nodes of the network path. The dynamic
resizing of VCG connections is based upon the specific needs of the given application. Since this
dynamic resizing of the available connection bandwidth is performed in a hitless manner, it is also
transparent to the end user. For instance, if one of the VCG members is lost due to a network
failure (e.g. fiber cut scenario), LCAS will first dynamically downsize the bandwidth of the VCAT
connection by the particular capacity of the lost VCG member, and then, if pre-provisioned, will
automatically resize the VCAT connection by adding another VCG member as required. If the
addition of a new VCG member is not desired, or not possible, the overall VCAT connection
bandwidth is temporarily squelched until the network failure is corrected and the lost VCG
member is added once again to the VCG actively and hitlessly.
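
The failure scenario above can be summarized in a toy model. The class and method names below are
invented for illustration and do not correspond to any standard API; the sketch captures only the
externally visible effect of LCAS (capacity shrinks when a member fails and is restored when a spare
member is added), not the H4/K4 control-packet exchange that makes those steps hitless.

STS1_MBPS = 49.5   # approximate payload per high-order (STS-1) VCG member, Mb/s

class VirtualConcatenationGroup:
    # Tracks only the member count; real LCAS also sequences the control packets
    # and per-member states that make each resize hitless to the client signal.
    def __init__(self, members: int):
        self.members = members

    @property
    def capacity_mbps(self) -> float:
        return self.members * STS1_MBPS

    def member_failed(self, replace_with_spare: bool = True) -> None:
        self.members -= 1          # failed member removed: capacity shrinks, traffic survives
        print(f"member lost, capacity now {self.capacity_mbps:.0f} Mb/s")
        if replace_with_spare:
            self.members += 1      # a pre-provisioned spare is added back, again hitlessly
            print(f"spare added, capacity restored to {self.capacity_mbps:.0f} Mb/s")

vcg = VirtualConcatenationGroup(members=21)    # e.g. GbE carried over STS-1-21v
vcg.member_failed(replace_with_spare=True)
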
LCAS dynamically resizes and synchronizes a given group of VCAT members between the
source and sink network elements by using specific control packets. For higher-order VCAT, the
SONET Path H4 byte is used and, for lower-order VCAT, the VT1.5 K4 byte is used, both of
which implement a request and acknowledge mechanism. The particular exchange of these
control packets is performed in such a manner that the source and sink NEs actually resize a
given VCAT connection. This highly desirable feature allows service providers to dynamically
adjust the capacity required by a particular application, which is a service in itself.
LCAS is performed unidirectionally, so service providers can offer asymmetric bandwidth, as in
video distribution applications. Since bandwidth usage is dynamic, service providers can offer
flexible scheduled bandwidth connectivity. For instance, data archiving bandwidth could be sold
to one customer from midnight to morning while the same bandwidth could then be resold to
another customer during normal working hours from the morning to the evening.
LCAS assumes that capacity initiation and resizing (addition/deletion) of each individual VCG
member is managed by a higher layer entity such as the EMS (Element Management System)
and/or the NMS (Network Management System). This means that a currently deployed
EMS/NMS would have to be upgraded to support LCAS if it is to gain increased commercial
acceptance. The inherent advantages of LCAS and VCAT justify upgrading already

deployed OSS (Operational Support Systems) products, and initiatives to incorporate these
features are underway. LCAS, however, does not actually participate in the provisioning process
of such new VCAT circuits. The NMS/EMS-initiated creation and resizing of VCAT connections
require automated signaling to the VCAT/LCAS-enabled network nodes through a generalized
signaling protocol such as MPLS (Multi-Protocol Label Switching), discussed later in this paper.
Other standardized signaling and control protocols may also be used.

Resilient Packet Ring (RPR)


RPR is an access control protocol that is bandwidth and topology (ring) aware. It allows for the
implementation of shared media rings that exhibit carrier grade characteristics found in
SONET/SDH networks by effectively converting a series of point-to-point connections between
nodes into shared-media ring architectures. RPR provides efficient statistical multiplexing of
bursty data services and spatial reuse to maximize the utilization of available bandwidth, both of
which are amenable to Ethernet. There are also provisions within RPR to enable multiple Classes
of Service (CoS), which is supported using a weighted-fairness scheme in a fully dynamic
manner. The scheme is based upon the instantaneous utilization of all the users on a given RPR
thus yielding even further optimization.
RPR is actually independent of the underlying physical transport so it may be carried over
Ethernet or SONET/SDH networks. Since the GFP adaptation layer for RPR is constructed over
the SONET/SDH path layer, not over the SONET/SDH physical layer, it enables transparent
interworking with VCAT. RPR enhances the point-to-point nature of VCAT/LCAS by creating a
logical mesh topology that improves bandwidth utilization. Although improving on the legacy
methods for transporting data services over SONET/SDH infrastructures, the combination of
VCAT and LCAS still creates connections that result in stranded bandwidth, particularly when there
is no data transfer between the source and sink network nodes, which is quite common in many
data applications. RPR further optimizes bandwidth usage by enabling RPR nodes to detect and
reallocate idle capacity for reuse on the shared ring.
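
A toy model illustrates the spatial-reuse idea. The sketch assumes a unidirectional six-node ring with
equal-capacity spans and ignores the RPR fairness algorithm entirely; the node numbers and flow rates are
invented for the example.

NODES = 6                                          # ring nodes 0..5, unidirectional

def spans(src: int, dst: int) -> set:
    # Spans (i -> i+1) traversed travelling around the ring from src to dst.
    out, node = set(), src
    while node != dst:
        out.add(node)
        node = (node + 1) % NODES
    return out

flows = [(0, 2, 400), (2, 5, 600), (5, 0, 300)]    # (src, dst, Mb/s), illustrative values
load = {span: 0 for span in range(NODES)}
for src, dst, rate in flows:
    for span in spans(src, dst):
        load[span] += rate

print(load)   # each span carries only the flows that actually cross it
# With point-to-point circuits, each flow would instead tie up a dedicated,
# peak-rate-sized path regardless of whether the other spans are idle.
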
Commercially, service providers may exploit the bursty nature of data services such as Ethernet
to over-subscribe available bandwidth on an RPR and realize increased revenue generation, which
is, obviously, highly desired today. Service providers would not have to continue with legacy
methods of permanently and inefficiently allocating costly point-to-point bandwidth for such data
applications. RPR also allows carriers to fully leverage their existing SONET/SDH networks by
combining data and voice services within the same given SONET/SDH infrastructure.
SONET/SDH is already optimized for voice traffic transport with RPR helping to optimize bursty
data services transport simultaneously on a given SONET/SDH ring. Some of the key RPR
benefits are summarized below.
Network Resilience - Network protection similar to SONET/SDH with switching times below
50 ms.
Differentiated Services - Supports latency-sensitive traffic via high-priority packets with
minimal latency.
Spatial Reuse - Contrary to traditional SONET/SDH, bandwidth is only consumed between
source and destination nodes. Data packets/frames are removed at destination nodes to
reduce unnecessary bandwidth usage.
Auto-Discovery - No need to manually provision nodes that are added to or removed from an RPR,
since embedded auto-discovery schemes ensure that nodes are added/removed
automatically.
Scalability - Supports ring topologies that are comprised of more than 100 nodes.
Robustness - Nodes on the ring do not drop data packets, resulting in essentially lossless
transmission.
The bandwidth allocated to an RPR may be controlled via LCAS mechanisms to dynamically
assign the required bandwidth via added/removed VCG members. LCAS is the mechanism that
enables dynamic VCG resizing but it does not possess the intelligence to decide upon the
required bandwidth. This intelligence is provided by an optical control plane that uses

standardized signaling/control protocols to initiate active network reconfigurations based on
changing service demands. One such protocol is MPLS. Since RPR is actually an L2 technology, it
can also transport MPLS signaling and control information, which would be oblivious to the
physical nature of the RPR, with each node actually appearing to be connected in a fully meshed
configuration. RPR can use GFP or HDLC to encapsulate RPR packets over a SONET/SDH
based network infrastructure, although GFP is the preferred method since it does not suffer from
the detrimental HDLC limitations (e.g. bandwidth inflation) already discussed.
However, there are certain caveats related to RPR worth mentioning. The most important issue is
that RPR has not yet been fully standardized, meaning that the solutions deployed today may not be
compliant with the standard that is eventually finalized. There is also the additional overhead
of RPR, which is over and above the Ethernet overhead being encapsulated. Fortunately, this
added overhead pales in comparison to the achieved RPR benefits.

Multi-Protocol Label Switching (MPLS)


RPR complements SONET/SDH by creating a shared media ring comprising multiple nodes
coupled with efficient statistical multiplexing and aggregating bursty data services such as
Ethernet. However, RPR is only a MAC for the transport layer and does not enable fast service
provisioning. Instead, a common control plane for the service layer is required for fast and
dynamic provisioning of required data services. MPLS is a protocol that enables this control plane
and can be used to automatically provision end-to-end services using standards-based signaling
protocols such as RSVP-TE (Resource Reservation Protocol with Traffic Engineering extensions,
modified to handle MPLS traffic-engineering requirements). Using specific labels, MPLS enforces
security by strictly segregating users and services which share a common RPR physical medium.
Finally, efficient traffic engineering is also achieved to ensure that service providers are
consistently optimizing the use of their prime business asset: bandwidth.

Coarse Wavelength Division Multiplexing (CWDM)


The final technology in this suite of complementary metropolitan networking technologies is
CWDM. Although the above technologies together initiate, resize, transport, and optimize
bandwidth usage, the aggregate bandwidth available is ultimately dictated by the physical layer.
CWDM is a lower-cost offshoot of the high performance DWDM (Dense WDM) technologies that
have served the long-haul networking industry so well for years. DWDM targeted long-haul optical
transmission, and thus inherited the high cost of the highly precise optical components,
namely the transmitter lasers and filters explicitly designed for long reach and tight channel
spacing.
The main difference between CWDM and DWDM is the much wider ITU-defined CWDM
wavelength spacing of 20nm (when compared to 0.8nm in DWDM) that is conducive to the use of
less expensive devices, such as uncooled DFB (Distributed Feedback) lasers and less complex
thin-film filters. These lower-cost CWDM networks come at the expense of a shorter reach and
lower overall capacity (18 wavelengths over G.652c fiber), yet are adequate for numerous metro
applications. As lower-cost CWDM technologies permeate the metropolitan marketplace, they
will alleviate bandwidth bottlenecks still present in many locations. This bandwidth liberation will
reduce high-speed service costs, and spawn new applications to consume this new capacity in a
self-fulfilling prophecy. The core network capacity long ago exceeded metropolitan network
capacity, resulting in the current stalemate in which cost-effective high-speed access ramps to
the available core capacity are lacking. This impasse can be efficiently addressed by using
CWDM technologies to build cost-effective access ramps to this available long haul capacity.
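
The 18-wavelength figure follows directly from the grid arithmetic: with 20 nm channel spacing across the
roughly 1270 nm to 1610 nm range of G.694.2 [6], 18 nominal centre wavelengths fit. The short sketch below
makes the count explicit; the exact grid endpoints depend on the edition of the Recommendation, so treat
the values as approximate.

# Approximate CWDM grid per the 2002 edition of ITU-T G.694.2 [6]:
# nominal centre wavelengths every 20 nm from roughly 1270 nm to 1610 nm.
grid_nm = list(range(1270, 1611, 20))

print(len(grid_nm))                                       # 18 wavelengths, as cited above
print(grid_nm[0], grid_nm[-1], grid_nm[1] - grid_nm[0])   # 1270 1610 20
# Compare with typical DWDM spacing of 0.8 nm (100 GHz): the 20 nm CWDM grid
# relaxes wavelength tolerances enough to permit uncooled DFB lasers and
# simpler thin-film filters, at the cost of fewer channels per fiber.
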

Network Management System (NMS)


The technologies discussed above are interdependent to the extent that they must be managed
by an NMS to ensure that they can be deployed and proactively managed in real time. Without a
coherent NMS strategy, their adoption is likely to be stymied or even fail. The operations groups
of service providers must be able to remotely manage their entire network such that the inner
workings of these emerging technologies are hidden from the carrier and the end user alike.
Currently entrenched NMS software does not yet include integrated support for all of the

aforementioned technologies, but initiatives are underway to accelerate their adoption due to the
obvious benefits brought forth.

Emerging Applications
There are numerous applications emerging in the metropolitan optical networking industry that
involve high-speed data services (e.g., Ethernet and Fibre Channel) requiring long reach at a cost
that is economically viable. One such application is the SAN that essentially geographically
separates the application network from the data storage network. This allows for the extension
and subsequent centralization of the data storage facilities to allow numerous physically
separated applications (e.g. transaction processing) to share the same centralized data storage
facility in real-time. The current SAN technology of choice is leaning towards Fibre Channel, with
native optical reaches of 500 m over multimode fiber and ~10 km over single-mode optical fiber,
which, although admirable, still greatly limit the addressable application space. New technologies
are required to further extend the reach of Fibre Channel, a need that can be effectively
addressed using a judicious mix of GFP, VCAT, LCAS, and SONET/SDH technologies.
One specific application involving a geographic separation of computing/storage facilities is a
direct result of the recent tragic events that occurred in September 2001. It brought to the
forefront the issue of security and disaster recovery plans to ensure business continuity in the
event of a catastrophic event that could cripple and even bankrupt a business and/or industry. As
a result, the major American financial regulatory agencies (Federal Reserve and Securities &
Exchange Commission) discussed how to ensure business continuity in the event of a disaster,
and produced guidelines detailed in the Draft Interagency White Paper on Sound Practices to
Strengthen the Resilience of the U.S. Financial System [14]. One of the key suggestions is to
physically separate the duplicate (standby) critical business facilities and operations by
200-300 miles, a separation that today's data networking technologies cannot address in an
economical manner. The current lack of cost-effective data services mapping into long-haul core
networks is one area that has many in the industry feeling uncomfortable with the proposed
guidelines. However, the combination of the aforementioned technologies, in particular, mapping
Fibre Channel and Ethernet services onto the reach capabilities of SONET/SDH, can help to
alleviate these significant concerns.

Going Forward
Standardization of any technology quickly leads to commercially available chip sets, increased
market adoption rates, and subsequent lower costs, which in turn spur further adoption.
Ethernet is just one such example of a technology that, due to its standardized nature and rapid
market acceptance, has become extremely cost-effective and ubiquitous. The
technologies discussed in this paper are for the most part standardized, or nearly standardized,
and will lead to the next generation of SONET/SDH networks and services. Additional data
services will continue to migrate deeper into existing SONET/SDH MANs (and WANs), thus
creating new revenue-generating data services while simultaneously leveraging this huge
installed network base. The demarcation points between LANs and MANs are increasingly
blurring, creating the urgent business need to marry the two technologies of choice: Ethernet and
SONET/SDH. The judicious introduction of the aforementioned technologies is one available
option to ensure that these ever-blurring demarcation points are ultimately removed.


Acknowledgement
The author would like to thank Thomas Barnwell for his contribution to this paper.

References
[1]  ITU-T G.7041/Y.1303, Generic Framing Procedure, 12/2001
[2]  ITU-T G.707/Y.1322, Network Node Interface for the Synchronous Digital Hierarchy, 10/2000
[3]  ITU-T G.7042/Y.1305, Link Capacity Adjustment Scheme for Virtual Concatenated Signals, 11/2001
[4]  IEEE Draft P802.17/D2.1, Resilient Packet Ring Access Method & Physical Layer Specifications, 02/2003
[5]  IETF Multi-Protocol Label Switching (MPLS) Standards Suite, draft-ietf-mpls-xxx-xxx-xx.txt
[6]  ITU-T G.694.2, Spectral Grid for WDM Applications: CWDM Wavelength Grid, 06/2002
[7]  ITU-T G.709, Interface for the Optical Transport Network (OTN), 02/2001
[8]  Infiniband Architecture Specification, Volumes 1 & 2, Release 1.1, November 6, 2002, Final
[9]  Interfaces for CATV/SMATV Headends & Similar Professional Equipment, DVB A010, rev 1.0, May 1997
[10] IETF RFC-1661, The Point-to-Point Protocol (PPP), July 1994
[11] IETF RFC-2615, PPP over SONET/SDH (PoS), June 1999
[12] ITU-T X.86/Y.1323, Ethernet over LAPS, February 2001
[13] IETF RFC-1662, PPP in HDLC-Like Framing, July 1994
[14] Draft Interagency White Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial
     System, Docket No. R-1128, Board of Governors of the Federal Reserve System, Washington D.C. 20551
