
Understanding Synchronization Protection Basics in Transmission Networks

Let us consider a ring network. Normal synchronization distributes timing around the ring. In this case, nodes B-F are line timed, while node A is timed to an external reference. When a sync source fails, a new timing source should be selected within a reasonable amount of time. If synchronization is not restored, the BER will increase over time.

SPS Timing Loops (SPS = Synchronization Protection Switching): During a ring failure, simple reference switching would result in timing loops as shown below.

Operations Normal Flow :


Let us read about the operation. The following diagram shows the normal flow of operation, with synchronization messaging during normal operation. S1 = Stratum 1 Traceable, DU = Don't Use

HO = Holdover
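The reference-selection behaviour sketched above can be illustrated in a few lines of Python. This is only a toy model: the quality-level names and their ordering are illustrative (loosely modelled on SSM quality levels), not taken from any specific standard.

```python
# Illustrative quality-level ranking (lower = better). "DNU" (Don't Use)
# is never selectable; with no usable source the node enters holdover (HO).
QL_RANK = {"PRC": 0, "SSU-A": 1, "SSU-B": 2, "SEC": 3, "DNU": 99}

def select_reference(candidates):
    """Return the best usable timing reference, or None (enter holdover)."""
    usable = [c for c in candidates if QL_RANK.get(c["ql"], 99) < QL_RANK["DNU"]]
    if not usable:
        return None  # no usable source: node goes into holdover
    return min(usable, key=lambda c: QL_RANK[c["ql"]])
```

For example, a node with one line input marked DNU and another traceable to Stratum 1/PRC would select the PRC-traceable side; with every input at DNU it would enter holdover.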

Operations Fiber Cut:


In the ring, if the fiber is cut between B &amp; C, node C goes into short-term holdover as shown below.

Then, Node F switches to timing from Node A as shown below.

Finally, the ring is reconfigured and all nodes are again synchronized to BITS as shown below.

Please let me know if any clarification is required :)

Basic SDH Network Topology & Advantages of SDH


Let us read about the Basic SDH network topology. Detailed topology discussion will be done later.
Basic SDH Network Topology

SDH networks are usually deployed in protected rings. This has the advantage of protecting the data by providing an alternate route for it to travel over in the event of equipment or network failure. Each side of the ring (known as A and B, or sometimes East and West) consists of an individual transmit and receive fibre. These fibres take diverse physical paths to the distant-end equipment to minimise the risk of both routes failing at the same time. The SDH equipment has the ability to detect the problem and will automatically switch to the alternate route.

SDH multiplexers transmit on both sides of the ring simultaneously, but to speed up switching times they only receive on one side at any time. This means that only the receiving end needs to switch, thus reducing the impact of a fault on the customers' data.
Features and Advantages of SDH

In the previous post we saw the limitations of PDH. Now let us see the advantages of SDH. SDH permits the mixing of the existing European and North American PDH bit rates, and is compatible with the majority of existing PDH bit rates. All SDH equipment is based on the use of a single master reference clock source, hence SDH is synchronous. SDH provides for extraction/insertion of a lower order bit rate from a higher order aggregate stream, without the need to de-multiplex in stages. SDH allows for integrated management using centralised network control. SDH provides a standard optical interface, allowing the inter-working of different manufacturers' equipment. Finally, network reliability increases due to the reduction of necessary equipment/jumpering.

Scrambling SDH signal, why scrambler is used in SDH?


In an SDH/SONET system, receivers recover the clock from the incoming signal. An insufficient number of 0-1 transitions degrades clock recovery. To avoid this problem and guarantee sufficient transitions, SONET/SDH employ a scrambler. All data except the first row of the section overhead is scrambled. The scrambler is a 7-bit frame-synchronous scrambler with generator polynomial x^7 + x^6 + 1, initialized to all ones at the start of each frame.

This type of short scrambler is sufficient for voice traffic, but not for data payloads which may contain long runs of zeros. So an additional payload scrambler is used when carrying data. Modern standards use the 43-bit scrambler x^43 + 1. It runs continuously over ATM payload bytes (suspended for the 5 bytes of cell header, the "cell tax") and runs continuously over HDLC payloads.

Scrambler :
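As a rough illustration of the section scrambler described above, here is a minimal Python sketch of a frame-synchronous x^7 + x^6 + 1 scrambler, reset to all ones at the start of each frame. The bit-ordering conventions here are our own; consult G.707 for the normative definition.

```python
def sdh_scrambler_stream(data: bytes) -> bytes:
    """Scramble one frame's worth of bytes with an additive (frame-synchronous)
    x^7 + x^6 + 1 LFSR keystream, register seeded to all ones."""
    state = 0x7F  # 7-bit register, all ones at frame start
    out = bytearray()
    for byte in data:
        result = 0
        for bit in range(7, -1, -1):                    # MSB first
            ks = (state >> 6) & 1                       # keystream = x^7 stage output
            fb = ((state >> 6) ^ (state >> 5)) & 1      # feedback taps x^7 and x^6
            state = ((state << 1) | fb) & 0x7F
            result = (result << 1) | (((byte >> bit) & 1) ^ ks)
        out.append(result)
    return bytes(out)
```

Because the keystream is independent of the data, applying the same function twice restores the original bytes, which is exactly how the receiver descrambles.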

Types of switching in SDH Rings.


SPAN SWITCHING & RING SWITCHING

Span switching :
This type of switching uses only the protection fibres on the span where the fault is detected.

Ring switching :
In this type of switching, traffic is switched away from the failed span to the adjacent node via the protection fibres on the long path.
REVERTIVE & NON-REVERTIVE SWITCHING

We can implement two modes of protection switching in SDH networks: revertive or non-revertive.

In revertive switching, once the fault condition has cleared, the network enters a "wait to restore" state. Once the configured WTR time has elapsed, traffic is switched back to the main path. This is useful when the main path is much shorter than the protection path. In non-revertive mode, traffic does not switch back to the main path automatically even after the fault condition clears.

How Protection Switching is implemented in SDH?


For protection switching, mainly the K1, K2 bytes and B2 bytes in the Multiplex Section Overhead of the SDH frame are used. Normally the K bytes carried in the protection fibre are used to carry the APS protocol. The B2 bytes contain a bit interleaved parity check over the previously transmitted MSOH plus the VC-n payload.

K1/K2 Byte structure:

The K1/K2 byte structure is as shown in the above diagram. A maximum of 16 nodes can be supported in an SDH ring with protection, because only 4 bits are used for each of the source and destination IDs. In 4F-rings, the APS protocol is only active on the protection fibres. The APS protocol is optimised for AU-level operation.

Each node in the ring should be configured with a ring map, which contains information about the channels that the node handles. Each node in the ring is also given a unique ID number in the range 0 to 15. At any point in time, each node knows the current status of the ring (normal or protected). When the protection switches are not active, each node sends K-bytes in each direction indicating "no bridge request".

At the time of a failure in the ring between two adjacent nodes, two paths may exist for communication: the short path directly connects the two nodes, while the longer path connects them via all the other nodes on the ring. When a node receives a non-idle K-byte message containing a destination ID of another node on the ring, that node changes to pass-through mode. Let us read about the types of ring protection in the next post.
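A small sketch of how the K1/K2 fields described above could be split apart. The bit layout follows the common G.841 description (request code and destination ID in K1; source ID, path indicator and status in K2), but the field names are our own and the code is illustrative, not a protocol implementation.

```python
def parse_k1_k2(k1: int, k2: int) -> dict:
    """Split ring-APS K1/K2 bytes into their fields (simplified sketch)."""
    return {
        "request_code": (k1 >> 4) & 0x0F,   # bridge request (0b0000 = no request)
        "dest_node":    k1 & 0x0F,          # 4-bit destination ID -> max 16 nodes
        "src_node":     (k2 >> 4) & 0x0F,   # 4-bit source ID
        "long_path":    bool((k2 >> 3) & 1),  # 0 = short path, 1 = long path
        "status":       k2 & 0x07,          # e.g. 0b111 = MS-AIS, 0b110 = MS-RDI
    }
```

The 4-bit ID fields are what limit a protected ring to 16 nodes, as noted above.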

BLSR, Bi-directional Line Switched Ring


There are two types of BLSR deployed in various networks: i. 2-fiber BLSR, ii. 4-fiber BLSR.

2-fiber BLSR:
This system is also known as a two-fibre multiplex-section shared protection ring. Here, service traffic flows bi-directionally, and both fibres carry service and protection channels.

When the protection channels are not required, they can be used to carry extra traffic, but at the time of protection switching this extra traffic is dropped. Only ring switching is supported by this architecture. At the time of ring switching, those channels carrying service traffic are switched to the channels that carry the protection traffic in the opposite direction.

4-Fiber BLSR:
This system is also known as a four-fibre multiplex-section shared protection ring. It is the most robust ring architecture, and also the most expensive to implement because of the extra optical hardware required.

In this system, bi-directional pairs of fibres are used to connect each span in the ring. One bidirectional pair carries the working channels, while the other pair carries the protection channels. 4F-BLSR supports both span switching and ring switching (but not both at the same time). Multiple span switches can coexist on the ring, because only the protection channels along one span are used for each span switch.

What triggers a protection?


Protection switching is triggered in the following cases. 1. Signal Fail, detected as Loss of Signal (LOS) at the receiver input. This may be due to faulty hardware in the upstream network equipment or a broken fibre. 2. Signal Degrade, which is monitored via the B2 bytes. In the next post let us read about how protection switching is implemented in SDH (K1 &amp; K2 bytes).

Ring networks - G.841 - Interview notes for UPSR


Further to linear protection, let us read about ring protection. ITU-T recommendation G.841 covers several types of ring network architectures. Ring protection switching can be implemented at the path level or at the line level. Rings can be uni-directional or bi-directional, and they may use 2 fibres or 4 fibres.

UPSR : Uni-directional Path Switched Ring :


In a uni-directional ring, service traffic flows in one direction (clockwise in the diagram below). Protection traffic flows in the opposite direction (counter-clockwise).

In this example, traffic from C to B travels in the clockwise direction via A, while traffic from B to C travels directly in the clockwise direction. This configuration is also known as a multiplex section dedicated protection ring, because one fibre carries service traffic while the other is dedicated to protecting the main path. All traffic is added in both directions, and the decision as to which copy to use is made at the drop point, with no signalling required. It is normally non-revertive, so there are effectively two diverse paths.

The main advantages of this configuration are: single-ended switching, which is always faster than dual-ended switching; simplicity of implementation, with no protocol required; a high chance of restoring traffic under multiple failure conditions; and the lowest implementation cost. However, this architecture is inefficient for core networks: there is no spatial reuse, and a node needs to continuously monitor every tributary to be dropped. In the next post let us read about BLSR, the bi-directional line switched ring.

Traffic Protection on SDH Optical Networks, Interview notes for SDH protection
Service survivability has become more important than ever, because telecommunication is increasingly used for vital transactions such as electronic funds transfer, order processing, inventory control and many other business activities (e.g. e-mail, internet access). Users are willing to pay more for guaranteed service. In an SDH transmission system, Automatic Protection Switching (APS) algorithms and performance/alarm monitoring are built in. This allows the construction of linear point-to-point networks and synchronous ring topology networks which are self-healing in the event of failure. To minimize the disruption of traffic, protection switching must be completed within the specified time limit (sub-50 ms) recommended by ITU-T G.783 (linear networks) and ITU-T G.841 (ring networks).

Upon detection of a failure (dLOS, dLOF, high BER), the network must reroute traffic (protection switching) from the working channel to the protection channel. The Network Element that detects the failure (the tail-end NE) initiates the protection switching. The head-end NE must either change its forwarding or send duplicate traffic. Protection switching may be revertive (automatically reverting to the working channel) or non-revertive.

Key ITU-T recommendations :


ITU-T recommendations define methods of protecting service traffic in SDH networks. Two important recommendations are: 1. Recommendation G.783, which covers linear point-to-point networks. 2. Recommendation G.841, which covers various configurations of multiplex section rings.

Linear ( point to point) protection :


In a linear network, protection is achieved through an extra protection fibre. It can protect the network from fibre or NE card failure. The different variants of linear protection are 1+1, 1:1 and 1:N. How does it work? The head-end and tail-end NEs have bridges (muxes) and maintain a bidirectional signalling channel. Signalling is contained in the K1 and K2 bytes of the protection channel: K1 carries tail-end status and requests, K2 carries head-end status.

Linear 1+1 protection :


This is the simplest form of protection. It can be at the OC-n level (different physical fibres), at the STM/VC level (called SubNetwork Connection Protection) or end-to-end path (called trail protection). The head-end bridge always sends data on both channels, and the tail-end chooses which channel to use based on BER, dLOS, etc. No signalling is needed. For non-revertive cases, there is no distinction between working and protection channels. Bandwidth utilization is 50%.
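The tail-end selection logic in 1+1 protection can be sketched as follows. This is a simplified illustration: the field names and the BER threshold are our own assumptions, not values from G.783.

```python
def tail_end_select(side_a: dict, side_b: dict, ber_threshold: float = 1e-3) -> str:
    """1+1 tail-end selection sketch: the head-end transmits on both channels,
    so only the receiver decides, with no signalling. Each side is a dict with
    'los' (loss-of-signal flag) and 'ber' (measured bit error ratio)."""
    def healthy(side: dict) -> bool:
        return not side["los"] and side["ber"] < ber_threshold
    if healthy(side_a):
        return "A"
    if healthy(side_b):
        return "B"
    return "A"  # both failed: hold a default selection (arbitrary here)
```

Since only the receiving end acts, switchover needs no coordination with the far end, which is why 1+1 is the fastest and simplest variant.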

Linear 1:1 protection :


In this case, the head-end bridge usually sends data on the working channel. When the tail-end detects a failure it signals (using K1) to the head-end, which then starts sending data over the protection channel. When not in use, the protection channel can be used for (discounted) extra traffic (pre-emptible unprotected traffic).

Linear 1:N protection:


This is very much similar to 1:1 protection, with a small difference: in order to save bandwidth, we allocate 1 protection channel for every N working channels. Here, N is limited to 14.

Let us read about ring networks in next post.

Tributary Unit (TU) Frames, Interview notes on Tributary Unit frames in SDH
Different sizes of Tributary Unit frames are used in SDH; we saw the basic SDH multiplexing structure in an earlier post. The TU sizes provided in SDH are TU-11, TU-12, TU-2 and TU-3. 1. TU-11: A TU-11 frame consists of 27 bytes, structured as 3 columns of 9 bytes. These 27 bytes provide a transport capacity of 1.728 Mbps at a frame rate of 8000 Hz, which accommodates the mapping of a 1.544 Mbps DS1 signal. 84 TU-11s can be multiplexed into an STM-1 frame. The structure is as shown below.

2. TU-12: A TU-12 frame consists of 36 bytes, structured as 4 columns by 9 bytes. These 36 bytes provide a transport capacity of 2.304 Mbps at a frame rate of 8000 Hz, which accommodates the mapping of a 2.048 Mbps E1 signal. 63 TU-12s can be multiplexed into an STM-1 frame. The structure is as shown below.

3. TU-2: A TU-2 frame consists of 108 bytes, structured as 12 columns by 9 bytes. These 108 bytes provide a transport capacity of 6.912 Mbps at a frame rate of 8000 Hz, which accommodates the mapping of a DS2 signal. 21 TU-2s can be multiplexed into an STM-1 frame. The structure is as shown below.

4. TU-3: A TU-3 frame consists of 774 bytes, structured as 86 columns by 9 bytes. These 774 bytes provide a transport capacity of 49.536 Mbps at a frame rate of 8000 Hz, which accommodates the mapping of a 34 Mbps E3 signal or a North American DS3 signal. 3 TU-3s can be multiplexed into an STM-1 frame. The structure is as shown below.
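The four TU capacities above can be cross-checked directly from the column counts given in the text (9 rows, 8 bits per byte, 8000 frames per second):

```python
# (columns, rows) per TU frame, taken from the descriptions above.
TU_SIZES = {"TU-11": (3, 9), "TU-12": (4, 9), "TU-2": (12, 9), "TU-3": (86, 9)}

def tu_rate_bps(name: str) -> int:
    """Transport capacity in bit/s: bytes/frame * bits/byte * frames/s."""
    cols, rows = TU_SIZES[name]
    return cols * rows * 8 * 8000
```

For example, `tu_rate_bps("TU-12")` gives 2,304,000 bit/s, matching the 2.304 Mbps figure quoted for TU-12.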

Maintenance signals in SDH & abbreviations


Let us read about maintenance signals in SDH.

LOS - Drop of incoming optical power level causing a BER of 10^-3 or worse
OOF - A1, A2 incorrect for more than 625 us
LOF - If OOF persists for 3 ms
B1 Error - Mismatch of the recovered and computed BIP-8
MS-AIS - K2 (bits 6,7,8) = 111 for 3 or more frames
B2 Error - Mismatch of the recovered and computed BIP-24
MS-RDI - If MS-AIS or excessive errors are detected, K2 (bits 6,7,8) = 110
MS-REI - M1: binary coded count of incorrect bit-interleaved blocks
AU-AIS - All "1"s in the entire AU including the AU pointer
AU-LOP - 8 to 10 NDF enables, or 8 to 10 invalid pointers
HP-UNEQ - C2 = "0" for 5 or more frames
HP-TIM - J1: trace identifier mismatch
HP-SLM - C2: signal label mismatch
HP-LOM - H4 values (2 to 10 times) unequal to the multiframe sequence
B3 Error - Mismatch of the recovered and computed BIP-8
HP-RDI - G1 (bit 5) = 1, if an invalid signal is received in VC-4/VC-3
HP-REI - G1 (bits 1,2,3,4) = binary coded B3 errors
TU-AIS - All "1"s in the entire TU including the TU pointer
TU-LOP - 8 to 10 NDF enables, or 8 to 10 invalid pointers
LP-UNEQ - VC-3: C2 = all "0" for >=5 frames; VC-12: V5 (bits 5,6,7) = 000 for >=5 frames
LP-TIM - VC-3: J1 mismatch; VC-12: J2 mismatch
LP-SLM - VC-3: C2 mismatch; VC-12: V5 (bits 5,6,7) mismatch
BIP-2 Err - Mismatch of the recovered and computed BIP-2 (V5)
LP-RDI - V5 (bit 8) = 1, if TU-2 path AIS or signal failure received
LP-REI - V5 (bit 3) = 1, if >=1 errors were detected by BIP-2
LP-RFI - V5 (bit 4) = 1, if a failure is declared

Abbreviations:
AU - Administrative unit
HP - High (higher-order) path
LOF - Loss of frame
LOM - Loss of multiframe
LOP - Loss of pointer
LOS - Loss of signal
LP - Low (lower-order) path
OOF - Out of frame
REI - Remote error indication (FEBE)
RDI - Remote defect indication (FERF)
RFI - Remote failure indication
SLM - Signal label mismatch
TIM - Trace identifier mismatch
TU - Tributary unit
UNEQ - Unequipped
VC - Virtual container
C - Container

Detailed study of multiplexing process in SDH - Interview notes - Part III


Let us continue from the previous post, where we studied the multiplexing of VC-12 into VC-4. In this post, you will read about the VC-4 Path Overhead and the mapping of a VC-4 into an STM-1 frame.

VC-4 Path Overhead:


The VC-4 Path Overhead forms the start of the VC-4 payload area and consists of one whole column of nine bytes as shown below. The POH contains control and status messages (similar to the V5 byte) at the higher order.

J1 - Higher Order Path Trace. This byte provides a fixed length, user configurable string which can be used to verify the connectivity of 140 Mbit/s connections.
B3 - Bit Interleaved Parity Check (BIP-8). This byte provides an error monitoring function for the VC-4 payload.
G1 - Higher Order Path Status. This byte is used to transmit back to the distant end the results of the BIP-8 check in the B3 byte.
K3 - Automatic Protection Switching (APS). K3 provides automatic protection switching control for VC-4 payloads, similar to the K4 bits in the 2 Mbit/s overheads.

Mapping of a VC-4 into an STM-1 frame.

An AU pointer is added to the VC-4 to form an AU-4 or Administrative Unit -4. The AU pointers are in a fixed position within the STM-1 frame and are used to show the location of the first byte of the VC-4 POH. The AU-4 is then mapped directly into an AUG or Administrative Unit Group, which then has the Section Overheads or SOH, added to it. These section overheads provide STM-1 framing, section performance monitoring and other maintenance functions pertaining to the section path.

The VC-4 payload, plus AU pointers and Section Overheads, together form the complete STM-1 transport frame. If you have any question, please write to me.

Detailed study of multiplexing process in SDH - Interview notes - Part II


Further to the previous post, let us read about mapping VC-12 into TU-12, TU-12 into TUG-2, TUG-2 into TUG-3, and TUG-3 into VC-4.

Mapping of a VC-12 into a TU-12 signal.


In order to detect the start of the 2 Mbit/s signal, and thereby the start of the customer's data, the V5 byte must be seen by the distant end. This is achieved by adding four overhead bytes to the multiframe, which together form a calculated byte count to the start of V5. This is called a pointer value and is known as the TU Pointer. There are four pointer bytes, called V1, V2, V3 and V4, which are used to locate V5.

Multiplexing of TU-12 into a TUG-2:

Each TU-12 consists of 36 bytes of information, and these 36 bytes fill exactly 4 columns of the STM-1 frame. 3 separate TU-12s are directly mapped together to form a TUG-2.

The 3 TU-12's will fit exactly into 12 columns of the STM-1 frame .

Mapping of a TUG-2 into a TUG-3 signal:


The mapping of TUG-2's into TUG-3's uses fixed column interleaving and is shown below.

Mapping of a TUG-3 into a VC-4 signal: The three TUG-3's are column interleaved to form the VC-4 payload. At this point two columns of fixed stuffing are added and the 'VC-4 Path Overhead' is added to the start.

These are the steps followed in multiplexing VC-12 into VC-4. In next post, we will read about VC-4 Path over head

If you have any questions, you can add them in comments section. I will provide you the answer.

Detailed study of multiplexing process in SDH - Interview notes


Let us study the overview of the process followed by a 2 Mbit/s PDH input signal until it becomes part of an STM-1 frame. You can compare the details of each individual stage to SDH multiplexing structure provided in previous post.

Mapping of a 2 Mbit/s PDH signal into a C-12:


The 2 Mbit/s PDH input signal is mapped into a Container-12 (C-12). The input frame consists of 32 bytes of information and this fits directly into the C-12 as shown.

Mapping of a C-12 into a VC-12:


When mapping a C-12 into a VC-12, we need to add four bytes of overhead control information, but we can add only one byte per frame of customer data. So this process takes place over 4 consecutive frames, as described below. Frame number one has two bytes of fixed stuffing added to it: one byte at the start and one byte at the end. One byte of overhead control information is then added to the start. This overhead byte is called the V5 byte and is known as the VC-12 Path OverHead (POH).

Two more important features of the V5 byte are: BIP-2, the Bit Interleaved Parity Check-2. This looks at the data in the C-12: it counts all of the binary ones in the odd bit positions (i.e. bits 1, 3, 5, 7) and all of the binary ones in the even bit positions (i.e. bits 2, 4, 6, 8). The BIP-2 is then recalculated at the distant end; if the count differs, some bit corruption has occurred. FEBE, the Far End Bit Error indication. This bit is set according to the result of the BIP-2 check: if errors are received at the distant end, there needs to be a mechanism for informing the sender of the problem. Frame number two has two bytes of fixed stuffing added to it: one byte at the start and one byte at the end. It then has one byte of overhead control information added to the start. This control byte in frame 2 is the Lower Order Path Trace, or J2, byte. J2 is used to check the continuity of a 2 Mbit/s path.
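The BIP-2 calculation described above can be sketched in a few lines. This is a simplified illustration of the parity arithmetic only; the exact span of bytes the real check covers is defined by the standard.

```python
def bip2(payload: bytes) -> int:
    """Bit Interleaved Parity-2: bit 1 covers the odd bit positions
    (1,3,5,7 in 1-based counting), bit 2 covers the even positions."""
    odd = even = 0
    for b in payload:
        for pos in range(8):            # pos 0 -> bit 1, pos 1 -> bit 2, ...
            bit = (b >> (7 - pos)) & 1
            if pos % 2 == 0:
                odd ^= bit              # parity over odd positions
            else:
                even ^= bit             # parity over even positions
    return (odd << 1) | even
```

The receiver recomputes this over the received data and compares it with the transmitted BIP-2; any mismatch indicates bit corruption, which is then reported back via FEBE.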

Frame number three has two bytes of fixed stuffing added to it: one byte at the start and one byte at the end. One byte of overhead control information is added to the start. This control byte, N2, in frame 3 is called the Network Operator or Tandem Control byte. N2 is used to transmit performance-monitoring information where the circuit spans different vendors' networks.

Frame number four has one byte of fixed stuffing added to the end and one byte of variable stuffing added to the start, plus one byte of overhead control information at the start. The control byte in frame 4 is called K4, and it is used for 2 Mbit/s Automatic Protection Switching (APS). APS automatically switches a single 2 Mbit/s circuit to its alternate path if a fault condition occurs.

In the next post we will learn about : Mapping of a VC-12 into a TU-12 signal.

SDH Concatenation, Interview notes on Contiguous concatenation


There are two types of concatenation in SDH. They are Contiguous concatenation and Virtual concatenation. In this article, let us learn about contiguous concatenation.

Contiguous Concatenation :
The SDH frame can be thought of as a transport lorry. The data to be transported is placed in the VC-4 'container', which is then hitched to the SOH 'cab unit' that 'drives' the data to its destination. The maximum carrying capacity of the vehicle is determined by the size of the 'container'. Therefore, although the SDH signal is 155 Mbit/s in size, the largest single circuit that can be transmitted at any one time by the customer is limited to the size of the VC-4, i.e. 140 Mbit/s.

When using higher rates of SDH (STM-4, STM-16 etc), multiple 'containers' and 'cabs' are added one after another, to form a bigger vehicle. The customer is still limited to a single circuit size of 140 Mbit/s however, because each individual 'container' is still the same size (140 Mbit/s). They can however transmit multiple 140 Mbit/s circuits simultaneously. Standard STM-4 structure is given below

The limitation of 140 Mbit/s per individual circuit is not an efficient way of managing bandwidth. To overcome this limitation, a method of combining 'containers' together has been developed, called 'concatenation'. The STM-4 concatenated structure (VC-4-4c) is as shown below.

Concatenated paths are commonly defined as VC-4-xc circuits (where x is the size of the concatenation), as shown below: STM-4 concatenation (written as VC-4-4c) provides a single circuit with a bit rate of approximately 600 Mbit/s (actually 599.04 Mbit/s). STM-16 concatenation (written as VC-4-16c) provides a single circuit with a bit rate of approximately 2.4 Gbit/s (actually 2.39616 Gbit/s).
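The VC-4-xc rates quoted above follow directly from the single-VC-4 rate of 149.76 Mbit/s (i.e. 599.04 / 4), as this quick sketch shows:

```python
VC4_RATE_MBPS = 149.76  # one VC-4, derived from the 599.04 Mbit/s VC-4-4c figure

def vc4_xc_rate_mbps(x: int) -> float:
    """Bit rate of a contiguously concatenated VC-4-xc circuit, in Mbit/s."""
    return x * VC4_RATE_MBPS
```

So `vc4_xc_rate_mbps(4)` gives 599.04 Mbit/s and `vc4_xc_rate_mbps(16)` gives 2396.16 Mbit/s (2.39616 Gbit/s).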

STM-1 Frame Structure & Section Over head


STM-1 Frame Structure

The STM-1 frame contains 2430 bytes of information, and each byte carries 8 data bits (i.e. a 64 kbit/s channel). The duration of an STM-1 transport frame is 125 us, so the number of frames per second is 1 s / 125 us = 8000. The rate of the STM-1 frame is therefore calculated as follows: 8 bits x 2430 bytes x 8000 frames per second = 155,520,000 bit/s, or 155 Mbit/s.

The STM-1 frame is divided into 9 segments, stacked on top of each other as shown in the diagram below. The bytes start at the top left with byte number one and are read from left to right, top to bottom. They are arranged as 270 columns across and 9 rows down.
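The frame-rate arithmetic above can be double-checked from the 9-row by 270-column geometry:

```python
rows, cols = 9, 270                  # STM-1 frame geometry (2430 bytes)
bits_per_byte, frames_per_second = 8, 8000   # 125 us frame -> 8000 frames/s

stm1_rate = rows * cols * bits_per_byte * frames_per_second  # in bit/s
```

This evaluates to 155,520,000 bit/s, the familiar 155.52 Mbit/s STM-1 line rate.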

STM-1 Section Overheads

The STM-1 Section Overhead (SOH) consists of nine columns by nine rows as shown below, and forms the start of the STM-1 frame. The SOH contains control and status messages at the optical fibre level. The first three rows are the RSOH (Regenerator Section Overhead), the fourth row holds the AU-4 pointers, and the fifth to ninth rows are the MSOH (Multiplexer Section Overhead).

A1 &amp; A2 - STM-1 Frame Alignment. These 6 bytes are used for STM-1 frame alignment and are the first bytes transmitted. Frame alignment takes place over three STM-1 frames.
J0 - STM-1 Section Path Trace. This byte provides a fixed length, user configurable string which can be used to verify network topology connections.
B1 - Byte Interleaved Parity Check 8 (BIP-8). This byte provides an error monitoring function for the entire STM-1 frame after encoding.
B2 - Byte Interleaved Parity Check 24 (BIP-24). These 3 bytes provide an error monitoring function for the STM-1 frame before encoding. A comparison between the BIP-8 and BIP-24 checks reveals whether there were any encoding errors.

D1 to D12 - Data Communications Channel (DCC). These bytes provide a data channel for the use of network management systems.
K1 - Automatic Protection Switching (APS). This byte is used to perform automatic protection switching of the optical fibre.

X - Reserved. These bytes are reserved for national use.


All Unmarked bytes are reserved for future international standardisation.

SDH Principles and Interview questions on SDH Multiplexing structure


Overview

The SDH standard defines a number of 'Containers', each corresponding to an existing PDH input rate. Information from the incoming PDH signal is placed into the relevant container. Each container then has some control information, known as the 'Path Overhead' (POH), and stuffing bits added to it. The path overhead bytes allow the system operator to achieve end-to-end monitoring of areas such as error indication, alarm indication and performance monitoring data. The container and the path overhead together form a 'Virtual Container' (VC).

Due to clock phase differences, the start of the customer's PDH data may not coincide with the start of the SDH frame. Identification of the start of the PDH data is achieved by adding a 'Pointer'. The VC and its relevant pointer together form a 'Tributary Unit' (TU). Tributary units are then multiplexed together in stages (Tributary Unit Group 2 (TUG-2) - Tributary Unit Group 3 (TUG-3) - Virtual Container 4 (VC-4)) to form an Administrative Unit 4 (AU-4). Additional stuffing, pointers and overheads are added during this procedure. This AU-4 in effect contains 63 x 2 Mbit/s channels and all the control information that is required.

Finally, Section Overheads (SOH) are added to the AU-4. These SOHs contain the control bytes for the STM-1 section, comprising framing, section performance monitoring, maintenance and operational control information. An AU-4 plus its SOHs together form an STM-1 transport frame.

Graphical SDH Multiplexing Structure

Full SDH Multiplexing Structure :

The diagram below shows the full SDH multiplexing structure. PDH signals enter on the right into the relevant container and progress across to the left through the various processes to form the STM frame.

2 Mbit/s Multiplexing Structure

Let us see the multiplexing stages of a 2 Mbit/s circuit. The relative bit rate and process is shown for each stage.

If you like this post, please share the same with your friends also


TDM : Positive Justification in PDH


Let us read the concept of positive justification.

The diagram above illustrates the basic principle of positive justification. There are 4 asynchronous inputs, all brought to the same frequency (i.e. 36 bps) by adding an appropriate number of redundant bits to each tributary. These 4 synchronized 36 bps inputs are then multiplexed to get an output rate of 144 bps.

The reverse of this process takes place at the demultiplexer: the redundant bits are removed from each tributary signal to recover the original signal. These redundant bits are called stuffing or justification bits. The higher order stream has a frame structure and framing bits so that the interleaved tributary bits can be recovered. If you like this post, please take 5 seconds to share this on the web.

2 Mbps FRAME - FORMAT


Let us study the standard 2 Mbps frame format, G.704 / G.732.

Each 2 Mbps frame contains 256 bits (32 timeslots) at a repetition rate of 8 kHz. The first timeslot, TS0, is reserved for framing, error-checking and alarm signals. The remaining 31 channels can be used for data traffic, and individual timeslots/channels can be used for 64 kbps PCM. Sometimes TS16 is reserved for signalling, for example ISDN primary rate D-channel signalling (Q.931). The start of the 32-timeslot frame is signified by the frame alignment word 0011011 in TS0 of alternate frames. In the other frames, bit 2 is set to one and bit 3 contains the A-bit for sending an alarm to the far end. If three frame alignment words out of four are received in error, the receiving terminal declares loss of frame alignment and initiates a resync process. If you like this post, please take 5 seconds to share this on Facebook &amp; Google+.
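The frame arithmetic above gives the familiar E1 rates:

```python
timeslots, bits_per_ts, frame_rate = 32, 8, 8000   # 256-bit frame at 8 kHz

e1_rate = timeslots * bits_per_ts * frame_rate     # whole-frame bit rate
channel_rate = bits_per_ts * frame_rate            # one timeslot (e.g. a PCM channel)
```

This gives 2,048,000 bit/s for the 2 Mbps frame and 64,000 bit/s per timeslot, matching the 64 kbps PCM channel mentioned above.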

Features and Advantages of SDH

In the previous post we saw the limitations of PDH. Now let us see the advantages of SDH:

SDH permits the mixing of the existing European and North American PDH bit rates.

All SDH equipment is based on the use of a single master reference clock source, hence SDH is synchronous.

SDH is compatible with the majority of existing PDH bit rates.

SDH provides for extraction/insertion of a lower-order bit rate from a higher-order aggregate stream, without the need to de-multiplex in stages.

SDH allows for integrated management using centralised network control.

SDH provides a standard optical interface, thus allowing the inter-working of different manufacturers' equipment.

Network reliability increases due to the reduction of necessary equipment and jumpering.

Origin of SDH
As seen in the previous post about PDH, PDH is a workable but flawed system. At the beginning it was the best available technology and was a giant leap forward in telecom transmission. As a result of growth in the field of silicon chips and integrated microprocessors, customer demand soon created the need to introduce a new and better system, which was expected to solve the existing limitations of PDH. As a next step, Bellcore introduced the SYNTRAN (Synchronous Transmission) system. However, this was only a development system, and it was soon replaced with SONET (Synchronous Optical Network). Initially SONET could only carry the ANSI (American National Standards Institute) bit rates, i.e. 1.5, 6 and 45 Mbit/s. As the aim of the project was to provide easier international interconnection, SONET was modified to carry the European standard bit rates of 2, 8, 34 & 140 Mbit/s.

In 1989 the ITU-T (International Telecommunication Union Telecommunication Standardization Sector) published recommendations covering the standards for SDH. These were adopted in North America by ANSI (SONET is now thought of as a subset of SDH), making SDH a truly global standard.

PDH Basics | PDH Interview questions


Before 1970, the world's telephony systems were based on single-line voice frequency, and all connections were over twisted copper pair. During the early 1970s digital transmission systems began to appear, using Pulse Code Modulation (PCM). PCM enables analogue waveforms such as speech to be converted into a binary format suitable for transmission over long distances via digital systems. PCM works by sampling the analogue signal at regular intervals, assigning a binary value to each sample and then transmitting these values as a binary stream. This process is still in use today and forms the basis of virtually all the transmission systems we currently use.

The next step was multiplexing several PCM channels together over the same copper pair. A standard was adopted in Europe where thirty-two 64 kbit/s channels were multiplexed together to produce a structure with a transmission rate of 2.048 Mbit/s (usually referred to as 2 Mbit/s). As demand for telephony services grew, the 2 Mbit/s signal was no longer sufficient to cope with the demands of the growing network, so a further level of multiplexing was devised: four 2 Mbit/s signals were combined to form an 8 Mbit/s signal (actually 8.448 Mbit/s). Later, additional levels were added to the multiplexing structure to include rates of 34 Mbit/s (34.368) and 140 Mbit/s (139.264). These transmission speeds are called Plesiochronous Digital Hierarchy (PDH) rates. PLESIO means similar and CHRONOUS means timing: PDH equipment operates with similar, but not exactly the same, timing.
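Notice that each PDH level runs slightly faster than four times the level below it; the gap is the framing and justification overhead. A quick check of the European rates quoted above:

```python
# European PDH hierarchy rates (Mbit/s) quoted above, and the extra capacity
# each level carries for framing and justification bits.
pdh = {1: 2.048, 2: 8.448, 3: 34.368, 4: 139.264}

for level in (2, 3, 4):
    payload = 4 * pdh[level - 1]            # four tributaries per level
    overhead = pdh[level] - payload
    print(level, round(payload, 3), round(overhead, 3))
# Level 2: 4 x 2.048 = 8.192 Mbit/s of payload, so 0.256 Mbit/s of the
# 8.448 Mbit/s aggregate is overhead.
```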

Comparison of hierarchical PDH rates :


A different hierarchical structure was adopted in North America. A comparison between the two systems is given below.

Disadvantages of PDH networks


A PDH signal is structured in such a way that it is impossible to extract a single 2 Mbit/s signal from within a higher-order (say 140 Mbit/s) stream. In order to cross-connect a 2 Mbit/s signal between one transmission system and another, it must first be de-multiplexed back down to its primary rate. This forms a "multiplexer mountain".

So, we need a lot of equipment just to connect 2 Mbit/s signals together. Because of this:

More space and power is taken up in racks at node sites by these equipment mountains, causing more maintenance-associated problems.

Equipment at different hierarchical levels is synchronised from different sources and at different rates, which may lead to clocking problems that can cause errors.

For a simple 2 Mbit/s signal, jumpering needs to be done at all levels that make up the individual transmission system. This leads to large amounts of physically bulky coax wiring.

PDH uses bandwidth efficiently because it has a small overhead, but this small overhead limits the management ability of PDH:

Automatic storage of route information is not available, which leads to the requirement for accurate paper records to avoid problems.

It is not possible to remotely configure equipment, and alarm monitoring is limited to reporting loss of inputs.

Protection of the transmission paths generally uses 1+1 protection and is available only at the higher PDH levels, i.e. 140 Mbit/s and above.

Summary of PDH Limitations


Interconnection between different national systems (European/North American) was difficult.

Clocking at different hierarchy levels is done individually, so slips are possible.

'Multiplexer mountain' is costly and inflexible.

Limited management functionality

Path protection is available at higher rates only.

Compared to today's standards, PDH is more prone to faults.

All these systems work fine in a stand-alone hierarchy, but they make international interconnection very difficult and costly. This was the major reason for the development of a new, internationally agreed standard: SDH.

Network Management Basics

Operational Tasks:
The following basic operational tasks are performed by a network management system:

Protection: Protection switching takes place within milliseconds (sub-50 ms), giving circuit recovery in milliseconds (the failure should not be noticeable to voice customers).

Restoration: Circuit recovery achieved in seconds by manual configuration.

Provisioning: Allocation of capacity to preferred routes (according to certain time schedules).

Consolidation: Moving traffic from unfilled bearers onto fewer bearers to reduce wasted trunk capacity.

Grooming: Sorting of different traffic types from mixed payloads into separate destinations for each type of traffic.

OAM Functions and Layers


Level 1 - Regenerator Section: loss of synchronization, signal quality degradation.

Level 2 - Multiplex Section: loss of frame synchronization, degraded error performance.

Level 3 - Path: assembly and disassembly, cell delineation control.

Data Communication Channel (DCC)


DCC is an in-band channel that facilitates communication between all Network Elements (NEs) in a network. It supports remote login, alarm reporting, software download and provisioning.

Reference Clocks & alternative clock source in SDH network.


Reference Clocks:
Let us read about reference clocks. The precision of an internal clock is classified into so-called Stratum levels. The accuracy of a reference clock determines how often a bit slip (causing a bit error) occurs:

Stratum 1 => 1 x 10^-11 (synchronization to an atomic clock)
Stratum 2 => 1.6 x 10^-8
Stratum 3E => 1 x 10^-6
Stratum 3 => 4.6 x 10^-6
Stratum 4 => 32 x 10^-6 (typical for IP routers)

When we distribute the clock in the network, the accuracy level may decrease at each hop in the clock distribution. Originally, providing Stratum 1 clocks for each network element was far from economical; even providing this service at multiple locations was too demanding. So clock distribution methods were developed to minimize the number of high-accuracy clocks needed in the network. The Global Positioning System (GPS) includes Stratum 1 atomic clocks on its satellites. Cheap GPS receivers are available in the market, and they make it possible to have a Stratum 1 time source at almost any place. This reduces the need for a time synchronization network (which might even go away in the future).

Clock Distribution Methods:

Various clock distribution methods are described below. When all equipment is at the same location, an external clock input might be used. This is usually BITS (Building Integrated Timing Supply), which uses an empty T1 or E1 framing to embed the clock signal. It might be provided as a dedicated bus reaching into each rack in a central office environment. BITS should be generated from a Stratum 1 clock, and typically it will be deployed with a hot-spare alternative source for fail-over. Network elements not close to a BITS source should recover the clock from the line. The clock distribution network should not have loops, so a tree distribution topology should be configured. Usually a carrier network element will have Stratum 3 accuracy when running free; by synchronization to the reference clock, it runs at the same rate as the reference clock (that is, Stratum 1). The minimum requirement for any network element is 20 ppm (that is, between Stratum 3 and Stratum 4).
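To get a feel for what these accuracy figures mean in practice, note that a free-running clock slips bits against a perfect reference at a rate equal to the line rate times the fractional frequency offset. A small sketch (E1 line rate, Stratum accuracies from the list above):

```python
# How quickly a free-running clock slips bits against a perfect reference:
# slips per second = line_rate (bit/s) x fractional frequency offset.

def seconds_per_bit_slip(line_rate_bps, accuracy):
    return 1.0 / (line_rate_bps * accuracy)

E1 = 2.048e6  # bit/s

for name, acc in [("Stratum 1", 1e-11), ("Stratum 3", 4.6e-6), ("Stratum 4", 32e-6)]:
    print(name, seconds_per_bit_slip(E1, acc), "s between single-bit slips")
# A Stratum 1 clock slips one E1 bit only every ~13.5 hours, while a
# free-running Stratum 3 clock slips roughly 10 bits every second - which
# is why line or BITS timing, not free-run, is used in carrier networks.
```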

Alternative Clock Sources:


If the trail to the reference clock source is lost, the network element still continues normal operation, although an alarm might be generated; the network element enters a holdover state. After some time the clock might drift away so much that bit errors occur, so some time is available for switching over to an alternative clock source. The requirement is to have fewer than 255 errors in 24 hours. A hierarchy of potential clock sources should be configured at each network element to achieve high-availability operation. Typically a maximum of 3 alternative time reference sources might be configured. This is meaningful only if there are different paths to the alternative time reference sources. If only one natural path exists to a single time reference source, then the path must be protected by automatic protection switching. This requires some extra signaling to do it properly, called SPS (Synchronization Protection Switching).

Synchronization requirements & modes of timing in SDH transmission networks


For synchronization of a transmission network, the frequency variation of transmitted bits should be within the limits determined by the next hop's ability to transmit those bits further. Stuffing allows for some limited tolerance. In order to guarantee a low BER, frequencies should be synchronized all over the network. Usually synchronization is done by recovering the embedded clock signal from the input signal. The synchronization source should have a very precise clock (the reference clock). The reference clock might be reachable only over multiple hops, but the number of hops should be minimized.

Synchronization modes for transmission networks:


In a transmission network, each network element has to be configured for time synchronization, and time reference distribution should minimize delay. The available timing alternatives are: External, Line, Loop and Through timing.

Let us see the details.

External timing: In this mode, all signals transmitted from a node are synchronized to an external source received by that node, e.g. a BITS timing source.

Line Timing :
In this mode, all transmitted signals from a node are synchronized to one received signal.

Loop timing:

In this mode, the transmit signal on an optical link, east or west, is synchronized to the received signal from the same optical link.

Through timing :
In this mode, the transmit signal in one direction of transmission around the ring is synchronized to the received signal from that same direction of transmission.
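The four modes can be summarized as a mapping from timing mode to the clock driving each transmit port. This is an illustrative-only model (the port and source names are invented for this sketch, not from any standard API, and for line timing one receive port is picked arbitrarily as the reference):

```python
# Which clock drives the east/west transmit ports under each timing mode.

def tx_clock_sources(mode):
    if mode == "external":       # both directions locked to the BITS input
        return {"east_tx": "BITS", "west_tx": "BITS"}
    if mode == "line":           # everything follows one chosen receive clock
        return {"east_tx": "east_rx", "west_tx": "east_rx"}
    if mode == "loop":           # each port echoes the clock it receives
        return {"east_tx": "east_rx", "west_tx": "west_rx"}
    if mode == "through":        # each direction keeps its upstream clock
        return {"east_tx": "west_rx", "west_tx": "east_rx"}
    raise ValueError(mode)

print(tx_clock_sources("loop"))   # {'east_tx': 'east_rx', 'west_tx': 'west_rx'}
```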

Detailed operation of BLSR & Squelching.


Operation Traffic flow :
Bi-directional traffic between two nodes is transported over a subset of the ring sections, or "spans". In this configuration, the minimum capacity equals the line rate. Capacity is generally expressed as a number of AU4s, or as bandwidth; the bandwidth is provided by an integer number of AU4 payloads.

Maximum bandwidth capacity :


Here, each span has, in each direction, a capacity of up to half the number of AU4s in the STM-N (i.e. 8 AU4 for an STM-16 section). All traffic from a node goes to adjacent nodes. Max. capacity = 0.5 x (line rate) x number of nodes. Note: this maximum is achieved only if the working traffic is transported only between adjacent nodes.
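The capacity rule above is easy to sanity-check. Each span carries half the STM-N payload as working traffic (the other half being shared protection), so with adjacent-node traffic only:

```python
# Maximum working capacity of a 2-fibre BLSR/MS-SPRing:
#   max_capacity = 0.5 * (AU4 per STM-N) * number_of_nodes

def blsr_max_au4(stm_n, nodes):
    working_per_span = stm_n // 2          # e.g. 8 AU4 per span for STM-16
    return working_per_span * nodes

print(blsr_max_au4(16, 6))   # 48 AU4 of working traffic on a 6-node STM-16 ring
```

With only two nodes this collapses to the minimum case, where the ring carries no more working traffic than a single STM-N line.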

Extra Traffic:
We can utilize the shared protection bandwidth for extra traffic. This extra traffic is not protected, and it may be lost when a protection switch occurs for working traffic.

Operations Fiber Cut :


Let us consider a scenario where the fiber is cut between A & B. We have working traffic from A to C and from C to A. This failure interrupts the A-C and C-A traffic, and Node A and Node B detect the failure.

Now Node A and Node B switch the traffic to the protection path. There is no dedicated protection bandwidth; it is only used when protection is required. Only the nodes next to the failure know about the protection switch. No traffic is lost.

Operations Node Failure:


Let us consider that we have live traffic from D to F and from F to D. If Node B fails, the failure interrupts the D-F and F-D traffic, and Nodes A and C detect the failure.

Now both Node A and Node C switch the traffic to the protection channels. Only the nodes next to the failure know about the protection switch. In this scenario, only traffic to/from the failed node is lost.
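The fail-over behaviour in these scenarios can be modelled as simple ring routing: traffic normally takes the short arc, and if one of its spans is cut, the nodes adjacent to the cut loop it the other way around the ring on the protection channels. This is a toy model with an invented six-node ring, not an implementation of the real APS protocol:

```python
# Toy model of BLSR fail-over: route the short way round unless a span on
# that route has failed, in which case loop back the long way.

RING = ["A", "B", "C", "D", "E", "F"]  # ring order

def path(src, dst, failed_span=None):
    n = len(RING)
    i, j = RING.index(src), RING.index(dst)
    hops = (j - i) % n
    clockwise = [RING[k % n] for k in range(i, i + hops + 1)]
    spans = set(zip(clockwise, clockwise[1:]))
    if failed_span and (tuple(failed_span) in spans
                        or tuple(reversed(failed_span)) in spans):
        # Loop back at the cut: take the counter-clockwise arc instead.
        return [RING[k % n] for k in range(i, i - (n - hops) - 1, -1)]
    return clockwise

print(path("A", "C"))                          # working route via B
print(path("A", "C", failed_span=("A", "B")))  # protection route via F, E, D
```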

Squelching Problem :
When a node fails, traffic terminating on the nodes cut off by the failure could be misconnected to other nodes on the ring if a purely local fail-over decision is used. Consider a scenario where we have active traffic from Node F to B, B to F and from E to B, B to E. If Node B fails:

A squelching misconnection occurs: Node F is now talking to Node E instead of Node B.

This can be avoided by path AIS insertion: STM path AIS is inserted instead of the looped STM-1#7, so no misconnections occur.

Squelching Summary :
Squelching is generally used when extra traffic is carried: when normal traffic is switched to the protection entity and replaces the extra traffic, squelching prevents the normal traffic being output in place of the original extra traffic while the protection switch is active, by outputting AU-AIS (see also clause 7.2.3.2 of ITU-T G.841). Squelching is required to ensure that misconnections are not made. It is required for bidirectional line switched rings only, since this is the only ring type to provide reuse of STM-1s around the ring. It is only required when nodes are cut off from the ring, and only for traffic terminating on the cut-off nodes.

A ring map that includes all STM and VC paths on the ring is available at every node on the ring. Squelching is also required for extra traffic, since the extra traffic may be dropped when a protection switch is required.


What should the OSNR values be in DWDM networks?

By: Jean-Sébastien Tassé | Category: Optical/Fiber Testing | Posted date: 2012-12-06 | Views: 1563

I was surfing the Web when I ran into this question on a social media platform. Since this is a very important question that calls for a detailed answer, I figured I would break it down for you here. The short answer is that, in general, you should target an OSNR value greater than 15 dB to 18 dB at the receiver, but this value will depend on many factors. The long answer is that many factors need to be taken into account, including modulation format, data rate, location in the network, type of network and the target BER level. Here are a few initial guidelines:

Dependency on the location. The required OSNR will be different at different locations in the light path: closer to the transmitter, the OSNR requirement will be higher; closer to the receiver, it will be lower. This is because optical amplifiers and ROADMs add noise, which means that the OSNR value degrades after going through each optical amplifier or ROADM. To ensure that the OSNR value is high enough for proper detection at the receiver, the number of optical amplifiers and ROADMs needs to be considered when designing a network.

Dependency on the type of network. For a metro network, an OSNR value of >40 dB at the transmitter might be perfectly acceptable, because there are not many amps between the transmitter and the receiver. For a submarine network, the OSNR requirements at the transmitter are much higher.

Dependency on the data rate. As the data rate goes up for a specific modulation format, the OSNR requirement also increases.

Dependency on the target BER. A lower target BER calls for a higher OSNR value.

Those were the guidelines, and now for some specifics. The OSNR values that matter the most are at the receiver, because a low OSNR value means that the receiver won't detect or recover the signal. The exact requirements at the receiver will vary from one manufacturer to another (contact your system provider), but see the examples below for a few average OSNR figures to guarantee a BER lower than 10^-8 at the receiver:

10 Gbit/s NRZ: OSNR greater than approximately 11 dB
40 Gbit/s NRZ: OSNR greater than approximately 17 dB
40 Gbit/s DPSK: OSNR greater than approximately 14 dB
100 Gbit/s NRZ: OSNR greater than approximately 21 dB
100 Gbit/s DPSK: OSNR greater than approximately 18 dB

Again, the rule of thumb is really that the OSNR should be greater than 15 dB to 18 dB at the receiver, but this value will depend on many factors. If you'd like to get more information about this topic, we recently conducted a webinar, in collaboration with Light Reading, that is now available online.
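The dependency on amplifier count mentioned above can be made concrete with the common back-of-envelope formula for a chain of identical EDFAs (0.1 nm noise bandwidth, around 1550 nm): OSNR(dB) ≈ 58 + P_in(dBm per channel) − NF(dB) − 10·log10(N_amps). The input power and noise figure below are illustrative values, not figures from the article:

```python
# Rule-of-thumb OSNR after a chain of N identical optical amplifiers.
import math

def cascade_osnr_db(p_in_dbm, nf_db, n_amps):
    # 58 dB comes from 10*log10 of the ASE quantum-noise floor in 0.1 nm
    return 58 + p_in_dbm - nf_db - 10 * math.log10(n_amps)

# 0 dBm per channel into each amp, 5.5 dB noise figure:
for n in (1, 4, 16):
    print(n, round(cascade_osnr_db(0, 5.5, n), 1))
# Every doubling of the amplifier chain costs 3 dB of OSNR, which is why
# the 15-18 dB receiver-side target constrains how many amps a link can have.
```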

OSNR in Next-Gen Networks: Polarization-Resolved Optical Spectrum Analysis Allows Fast and Accurate In-Band OSNR Measurement

By: Francis Audet | Category: Challenges/Solutions, Optical/Fiber Testing | Posted date: 2010-10-03 | Views: 344

It has become widely accepted that, when applied to next-generation networks, traditional IEC-recommended optical signal-to-noise ratio (OSNR) measurement techniques fail to deliver the required accuracy. As these next-gen networks employ reconfigurable optical add-drop multiplexers (ROADM) and/or multiple-bit/symbol advanced modulation formats (as is the case with most 40 Gbit/s and 100 Gbit/s transmissions), use of traditional OSNR measurements leads to either over- or under-evaluation of the OSNR. When ROADMs were first introduced into the network topology, some commercial optical spectrum analyzers (OSAs) implemented the polarization-nulling method to measure OSNR within the filtered dense wavelength-division multiplexing (DWDM) channels. Unfortunately, the complexity of the networks has rendered polarization nulling unsatisfactory in many realistic cases. In this article, we will describe an approach that not only relies on the relative differences in the polarization properties of the data-carrying signal and noise but also leverages their respective spectral properties. What's more, we will review both methods and discuss their limitations.

Polarization Nulling
The polarization-nulling approach is predicated on the normally realistic assumption that the light signal under test is typically highly polarized and that the superposed noise is not. The measurement setup includes a polarization controller, a polarization splitter and a dual-channel scanning monochromator (i.e. an OSA), as illustrated in Figure 1 (note that detection, signal processing and display electronics have been omitted for the sake of simplicity).

Figure 1. Schematic of an OSA used for the polarization-nulling technique

The polarization-nulling method involves the adjustment of the (internal) polarization controller in order to extinguish, to the highest degree practicable, the signal in one of the two detection channels of the OSA. Such adjustment corresponds to one of the two outputs of the polarization splitter (acting as a polarizer) being aligned orthogonally with respect to the (polarized) signal. When this is achieved, the only light reaching the detector of that branch, at that particular wavelength, corresponds to half the noise power (since noise is assumed to be unpolarized, and hence split equally between the two branches). This provides a measured noise level, and since the OSA also measures the total power (noise + signal), the OSNR for that particular wavelength can be calculated. In order for polarization nulling to provide acceptable measurements, the DWDM channels under test and the OSA itself must meet the following criteria:

Polarization-mode dispersion (PMD) on the link should be very modest, preferably near zero. PMD partially depolarizes the signal within the resolution bandwidth (RBW) of the OSA, so complete nulling becomes impossible. In practice, there is some tolerance to PMD, especially if the OSNR to be measured is not extremely high (e.g., <20 dB) and the RBW of the OSA is very narrow. Nonetheless, it ultimately limits the maximum attainable OSNR values.

The polarization optics in the OSA, notably the polarization splitter, must be able to provide a very high extinction ratio, since the maximum measurable OSNR is directly dependent on the extinction ratio.

The polarization controller in the OSA must be able to rapidly find, for each of several wavelengths within the DWDM channel, the state of polarization (SOP) condition corresponding to the maximum extinction ratio. However, for OSNR values greater than 20 dB, this can become very time-consuming (i.e., long acquisition times) and, in fact, may not even be practical to measure accurately. This is exacerbated when one desires to characterize multiple DWDM channels simultaneously, for instance 16 or more channels extending over much of the C band.
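The arithmetic behind the nulling measurement is straightforward: with the nulled branch seeing half the unpolarized noise, doubling its reading gives the total noise, and subtracting that from the total measured power gives the signal. The power values below are illustrative, not measured data:

```python
# OSNR from a polarization-nulling measurement, as described above.
# Powers in mW within the measurement bandwidth.
import math

def nulling_osnr_db(p_total_mw, p_nulled_branch_mw):
    noise = 2 * p_nulled_branch_mw        # unpolarized noise splits 50/50
    signal = p_total_mw - noise           # total power = signal + noise
    return 10 * math.log10(signal / noise)

# 1.01 mW total, 0.005 mW measured in the nulled branch -> 0.01 mW of noise
print(round(nulling_osnr_db(1.01, 0.005), 1))   # 20.0 dB OSNR
```

The sensitivity problem is visible here too: at high OSNR the nulled-branch power becomes tiny, so any residual (incompletely nulled) signal in that branch swamps the noise reading.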

Polarization-Resolved Optical Spectrum OSNR (PROS OSNR)


Like polarization nulling, the polarization-resolved optical spectrum OSNR (PROS OSNR) approach exploits the different degrees of polarization of the signal under test (i.e., the data-carrying signal) and the superposed noise to measure the OSNR within a DWDM channel. Again, as with polarization nulling, a polarization-diverse OSA arrangement (similar to that illustrated in Figure 1) is required to undertake the measurements. However, unlike polarization nulling, which requires (near) complete extinction of the signal in one arm (e.g., 30 dB extinction is required for 20 dB OSNR sensitivity with 0.5 dB uncertainty), a few decibels of difference in the orthogonally-analyzed measured signals of the polarization-diverse OSA suffice to measure an OSNR in excess of 20 dB, and considerably higher OSNR values can be measured with resolved differences greater than 10 dB, which is still far from complete extinction in most cases (e.g., 20 dB OSNR sensitivity with the same 0.5 dB uncertainty can be achieved with less than 5 dB extinction). Hence, nulling of the signal, i.e., essentially complete extinction of the signal in one of the two OSA channels, is not necessary to obtain excellent OSNR sensitivity with the PROS OSNR approach. Given that the polarization-nulling and PROS OSNR methods are both based on a common basic principle (i.e., the differential polarization response of the signal versus the noise) and both employ essentially the same measurement hardware, it is frequently asked how they can exhibit such a substantial difference in OSNR measurement performance. To highlight this, an analogy may be useful: a mechanical lever permits a heavy load to be displaced over a short distance by using a much smaller force, but this comes at the expense of applying that force over a longer distance.
Here, the differential polarization response leverages the noise measurement capability at the expense of requiring measurements to be taken over a certain wavelength range. Put another way, provided that the previously mentioned conditions have been satisfied, polarization nulling can in principle measure the underlying noise level at a single wavelength without acquiring an optical spectrum. PROS OSNR, on the other hand, requires measurements to be taken over at least a few wavelengths within the DWDM channel of interest to obtain the noise level. In practice, however, the spectrum of the light within the DWDM channel is usually measured with polarization nulling anyway, in order to calculate the OSNR from the noise level at the optimal position within the channel; hence, obtaining a spectrally-resolved polarization difference response does not entail any additional practical drawbacks with respect to the polarization-nulling approach. As mentioned above, instead of requiring one of the two (orthogonally-analyzed) signals detected in the OSA to be completely nulled, the polarization-resolved optical spectrum method only requires a few decibels of difference, which can be achieved much more rapidly than full nulling. The detected spectra in the two OSA channels, comprising different signal levels but similar overall noise contributions, are then subtracted from each other. The two noise contributions are equal and hence cancel, resulting in a spectrum difference that is proportional to the polarized signal only (i.e., noise-free).
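The spectrum subtraction at the heart of the method can be sketched numerically. In this toy example the signal shape, noise floor and the signal split fraction `k` are all synthetic, and the proportionality factor is taken as known (in the real method it is derived from the RBW analysis described later):

```python
# PROS OSNR core idea: the two orthogonally analyzed spectra carry different
# shares of the polarized signal but equal halves of the unpolarized noise,
# so their difference is noise-free and proportional to the signal alone.

signal_shape = [0.1, 1.0, 4.0, 1.0, 0.1]   # polarized signal, arbitrary units
noise_floor = [0.2] * 5                     # unpolarized ASE, flat here

k = 0.8  # fraction of the signal landing in branch 1 (no nulling needed)
branch1 = [k * s + n / 2 for s, n in zip(signal_shape, noise_floor)]
branch2 = [(1 - k) * s + n / 2 for s, n in zip(signal_shape, noise_floor)]

diff = [a - b for a, b in zip(branch1, branch2)]   # noise cancels exactly
reconstructed = [d / (2 * k - 1) for d in diff]    # rescale by (2k - 1)

print([round(x, 3) for x in reconstructed])   # recovers signal_shape
```

Note that even a modest split (k = 0.8, i.e. about 6 dB of diversity) is enough for exact noise cancellation; this is the quantitative reason full nulling is unnecessary.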

Figure 2. The polarization-diversity approach splits the signal without requiring nulling on either branch

In this way, not only does the OSA acquire the contribution of the signal only (without noise), it also provides the signal shape down to, and even below, the noise level, which contributes to the performance of the polarization-resolved optical spectrum method.

Figure 3. Total signal, cross-polarized signals and reconstructed signal

Since full polarization extinction is not required, the PROS OSNR approach is much more robust than polarization nulling:

It is not as susceptible to PMD-induced depolarization as polarization nulling, which assumes a completely polarized signal within the RBW (as long as part of the signal is polarized, sufficient discrimination is achieved).

It is not limited by the polarization-extinction ratio, and hence much larger OSNR values can be measured accurately; equivalently, for a given OSNR sensitivity, the measurement can be performed much more rapidly.

It is much faster to achieve a few decibels of diversity than complete nulling of a signal, especially for multiple-wavelength transmission (e.g. DWDM).

In order to fully exploit the PROS OSNR approach, it is important to properly generate the reconstructed signal corresponding to the polarization-resolved difference spectrum, which is proportional to the (data-carrying) signal but not equal to all of it. To do so, a proportionality factor must be used to determine the effective contribution of the signal. There are many ways to obtain this proportionality factor; a particularly robust approach, fully exploiting the benefits of PROS OSNR determination, will now be presented in more detail. With the polarization-resolved optical spectrum approach, the spectrum is acquired at the narrowest OSA resolution bandwidth (RBW), and the subsequent signal-processing analysis is carried out with several different numerically-synthesized wider RBWs. Assuming that the signal spectrum and the noise spectrum have different shapes (ideally, the noise spectrum will not vary in wavelength over most of the data-carrying signal bandwidth), as the RBW increases, the relative contribution of the (data-carrying) signal and noise will vary, and this variation will be more marked on the steep slopes of the (data-carrying) signal spectrum. Analysis of this wavelength-dependent ratio allows the proportionality relation to be determined, completing the signal reconstruction.

Figure 4. Varying the effective bandwidth by data processing enables more or less power to be included in the analysis

Figure 4 illustrates a signal analyzed using the polarization-resolved optical spectrum OSNR (PROS OSNR) method. First, the channel light is detected using two different effective RBWs (BW1 and BW2). By itself this does not yield relevant signal or noise information, but since the signal shape was reconstructed (light blue curve in Figure 4) using the polarization-resolved optical spectrum method, the difference in signal contribution between the larger bandwidth (BW2) and the narrower bandwidth (BW1) is known. Therefore, one can also determine the noise within that bandwidth. Furthermore, since this step is undertaken during post-processing and not during acquisition, several bandwidths may be chosen and analyzed to optimize accuracy and repeatability. The PROS OSNR analysis helps overcome the practical limitations inherent to the polarization aspects of the technique and allows the determination of the noise profile. Due to its flexibility, notably in finding the proportionality factor for reconstructing the (data-carrying) signal, the PROS OSNR technique yields noise-profile information even when the noise is more narrowly spectrally carved than the signal spectral extent, as can be the case, for example, with 40 Gbit/s differential phase-shift keying (DPSK) modulated signals passing through multiple cascaded ROADMs and amplifiers, or when the noise is not split equally between the two branches of the OSA (slightly polarized noise, e.g., from polarization-dependent loss (PDL) encountered by the ASE).

Comparing the Approaches


In cases of either high PMD or high OSNR, PROS OSNR significantly outperforms an approach that is polarization-based only, especially one based solely on polarization nulling. Figure 5 illustrates the performance of three different sets of measurements as a function of OSNR: one based on polarization nulling, the other two using PROS with different RBWs. The plotted uncertainty comprises both a systematic offset from the expected value (based on a carefully pre-calibrated test bed) and a random measurement uncertainty (standard deviation).

Figure 5. Total OSNR uncertainty vs. OSNR for different OSNR measurement techniques

As predicted, PROS remains highly accurate at high OSNR, with a much faster testing time. Moreover, for such a measurement approach, a more expensive OSA with a very sharp resolution bandwidth does not offer any marked advantage for measuring OSNR. Figure 6 below shows the same three OSAs at a given OSNR level (25 dB): the presence of PMD induces an additional random measurement uncertainty, which is more significant with the polarization-nulling method. (To better highlight this additional random PMD-induced uncertainty, the large systematic offset caused by the susceptibility of the polarization-nulling method to PMD-induced depolarization is not plotted here.)

Figure 6. Standard deviations of PMD-induced random uncertainty for different methods (strong-coupled PMD emulator)

Hence, in addition to having a limited achievable extinction and thus impaired ability to measure large OSNR, the polarization nulling approach suffers from augmented random uncertainty as the PMD increases. Since most of the advanced modulation formats being employed in next-generation networks tolerate higher link PMD, it will be increasingly important that OSNR measurement on such networks be relatively tolerant to PMD.

Conclusion
Networks are increasing in complexity, both in terms of cascaded filtering (e.g., via ROADMs in mesh networks) and (OSNR-sensitive) multiple-bit/symbol modulation formats. For instance, intra-channel noise will increasingly be spectrally carved by filters, the PMD tolerance of networks will increase, and signal bandwidths will frequently be as large as the effective channel widths. Nevertheless, OSNR remains a critical network performance parameter. Purely polarization-based OSNR measurement techniques (e.g., polarization nulling) can perform well when networks and noise sources remain simple, but as demonstrated herein, the robustness and performance of PROS-based OSNR measurement render it well suited for advanced network architectures and modulation formats.

Generic Framing Procedure for Mapping 10 GigE LAN PHY over Optical Transport Networks

By: Mai Abou-Shaban | Category: Challenges/Solutions | Posted date: 2011-03-03 | Views: 386

Traffic growth and the continuous expansion of enterprise local area network (LAN) traffic have proven to be major drivers for transporting Ethernet across metropolitan and core optical networks. Employing an optical transport network (OTN) as per the ITU-T G.709 standard provides a cost-efficient transport layer that supports dense wavelength division multiplexing (DWDM) transparency while leveraging many SONET/SDH concepts. This allows OTN to serve as a converged transport layer for new packet-based and existing time division multiplexing (TDM) services.

OTN line rates and their corresponding payloads were originally defined to match SONET/SDH signals. Unfortunately, the 10 GigE LAN PHY signal (10.3125 Gbit/s) does not fit into a standard OTU2 signal with its OPU2 (9.995 Gbit/s) payload rate. Using over-clocked OTN rates, OTU1e (11.0491 Gbit/s) and OTU2e (11.0957 Gbit/s) as per ITU-T Series G, Supplement 43, provides a straightforward solution by simply increasing the OPU2 container rate appropriately. However, the resulting over-clocked rates do not support interfacing with standard OTU2 clients, limiting their deployment to point-to-point configurations.

Alternatively, a new approach proposed in ITU-T G.7041 generic framing procedure (GFP) (amendment 1) and ITU-T G.709 OTN (amendment 3) retains the benefits of a standard OTN signal, such as frame structure and line speed, while adopting a cost-efficient mapping solution for 10 GigE LAN PHY client signals. This approach takes a 10 GigE LAN PHY signal at the 10.3125 Gbit/s rate and intelligently segments it in order to map all of the information into a standard OTU2 (10.7 Gbit/s) signal. In this mapping process, the Ethernet preamble and ordered sets are transported, while inter-frame gaps (IFG) and control codes (idle, error and reserved codes) are not. In addition, mapping the GFP-F frame into OTU2 does not require the OTN OPU justification control overhead bytes.
This frees bytes 1, 2, 3 and 4 of column 16 of the OTN frame, in addition to the three reserved bytes in column 15, for a total of seven bytes used to extend the OPU payload, as shown in Figure 1. In this mapping scheme, the payload type (PT) field of the OPU overhead is set to 0000 1001, designating 64B/66B mapping of GFP-F frames.
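The rate arithmetic behind this mapping can be checked in a few lines. The sketch below uses illustrative assumptions (maximum-size 1518-byte frames, an 8-byte per-frame GFP-F overhead) rather than figures from the standards; it shows why raw 10 GigE overflows OPU2, while the stream fits once IFGs and idle codes are stripped:

```python
# Back-of-envelope rate check for the mapping described above. Assumptions
# (not from the standards): maximum-size 1518-byte frames, 8-byte preamble,
# 12-byte IFG, and an illustrative 8-byte per-frame GFP-F overhead.
LINE_RATE = 10.3125e9             # 10 GigE LAN PHY serial rate, bit/s
MAC_RATE = LINE_RATE * 64 / 66    # rate before 64B/66B coding: 10.0 Gbit/s
OPU2_PAYLOAD = 9.995e9            # standard OPU2 payload rate (approx.), bit/s

frame, preamble, ifg, gfp = 1518, 8, 12, 8
slot = frame + preamble + ifg     # bytes occupied per frame on the Ethernet side
carried = frame + preamble + gfp  # bytes actually mapped (IFG and idles dropped)
mapped_rate = MAC_RATE * carried / slot

print(f"raw 10GigE   : {LINE_RATE / 1e9:.4f} Gbit/s (> OPU2 payload, does not fit)")
print(f"after mapping: {mapped_rate / 1e9:.4f} Gbit/s (< OPU2 payload, fits)")
```

The exact margin depends on the traffic's frame-size mix, but the direction of the result is the point: dropping the IFG and control codes is what lets a 10.3125 Gbit/s client ride a standard OTU2.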

Figure 1. Extended OPU2 payload used for mapping 10 GigE LAN PHY

Today, EXFO's FTB-8130NGE Power Blazer Next-Generation Multiservice Test Module offers a complete suite of capabilities for commissioning, turning up and qualifying end-to-end 10 GigE services over OTN, as shown in Figure 2.

Figure 2. 10 GigE over GFP over OTN BER testing using FTB-8130NGE

Running the FTB-8130NGE in SONET/SDH Analyzer (SSA) mode, the user can configure a 10 GigE stream with a pseudo-random bit sequence (PRBS) test pattern over GFP and into an OTU2 payload. In this process, the FTB-8130NGE Power Blazer provides full GFP statistics as well as OTN alarms and errors for qualifying the OTN transport layer. The bit-error-rate (BER) test is typically executed over an extended period of time before the customer link turn-up. Finally, using the FTB-8130NGE Power Blazer in Packet Analyzer mode provides 10 GigE LAN PHY RFC 2544 testing for performance availability, transmission delay, link burstability and service integrity measurements, allowing carriers to certify the delivered 10 GigE service against their customers' service-level agreements (SLAs).

Conclusion
Today, OTN enables the evolutionary next step in 10 GigE LAN PHY deployments while providing optimal utilization of transport capacity. EXFO's FTB-8130NGE Power Blazer offers a complete suite of testing capabilities for commissioning 10 GigE LAN services over OTN networks and for qualifying end-to-end services, providing a baseline for service providers to define SLAs with their customers.

Optical Spectrum Analyzers in Next-Gen Networks

By: Francis Audet | Category: Challenges/Solutions, Optical/Fiber Testing | Posted date: 2010-09-03 | Views: 260

Several high-bandwidth networks have recently been upgraded to include reconfigurable optical add/drop multiplexers (ROADMs) to improve efficiency and flexibility. ROADMs allow networks to remotely change the wavelengths dropped from or added to the express route in order to optimize bandwidth; for example, by not dropping wavelengths when bandwidth is not required. At the heart of a ROADM is a wavelength-selectable switch (WSS), which is used to redirect any wavelength to any direction; the WSS operates independent of color, direction and contention. While this offers network flexibility, scalability and security, it also changes the testing rules.

The function of the receiver is to provide the demodulator with the cleanest electrical signal it can extract from the optical signal it receives. To determine how good a signal traveling through a dense wavelength-division multiplexing (DWDM) system is, an optical spectrum analyzer (OSA) is used to measure the optical signal-to-noise ratio (OSNR) of that signal. The OSA can characterize the headroom between the peak power and the noise floor at the receiver for each channel; this value provides an indication of the readability of the received signal, a parameter of increasing interest as the limits are pushed farther and farther to accommodate long-distance applications. The OSNR is the ratio between the signal, or meaningful information, and the background noise. OSNR provides indirect information about the bit-error rate (BER), which makes it the most useful parameter available from the measured spectrum; that is why it is listed as an interface parameter in ITU-T Recommendations G.692 and G.959.1.

Conventional OSNR Measurement Methods


The conventional method for determining the OSNR is called interpolation, as defined by the IEC-61280-2-9. It requires that the noise level be measured at the mid-point between two peaks and that a linear interpolation be performed. The noise under the peak can then be estimated, and the OSNR is calculated.

This method makes two assumptions, which were accurate for non-next-gen networks: first, it assumes that this noise is flat across the analysis band; and second, it assumes that the signal between channels does in fact go down to the noise level.
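As a sketch, the interpolation method described above reduces to the following arithmetic (toy power levels, not real OSA data):

```python
import math

def interpolation_osnr(peak_dbm, left_noise_dbm, right_noise_dbm):
    """OSNR (dB) assuming the noise under the peak equals the linear
    average of the noise measured at the two adjacent mid-points."""
    n_left = 10 ** (left_noise_dbm / 10)    # dBm -> mW
    n_right = 10 ** (right_noise_dbm / 10)
    n_under_peak = (n_left + n_right) / 2   # linear interpolation
    p_signal = 10 ** (peak_dbm / 10) - n_under_peak
    return 10 * math.log10(p_signal / n_under_peak)

# Toy channel: -10 dBm peak, noise at -35 dBm and -33 dBm on either side
print(round(interpolation_osnr(-10.0, -35.0, -33.0), 2))
```

The result is only as good as the two assumptions that follow; when either fails, this simple calculation is systematically wrong.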

Assumption 1: Flat Noise


A schematic of a ROADM is illustrated below. ROADMs are typically placed at important nodes and since they replace routers, there is no signal regeneration within the ROADM itself. Therefore, an optical line amplifier is used to boost the signal before entering the ROADM. The WSS is then used to select which wavelength goes where at any given time.

A broadband signal, such as the amplified spontaneous emission noise of any of the amplifiers going through any WSS, gets filtered, and the output is more or less an image of the filters (or cascaded filters after several ROADMs).

It is clear, in this case, that the assumption that the noise is flat does not hold true. Using the interpolation method will therefore under-estimate the noise level, leading to an erroneous assessment that performance is better than it actually is and creating a false sense of security. Using Channel 13 in the image below as an example, a traditional OSA using the interpolation method would have measured the noise at the level of the red bar, whereas the true noise is at the blue bar. OSNR is not as high as measured and perceived, so it must be measured where it is significant; i.e., under the peak, referred to as in-band OSNR.

Assumption 2: Reaching Noise Level between Channels


The advent of faster bit rates, such as 40 Gbit/s for example, brings with it a much more complex and broader spectral shape, as shown in the image below:

In an effort to maximize capacity, and since most long-haul and metro networks are already prepared for 50 GHz channel spacing, this spacing needs to be kept when handling faster speeds. This means that optical signals are closely spaced and may overlap, thus hiding the noise baseline between them. Using the interpolation method will lead to an over-estimation of the noise level, which in turn suggests that performance is not as good as it really is and that there is a problem on the network. Using Channels 73 and 74 in the image below as an example, a traditional OSA using the interpolation method would have measured the noise at the level of the red bar, whereas the true noise is at the blue bar. The OSNR in this case is higher than measured and perceived. As mentioned previously, noise must be measured where it is significant; i.e., under the peak, referred to as in-band OSNR.

Dealing with the Mix

A complex next-gen network can very well carry both 10 Gbit/s and 40 Gbit/s signals, all going through cascaded ROADMs, so both of the situations described above apply. Here is an example:

The first highlighted zone, which includes the first 10 channels, is a condition of 10 Gbit/s transmission, where the noise floor is quite visible, but the noise is obviously not flat (the section before 1545 nm proves this). Highlighted below is the mistake a traditional OSA would make:

Channel Number             1      2      3      4      5      6      7      8      9     10
Interpolation OSNR (dB)  24.09  23.28  22.84  22.80  22.41  22.00  22.48  23.08  23.14  23.92
True OSNR (dB)           21.27  22.57  21.93  20.65  19.90  19.34  19.28  19.79  20.40  20.49
Error (dB)                2.82   0.71   0.91   2.15   2.51   2.66   3.20   3.29   2.74   3.43

The second highlighted zone, from Channel 11 to Channel 22, reveals the other problem: larger signals hide the noise line (plus the fact that the noise is not flat):

Channel Number            11     12     13     14     15     16     17     18     19     20     21     22
Interpolation OSNR (dB) 14.98  14.32  14.44  13.40  15.28  14.63  14.85  14.55  13.40  13.63  15.06  17.69
True OSNR (dB)          17.73  16.65  17.93  18.10  19.12  18.10  17.49  18.55  17.63  17.79  18.39  20.35
Error (dB)              -2.75  -2.33  -3.49  -4.70  -3.84  -3.47  -2.64  -4.00  -4.23  -4.16  -3.33  -2.66
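The "Error (dB)" row is simply the interpolation result minus the true OSNR; reproducing it for Channels 11-22 confirms that interpolation under-estimates the OSNR throughout this zone (every error is negative):

```python
# "Error (dB)" above is interpolation OSNR minus true OSNR, reproduced
# from the table data for Channels 11-22.
interp = [14.98, 14.32, 14.44, 13.40, 15.28, 14.63, 14.85, 14.55, 13.40, 13.63, 15.06, 17.69]
true =   [17.73, 16.65, 17.93, 18.10, 19.12, 18.10, 17.49, 18.55, 17.63, 17.79, 18.39, 20.35]
errors = [round(i - t, 2) for i, t in zip(interp, true)]
print(errors)  # every error is negative: OSNR is under-estimated here
```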

Therefore, regardless of what type of next-gen network you have or are planning to deploy, if it includes either ROADMs or 40 Gbit/s and faster modulations, any traditional OSA will fail to produce a proper result. This is why next-generation OSAs, based on a technique known as hybrid detection, are required for accurate in-band OSNR measurements.


Live Fiber Monitoring in CWDM NetworksPart 3

By: Olivier Plomteux | Category: Challenges/Solutions, Optical/Fiber Testing | Posted date: 2010-08-03 | Views: 293

Part 2 of this article examined out-of-band 1650 nm OTDR testing as well as in-band use of a CWDM channel for OTDR testing, in addition to taking an in-depth look at testing through a broadband tap coupler of CWDM optics.

Practical Examples of In-Band Remote Testing and Proactive Monitoring


While out-of-band testing at 1650 nm is used more and more in coarse wavelength-division multiplexing (CWDM) networks, the in-band approach, which uses one dedicated CWDM channel, is rarely deployed, although it provides network maintenance teams with additional benefits:

- Test path(s) can be established directly from the CWDM passive optics installed for transmission, provided at least one channel is reserved for testing purposes;
- The loss budget remains the same with or without testing capabilities added;
- The attenuation measured on the test channel is the same as, or very close to, that of the other channels, which makes it easier to determine whether a degradation actually affects transmission;
- Selecting a lower wavelength (e.g., 1470 nm) can reduce the effect of Raman noise compared to using 1625 or 1650 nm, although the larger effective-mode-area fibers available today tend to reduce these non-linearities.

A network test setup is presented below (on a reduced scale); it includes a test unit located at a ring major hub, and can be extended to a larger network by replicating this scheme. Each fiber pair is tested from the central location using a 1470 nm channel. Provision for no single point of failure can be added at the opposite side of the link, enabling bidirectional testing. Test units are connected over TCP/IP in a ring topology, which creates a testing topology with no single point of failure. When a different channel, such as 1490 nm, is used, each fiber can be tested from both sides at the same time. This can also help diagnose and locate specific fiber degradations that could otherwise be missed, or reported with lower precision, if tested from one side only.

Figure 1. Simple CWDM network with OTDR monitoring on each live fiber: unidirectional, or optionally bidirectional, on each strand.

In the above topology, each link extends up to 20 dB of fiber attenuation, which is typically the total attenuation that EXFO's CWDM optical time-domain reflectometer (OTDR) can measure from one end while still detecting and locating small degradations. A single fiber carrying CWDM in both directions closes the two long east/west paths; this fiber must be tested with one additional unit, since the hub test unit cannot measure well beyond the attenuation created by the fibers connected to it. Again, even for this fiber, surveillance and remote testing can be performed at 1470 nm, or bidirectionally with the 1470/1490 nm channels. Assuming that each CWDM filter adds a 2.5 dB loss, this requires that the OTDR be able to measure at least 25 dB of optical attenuation, in which we include the optical switch unit as well as the jumper insertion loss. This can be achieved using a 10 μs pulse on the OTDR, and requires an acquisition time of no more than 15 seconds. The dynamic range, as per the standard OTDR definition, is 40 dB for this module.
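The measurement budget stated in this paragraph works out as follows (the 20 dB fiber and 2.5 dB filter figures are from the text; the arithmetic is the only addition):

```python
# Measurement-budget check for the hub OTDR (loss values from the text).
fiber_loss = 20.0                     # dB, maximum fiber attenuation per link
cwdm_filters = 2 * 2.5                # dB, a 2.5 dB CWDM filter at each end
required = fiber_loss + cwdm_filters  # = 25 dB minimum (the text folds switch
                                      #   and jumper losses into this figure)
dynamic_range = 40.0                  # dB, standard OTDR definition, 10 us pulse
print(f"required: {required} dB, headroom: {dynamic_range - required} dB")
```

The 15 dB of headroom between the 40 dB dynamic range and the 25 dB requirement is what allows the short 15-second acquisition while still resolving small degradations.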

Figure 2. 10 μs pulse width; 15-second OTDR acquisition on a live CWDM fiber with 20 dB attenuation.

The above trace was taken with 10 dBm CWDM TX injection power in live-fiber conditions. The OTDR detects the back-scattered signal but also the diffused or scattered power from the other signals, which cannot be totally eliminated using extra band-pass filters. Below, we compare the same link tested dark and live, as per the above conditions (the dark fiber is in red):

Figure 3. Comparison between a dark (red) and live (black) fiber; live is 10 dBm injected power into the fiber being tested. Extra isolation was added around the OTDR channel for cases where transmitters are at the opposite side of the link or when an OTDR at the opposite side is testing at the same time.

The figure below illustrates an implementation example of cable and fiber monitoring from a central location to two remote premises connected optically from end to end. Here, we assume that the distances and attenuation are consistent with CWDM standards from the central site to a remote location. The CWDM OTDR is used at 1470 nm; it monitors the two live fibers in the main loop in co- and counter-propagation, and at the passive nodes, the OTDR signal from one of the fibers is redirected to a maintenance dark fiber of the access line.

Figure 4. Implementation example in a CWDM all-optical network, connecting remote sites such as base station or SAN/datacenter to central location with a redundant path. OTDR at 1470 nm is used on the fiber pair to test the metro-loop live fibers. One of the two metro fibers would continue and test the access cable integrity using a dark fiber.

When Is Bidirectional Monitoring on the Same Fiber Required?


In the context of establishing fiber surveillance on live CWDM links, one may ask: should I monitor these links in both directions? With the CWDM optical link budget in the 28 to 30 dB range, minus the CWDM filter insertion loss (estimated 5 dB total) and a 3 dB margin, CWDM communication can be established on fibers with up to 20 to 22 dB of fiber attenuation at most. At these levels of fiber attenuation, a large and/or reflective degradation can be detected and located precisely and easily with a single-ended, unidirectional OTDR test. The following are cases where bidirectional testing is suggested:

- Monitoring with higher resolution: using a 10 μs pulse width, as per the above results, makes it inherently difficult to locate a degradation with high accuracy. Testing with a 1 or 2.5 μs pulse width increases this accuracy but reduces the achievable measurement range;
- If the link is not spliced and reflections exist, the reflective events measured on the OTDR may hide events farther down the fiber within the attenuation dead zone. Testing from the opposite side can then provide more detail about the actual nature, and a more accurate position, of the degradation;
- The optical network is designed to scale up using DWDM channels on the 1530 and 1550 nm CWDM channels; this could raise the TX power in the fiber and the resulting spontaneous Raman scattering noise (if the measurements are impaired by this undesirable noise, then testing from both sides may become a necessity for adequate coverage);
- No single point of failure is allowed in monitoring: if the fiber has to be monitored with 100% availability, then testing from both sides becomes a must.
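The link-budget reasoning in the opening of this section, spelled out (all values from the text):

```python
# CWDM link-budget arithmetic from the paragraph above (values from the text).
link_budget = (28.0, 30.0)   # dB, typical CWDM optical budget range
filter_loss = 5.0            # dB, estimated total CWDM filter insertion loss
margin = 3.0                 # dB, engineering margin
max_fiber_loss = tuple(b - filter_loss - margin for b in link_budget)
print(max_fiber_loss)        # -> (20.0, 22.0) dB max fiber attenuation
```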

Figure 5. An OTDR scan with 2.5 μs pulse width (in black) and 10 μs pulse width (in red), both with 15-second acquisition time. With the 2.5 μs pulse width, a 1 to 2 dB degradation in the last section of the fiber would barely be detected, especially if it is non-reflective (i.e., a splice degradation or bend). It may then be required to test from both sides of the fiber, as proposed, at two different CWDM wavelengths.

Monitoring with High- or Low-Resolution OTDR Measurement


As indicated above, the ability to precisely locate a fiber fault depends on the test parameters used, mainly the OTDR pulse width and averaging time, but also on where the event is located, the attenuation level from the injection point, and the nature of the degradation. On some very long links, if the degradation occurs at the far end, it is useless to try to test with a shorter pulse width, as averaging for up to three minutes will typically increase the measurement range by only 1 to 2 dB. One then has no choice but to test with a longer pulse width. In this case, it is important to remember that the event-positioning accuracy is degraded compared to smaller pulse widths (e.g., 1 μs or 2.5 μs). In the case of a reflective event, as illustrated below, testing with multiple pulse widths leads to a noticeable difference in positioning accuracy:

Figure 6. A connector with 2 dB degradation (typically should be 0.2 to 0.5 dB) tested with various OTDR pulse widths. Inherently, the analysis for positioning the event cannot be made with the same accuracy.

EXFO at the Forefront of CWDM Testing


NQMSfiber is a centralized test and monitoring system that provides your maintenance organization with a tool to better manage your precious resources, get early warnings of fiber issues and speed up your outside-plant restoration process. Once again, EXFO demonstrates its innovation in applying its CWDM OTDR technology to the fiber-monitoring application. This application opens the door to in-band testing without any traffic impairment or the additional budget expense of implementing an out-of-band test route, while also enabling testing of all CWDM channels from one central site for testing circuits before service activation.


Live Fiber Monitoring in CWDM NetworksPart 2

By: Olivier Plomteux | Category: Challenges/Solutions, Optical/Fiber Testing | Posted date: 2010-07-03 | Views: 313

Part 1 of this article addressed the design and implementation of preventive monitoring on live, traffic-intensive coarse wavelength-division multiplexing (CWDM) optical fiber links.

Out-of-Band 1650 nm OTDR Testing


Out-of-band testing for coarse wavelength-division multiplexing (CWDM) requires the use of a 1650 nm optical time-domain reflectometer (OTDR) instead of 1625 nm, because the last CWDM channel at 1611 nm is too close to 1625 nm, the wavelength traditionally used for live-fiber testing of 1310/1550 nm circuits. For this, a three-port wavelength-division multiplexer (WDM) needs to be added on the line to inject the OTDR signal into the fiber carrying the CWDM channels. If it is not inserted prior to CWDM commissioning, live testing will not be possible without a traffic interruption long enough for device connection and test. Out-of-band testing adds a minimum 1 dB penalty to the link budget, which needs to be taken into account at the circuit design stage. In cases where multiple sections need to be tested from a central site in one acquisition, bypass site(s) using DEMUX/MUX functions have to be planned, which increases link attenuation by another 1 dB.

Figure 1. Out-of-band OTDR testing at 1650 nm of CWDM links

Figure 2. Bypassing a node for testing end-to-end.

At the end of the line, the 1650 nm test wavelength is usually rejected by the CWDM filter. Care must be taken to ensure that the CWDM filter at the end of the path provides at least 30 to 35 dB of isolation at 1650 nm. If not, the same WDM filter used for injection and bypass can be installed at the end of the line to act as a rejection filter for 1650 nm. Testing out-of-band at 1650 nm has the advantage of quickly revealing unwanted bending of the cable or of an individual fiber strand, as this wavelength is less tightly confined to the fiber core. The difference in fiber attenuation between 1650 nm and 1550 nm is not larger than the difference between 1310 nm and 1550 nm: in most fibers available today, the attenuation at 1650 nm is typically about 0.25 dB/km, compared to 0.2 dB/km at 1550 nm. For a 60 km link tested at 1650 nm, this is within 3 dB of what would be measured at 1550 nm on a fiber with no excessive bending. It is therefore recommended to record a construction trace at 1625 nm, or if possible at 1650 nm, capturing the difference in attenuation measured at both wavelengths, for consistency between measurements taken before and after the fiber goes live. Obviously, implementing a 1650 nm testing path throughout the network can, in some cases, be troublesome and adds cost and complexity, which is not always welcome.
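The 60 km comparison works out as follows, using the typical attenuation coefficients quoted above:

```python
# Attenuation arithmetic for the 60 km example (typical coefficients from
# the text: ~0.25 dB/km at 1650 nm vs ~0.20 dB/km at 1550 nm).
length_km = 60
att_1650 = 0.25 * length_km
att_1550 = 0.20 * length_km
delta = att_1650 - att_1550
print(f"{att_1650:.1f} dB at 1650 nm vs {att_1550:.1f} dB at 1550 nm "
      f"(delta {delta:.1f} dB)")
```

Any excess beyond this roughly 3 dB wavelength-dependent difference points to bending loss, which is exactly what the 1650 nm trace is meant to reveal.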

In-Band Use of a CWDM Channel for OTDR Testing


The in-band method uses one or multiple CWDM channels for OTDR testing. This is a straight connection of a CWDM-enabled OTDR into the MUX/DEMUX CWDM filter used for transmission.

Figure 3. In-band OTDR testing using one unused CWDM channel.

Despite the fact that one channel becomes dedicated to testing, the in-band method can be implemented after the network has been commissioned, which is a great advantage compared to the out-of-band approach. It is also simple to establish a test route. When passive add/drops are used over a ring, the selected test channel will continue and follow the other express channels. When all channels terminate in a site and are regenerated for another section, establishing the route simply requires patching between the same channel ports on both sides of the site. Either the transmit or the receive port can be used.

Figure 4. In-band OTDR testing using 1470 nm. To establish an end-to-end test route, patching is done in intermediate sites between the 1470 nm ports of the CWDM MUX/DEMUX. Only one fiber, either transmit or receive, is presented here. For fiber-pair communication, two test ports of the test equipment have to be used.

It is interesting to note that the service channel (SC), or expansion channel (EC), at 1310 nm or any other wavelength, if available and not in use, can also be selected for in-band testing, as above, with less stringent requirements on the central OTDR wavelength given the wider channel bandwidth.

Testing through Broadband Tap Coupler (Monitor Port) of CWDM Optics


Some CWDM MUX/DEMUX/ADM devices are equipped with taps (1 to 10%) in order to monitor traffic. These monitor ports are broadband couplers that extract a portion of the traffic, usually for testing with protocol analyzers. When testing in the opposite direction to the traffic, the CWDM signal power, even though attenuated by the tap, will prevent a proper OTDR measurement unless a filter is selected to isolate the OTDR wavelength band from the active CWDM channels.

Figure 5. Broadband tap coupler as a monitor port can be used for OTDR testing but care must be taken to isolate the OTDR test wavelength from active channels.

The isolation of the filter placed between the tap/monitor port and the OTDR must be at least 50 to 60 dB when testing against traffic. It must be a band-pass type if the test wavelength is in-band; if the OTDR is out-of-band, a high-pass filter for 1650 nm can be used. For the in-band case, one or two cascaded optical add/drop multiplexers (OADMs) can be used as band-pass filters. If the fiber carries CWDM traffic at the same wavelength selected for testing, the OTDR measurement will fail. Limitations of this approach are:

- A major drop in optical power (tap ratios reported in the literature of up to 23 dB, or less than 1%), cutting a large amount of the available OTDR measurement range; a 10% (10 dB) monitor port ratio, however, could be usable;
- The test direction is almost always against traffic, at least in the case of fiber-pair transmission, which doubles the number of test units to install (one at each end); for single-fiber bidirectional transmission, this limitation does not apply;
- Additional care must be taken to ensure proper isolation of the OTDR from attenuated but unfiltered CWDM channels.

Be sure to read Part 3 of this article as it examines practical examples of in-band remote testing and proactive monitoring.


OSNR in Next-Gen Networks: Polarization-Resolved Optical Spectrum Analysis Allows Fast and Accurate In-Band OSNR Measurement

By: Francis Audet | Category: Challenges/Solutions, Optical/Fiber Testing | Posted date: 2010-10-03 | Views: 345

It has become widely accepted that, when applied to next-generation networks, traditional IEC-recommended optical signal-to-noise ratio (OSNR) measurement techniques fail to deliver the required accuracy. As these next-gen networks employ reconfigurable optical add/drop multiplexers (ROADMs) and/or multiple-bit/symbol advanced modulation formats (as is the case with most 40 Gbit/s and 100 Gbit/s transmissions), the use of traditional OSNR measurements leads to either over- or under-evaluation of the OSNR. When ROADMs were first introduced into the network topology, some commercial optical spectrum analyzers (OSAs) implemented the polarization-nulling method to measure OSNR within the filtered dense wavelength-division multiplexing (DWDM) channels. Unfortunately, the complexity of the networks has rendered polarization nulling unsatisfactory in many realistic cases. In this article, we describe an approach that relies not only on the relative differences in the polarization properties of the data-carrying signal and noise, but also leverages their respective spectral properties. What's more, we review both methods and discuss their limitations.

Polarization Nulling
The polarization-nulling approach is predicated on the normally realistic assumption that the signal under test is highly polarized and that the superposed noise is not. The measurement setup includes a polarization controller, a polarization splitter and a dual-channel scanning monochromator (i.e., an OSA), as illustrated in Figure 1 (note that the detection, signal-processing and display electronics have been omitted for the sake of simplicity).

Figure 1. Schematic of an OSA used for the polarization-nulling technique

The polarization-nulling method involves adjusting the (internal) polarization controller to extinguish, to the highest degree practicable, the signal in one of the two detection channels of the OSA. Such adjustment corresponds to one of the two outputs of the polarization splitter (acting as a polarizer) being aligned orthogonally with respect to the (polarized) signal. When this is achieved, the only light reaching the detector of that branch, at that particular wavelength, corresponds to half the noise power (since the noise is assumed to be unpolarized, and hence split equally between the two branches). This provides a measured noise level, and since the OSA also measures the total power (noise + signal), the OSNR at that wavelength can be calculated. For polarization nulling to provide acceptable measurements, the DWDM channels under test and the OSA itself must meet the following criteria:

- Polarization-mode dispersion (PMD) on the link should be very modest, preferably near zero. PMD partially depolarizes the signal within the resolution bandwidth (RBW) of the OSA, so complete nulling becomes impossible. In practice, there is some tolerance to PMD, especially if the OSNR to be measured is not extremely high (e.g., <20 dB) and the RBW of the OSA is very narrow. Nonetheless, PMD ultimately limits the maximum attainable OSNR values.
- The polarization optics in the OSA, notably the polarization splitter, must provide a very high extinction ratio, since the maximum measurable OSNR depends directly on it.
- The polarization controller in the OSA must be able to rapidly find, for each of several wavelengths within the DWDM channel, the state of polarization (SOP) corresponding to the maximum extinction ratio. For OSNR values greater than 20 dB, however, this can become very time-consuming (i.e., long acquisition times) and may not even be practical to measure accurately. This is exacerbated when one wishes to characterize multiple DWDM channels simultaneously, for instance 16 or more channels extending over much of the C band.
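As a concrete illustration, the OSNR arithmetic behind polarization nulling can be sketched in a few lines of Python. The power values below are hypothetical, and a real measurement would also normalize the noise to a reference bandwidth (typically 0.1 nm):

```python
import math

# Hypothetical powers at one wavelength, assuming nulling was successful
p_total = 1.0e-3    # total detected power: signal + noise (W)
p_nulled = 2.5e-6   # power in the nulled branch = half the unpolarized noise (W)

p_noise = 2 * p_nulled          # unpolarized noise splits equally between branches
p_signal = p_total - p_noise    # the remainder is the polarized data-carrying signal
osnr_db = 10 * math.log10(p_signal / p_noise)
print(round(osnr_db, 1))        # ~23.0 dB for these example values
```

Note how the whole method hinges on `p_nulled` containing noise only; any residual signal leaking into the nulled branch (imperfect extinction, PMD depolarization) inflates the noise estimate and caps the measurable OSNR.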

Polarization-Resolved Optical Spectrum OSNR (PROS OSNR)


Like polarization nulling, the polarization-resolved optical spectrum OSNR (PROS OSNR) approach exploits the different degrees of polarization of the signal under test (i.e., the data-carrying signal) and the superposed noise to measure the OSNR within a DWDM channel. Again, as with polarization nulling, a polarization-diverse OSA arrangement (similar to that illustrated in Figure 1) is required. However, unlike polarization nulling, which requires (near) complete extinction of the signal in one arm (e.g., 30 dB extinction is required for 20 dB OSNR sensitivity with 0.5 dB uncertainty), a few decibels of difference between the orthogonally-analyzed signals of the polarization-diverse OSA suffice to measure an OSNR in excess of 20 dB, and considerably higher OSNR values can be measured with resolved differences greater than 10 dB, which is still far from complete extinction in most cases (e.g., 20 dB OSNR sensitivity with the same 0.5 dB uncertainty can be achieved with less than 5 dB extinction). Hence, nulling of the signal, i.e., essentially complete extinction of the signal in one of the two OSA channels, is not necessary to obtain excellent OSNR sensitivity with the PROS OSNR approach.

It is frequently asked how the polarization-nulling and PROS OSNR methods, although based on a common principle (the differential polarization response of the signal versus the noise) and employing essentially the same measurement hardware, can exhibit such a substantial difference in OSNR measurement performance. An analogy may be useful: a mechanical lever permits a heavy load to be displaced by a much lesser force, at the expense of applying that force over a longer distance. Here, the differential polarization response leverages the noise-measurement capability at the expense of requiring measurements over a certain wavelength range. Put another way, provided the previously mentioned conditions are satisfied, polarization nulling can in principle measure the underlying noise level at a single wavelength without acquiring an optical spectrum, whereas PROS OSNR requires measurements over at least a few wavelengths within the DWDM channel of interest to obtain the noise level. In practice, however, the spectrum within the DWDM channel is usually measured with polarization nulling anyway, in order to calculate the OSNR from the noise level at the optimal position within the channel; hence, obtaining a spectrally-resolved polarization difference response entails no additional practical drawbacks compared to the polarization-nulling approach.
As mentioned above, instead of requiring one of the two (orthogonally-analyzed) signals detected in the OSA to be completely nulled, the polarization-resolved optical spectrum method only requires a few decibels of difference, which can be achieved much more rapidly than full nulling. The detected spectra in each OSA channel, containing different signal levels but similar overall noise contributions, are then subtracted from each other. The two noise contributions are equal and hence cancel, leaving a spectrum difference that is proportional to the polarized signal only (i.e., noise-free).
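The subtraction-and-rescaling step can be sketched numerically as follows. This is a minimal illustration with a synthetic Gaussian signal and flat noise; in practice the splitting fraction is not known a priori and must be determined via the proportionality-factor analysis described later in the article:

```python
import numpy as np

wl = np.linspace(-0.5, 0.5, 201)      # relative wavelength grid (nm), synthetic
signal = np.exp(-(wl / 0.1) ** 2)     # polarized data-carrying signal (arbitrary units)
noise = 0.01 * np.ones_like(wl)       # unpolarized ASE noise, flat across the channel

k = 0.7                               # fraction of signal power in branch A (no nulling needed)
branch_a = k * signal + noise / 2     # unpolarized noise splits equally between branches
branch_b = (1 - k) * signal + noise / 2

diff = branch_a - branch_b            # noise cancels, leaving (2k - 1) * signal
reconstructed = diff / (2 * k - 1)    # rescale with the proportionality factor
```

With only a 70/30 split (a few decibels of diversity rather than full extinction), the noise term cancels exactly in the difference, which is the key robustness advantage over nulling.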

Figure 2. The polarization-diversity approach splits the signal without requiring nulling on either branch

In this way, not only does the OSA acquire the contribution of the signal only (without noise), it also provides the signal shape down to, and even below, the noise level, which contributes to the performance of the polarization-resolved optical spectrum method.

Figure 3. Total signal, cross-polarized signals and reconstructed signal

Since full polarization extinction is not required, the PROS OSNR approach is much more robust than polarization nulling:

- It is far less susceptible to PMD-induced depolarization: as long as part of the signal is polarized, sufficient discrimination is achieved, whereas polarization nulling assumes a completely polarized signal within the RBW.
- It is not limited by the polarization extinction ratio, and hence much larger OSNR values can be measured accurately; equivalently, for a given OSNR sensitivity, the measurement can be performed much more rapidly.
- It is much faster to achieve a few decibels of diversity than complete nulling of a signal, especially for multiple-wavelength transmission (e.g., DWDM).

In order to fully exploit the PROS OSNR approach, it is important to properly generate the reconstructed signal corresponding to the polarization-resolved difference spectrum, which is proportional to, but not equal to, the (data-carrying) signal. To do so, a proportionality factor must be determined that gives the effective contribution of the signal. There are many ways to obtain this factor; a particularly robust approach, fully exploiting the benefits of PROS OSNR determination, is presented here in more detail. The spectrum is acquired at the narrowest OSA resolution bandwidth (RBW), and the subsequent signal-processing analysis is carried out with several different numerically-synthesized wider RBWs. Assuming that the signal spectrum and the noise spectrum have different shapes (ideally, the noise spectrum will not vary with wavelength over most of the data-carrying signal bandwidth), the relative contributions of the (data-carrying) signal and noise will vary as the RBW increases, and this variation will be most marked on the steep slopes of the signal spectrum. Analysis of this wavelength-dependent ratio allows the proportionality relation to be determined, completing the signal reconstruction.
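The numerically synthesized wider RBWs can be sketched as a convolution of the narrow-RBW spectrum with an integration window. A plain boxcar window is used here purely for illustration; a real OSA filter shape would differ:

```python
import numpy as np

def power_in_rbw(psd, rbw_bins):
    """Integrate a narrow-RBW power spectrum over a wider, synthesized RBW."""
    window = np.ones(rbw_bins)  # boxcar: sums the power falling inside the RBW
    return np.convolve(psd, window, mode="same")

# A flat noise floor of 1 unit/bin measures as 5 units inside a 5-bin RBW,
# while a narrow peak keeps the same total power whatever the RBW is --
# so the signal/noise contribution ratio changes as the RBW widens.
flat = np.ones(51)
print(power_in_rbw(flat, 5)[25])   # 5.0
```

This RBW dependence, strongest on the steep slopes of the signal spectrum, is what lets the analysis separate the signal and noise contributions during post-processing.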

Figure 4. Varying the effective bandwidth by data processing enables more or less power to be included in the analysis

Figure 4 illustrates a signal analyzed using the polarization-resolved optical spectrum OSNR (PROS OSNR) method. First, the channel light is detected using two different effective RBWs (BW1 and BW2). This by itself does not yield relevant signal or noise information, but since the signal shape was reconstructed (light blue curve in Figure 4) using the polarization-resolved optical spectrum method, the signal-contribution difference between the larger bandwidth (BW2) and the narrower bandwidth (BW1) is known. Therefore, one can also determine the noise within that bandwidth. Furthermore, since this step is undertaken during post-processing rather than acquisition, several bandwidths may be chosen and analyzed to optimize accuracy and repeatability. The PROS OSNR analysis helps overcome the practical limitations inherent to the polarization aspects of the technique and allows determination of the noise profile. Due to its flexibility, notably in finding the proportionality factor for reconstructing the (data-carrying) signal, the PROS OSNR technique yields noise-profile information even when the noise is spectrally carved more narrowly than the signal's spectral extent, as can be the case, for example, with 40 Gbit/s differential phase-shift keying (DPSK) signals passing through multiple cascaded ROADMs and amplifiers, or when the noise is not split equally between the two branches of the OSA (slightly polarized noise, e.g., from polarization-dependent loss (PDL) encountered by the ASE).

Comparing the Approaches


In cases of either high PMD or high OSNR, PROS OSNR significantly outperforms an approach that is polarization-based only, especially one based solely on polarization nulling. Figure 5 illustrates the performance of three different sets of measurements as a function of OSNR: one based on polarization nulling, the other two using PROS with different RBWs. The plotted uncertainty comprises both a systematic offset from the expected value (based on a carefully pre-calibrated test bed) and a random measurement uncertainty (standard deviation).

Figure 5. Total OSNR uncertainty vs. OSNR for different OSNR measurement techniques

As predicted, PROS remains highly accurate at high OSNR, with a much faster testing time. Moreover, for such a measurement approach, a more expensive OSA with very sharp resolution bandwidth offers no marked advantage for measuring OSNR. Figure 6 below shows the same three OSAs at a given OSNR level (25 dB): the presence of PMD induces an additional random measurement uncertainty, which is more significant with the polarization-nulling (PN) method. (To better highlight this additional random PMD-induced uncertainty, the large systematic offset caused by the susceptibility of the polarization-nulling method to PMD-induced depolarization is not plotted here.)

Figure 6. Standard deviations of PMD-induced random uncertainty for different methods (strong-coupled PMD emulator)

Hence, in addition to having a limited achievable extinction and thus impaired ability to measure large OSNR, the polarization nulling approach suffers from augmented random uncertainty as the PMD increases. Since most of the advanced modulation formats being employed in next-generation networks tolerate higher link PMD, it will be increasingly important that OSNR measurement on such networks be relatively tolerant to PMD.

Conclusion
Networks are increasing in complexity, both in terms of cascaded filtering (e.g., via ROADMs in mesh networks) and (OSNR-sensitive) multiple-bit/symbol modulation formats. For instance, intra-channel noise will increasingly be spectrally carved by filters, the PMD tolerance of networks will increase, and signal bandwidths will frequently be as large as the effective channel widths. Nevertheless, OSNR remains a critical network performance parameter. Purely polarization-based OSNR measurement techniques (e.g., polarization nulling) can perform well when networks and noise sources remain simple, but as demonstrated herein, the robustness and performance of PROS-based OSNR measurement render it well suited for advanced network architectures and modulation formats.


Why Test for Chromatic Dispersion and Polarization Mode Dispersion


By: Francis Audet | Category: LTE/3G/4G, Optical/Fiber Testing | Posted date: 2010-03-03 | Views: 321

Chromatic Dispersion
Chromatic dispersion (CD) has been a well-known phenomenon for almost a decade now, yet testing for it on OC-192/STM-64 is still not always done as a matter of course. This can lead to major problems, especially today, with optical distances on the core network reaching unprecedented lengths and with the birth of the mesh network topology in the metro environment. Now, with the current penetration of 40 Gbit/s, or OC-768/STM-256, CD is becoming an even bigger issue. Most system providers find original ways to deal with CD to some extent, but they cannot completely mitigate this challenge. For example, the native OC-192/STM-64 transmission format is non-return-to-zero (NRZ). If this were used at OC-768/STM-256, the CD tolerances would be such that compensation would be required after 6 to 7 km of G.652 singlemode optical fiber, which is not a viable solution. On the other hand, no standards exist to define how 40 Gbit/s should be implemented; hence, every system provider has its own recipe, at very variable cost but also with variable CD tolerance levels. The following chart shows some of these differences:

Modulation Format    Symbol Rate (Gbaud)    CD Tolerance for 2 dB OSNR Penalty (ps/nm)    Max. Average PMD for 1 dB OSNR Penalty (ps)
On-Off NRZ           43.018                 65                                            2.5
Duobinary            43.018                 100                                           2.5
DQPSK                21.509                 125                                           7

So even the most resilient OC-768/STM-256 (40 Gbit/s) format can only tolerate a fraction of what OC-192/STM-64 (10 Gbit/s) can accept, which is approximately 1100 ps/nm. Until now, dispersion has been compensated via simple dispersion-compensating modules (DCMs), deployed and installed by system vendors as a matter of course. Additional pre- or post-testing was not always required, and was very seldom performed. In principle, the DCMs should cancel the dispersion of the optical fiber spans, but there are several reasons why the compensation is not exact and why the residual dispersion must be taken into account:

- The change in dispersion with wavelength is often somewhat different for the transmission optical fiber than for the compensators. This means that if the dispersion of the optical fiber is exactly matched for a channel in the middle of the DWDM band, there will be a mismatch for channels at the edges, leading to a positive residual dispersion at one end and a negative residual dispersion at the other.
- While optical fiber dispersion is fairly tightly constrained, there is a variation in the actual dispersion per unit length at a single wavelength due to fiber variability. For example, for G.652 optical fiber, the dispersion coefficient may vary from 16.9 to 18.2 ps/nm/km at 1550 nm. Likewise, there is module-to-module variation in DCMs.
- DCMs are usually only available in a set of values with discrete steps between them. In contrast, installation constraints (e.g., hut positions and available rights of way) mean that the span length in a practical system is often quite different from the nominal value, leading to residual dispersion at all channel wavelengths.
- In a mesh metro network where there is more than one possible route between two particular endpoints, the optical channel may be switched to an alternative route. In this case, while the residual dispersion of the primary path through the network may be acceptable, the residual dispersion of any alternative path must also be checked, as it may be quite different.

Most of the installed optical fiber base, both in long-haul and metro networks, is standard singlemode optical fiber as defined by ITU-T G.652. The CD coefficient of such optical fiber can vary as follows:

Channel Wavelength (nm)    1531.12    1546.92    1562.23
Min. (ps/nm/km)            15.69      16.66      17.58
Max. (ps/nm/km)            17.10      18.03      18.91

In addition, the aforementioned DCMs also have intrinsic uncertainty and variability. Here is an example with an 80 km DCM:

Channel Wavelength (nm)    1531.12    1546.92    1562.23
80 km DCM Min. (ps/nm)     -1278      -1355      -1431
80 km DCM Max. (ps/nm)     -1215      -1288      -1361

As previously mentioned, DCMs are usually only available in a set of values with discrete steps between them, not per kilometer: one selects the unit closest to the total distance to be compensated. So for a long-haul route of 495 km of optical fiber, DCMs will be applied for 500 km. To this, one needs to add the dispersion of all network elements: erbium-doped fiber amplifiers (EDFAs), fixed or reconfigurable optical add-drop multiplexers (ROADMs), etc. Typical per-channel CD for such elements is in the 30 ps/nm range.

Making the calculation for 495 km of optical fiber, plus six DCMs of 80 km and one of 20 km (i.e., 6.25 units of 80 km), plus four network elements (EDFAs) at ±30 ps/nm each, we get:

1531.12 nm minimum residual dispersion = 15.69 × 495 + 6.25 × (−1278) + 4 × (−30) = −341 ps/nm
1531.12 nm maximum residual dispersion = 17.10 × 495 + 6.25 × (−1215) + 4 × 30 = 990.75 ps/nm
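The worked example above can be checked directly. The values are taken from the tables; the ±30 ps/nm per network element is applied with whichever sign makes each bound a worst case:

```python
fiber_km = 495
dcm_units = 6 + 20 / 80      # six 80 km DCMs plus one 20 km DCM = 6.25 units of 80 km
n_elements = 4               # EDFAs at roughly +/-30 ps/nm per channel each

# 1531.12 nm channel: min fiber coefficient paired with the most negative DCM
# value for the lower bound, max coefficient with the least negative for the upper
d_min = 15.69 * fiber_km + dcm_units * (-1278) + n_elements * (-30)
d_max = 17.10 * fiber_km + dcm_units * (-1215) + n_elements * 30

print(round(d_min), round(d_max, 2))   # -341 990.75
```

Redoing this with a few extra ROADMs or longer spans quickly pushes the residual past the roughly 1100 ps/nm acceptable for 10 Gbit/s, and far past the ~125 ps/nm typical for 40 Gbit/s.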

These results and those for the other wavelengths are summarized in the table below:

Maximum and Minimum End-to-End Residual Dispersion Values (Worst Case)

Channel Wavelength (nm)    1531.12    1546.92    1562.23
Min. (ps/nm)               -341       -342       -361
Max. (ps/nm)               991        994        974

These values are very close to the acceptable limit for 10 Gbit/s transmission. The calculation can easily be redone with slightly different parameters (a few extra ROADMs in a mesh network, longer links in a long-haul network, etc.) and exceed the acceptable limit for 10 Gbit/s. As for 40 Gbit/s, depending on the implementation, the typical acceptable CD is often less than 125 ps/nm, making this residual CD much too high for 40 Gbit/s transmission, even though each individual 80 km section may be within tolerance. The advent of longer optical links and the birth of mesh networks put poorly compensated residual CD very close to the danger zone for 10 Gbit/s transmission, and beyond the acceptable limit for 40 Gbit/s transmission. On the other hand, accurately testing for this parameter allows one to fine-tune each stage of compensation, as well as to perform (if required) end-to-end compensation adjustments to remove excess residual dispersion.

Polarization Mode Dispersion


Polarization mode dispersion (PMD) is a consequence of certain physical properties of optical fiber that result in distortion of optical pulses. These distortions disperse the optical pulse over time and reduce its peak power. While the outcome of PMD-induced distortion is similar to that of CD, PMD does not accumulate linearly but stochastically: its value cannot be predicted from one instant to the next, but its distribution (occurrence within a predictable range) can. Complicating factors that contribute to this unpredictability are as diverse as temperature changes on the optical fiber, and wind load, turbulence and ice loading changing the stresses on aerial optical fiber. One additional and significant constraint associated with characterizing and measuring PMD on optical fiber is the date of manufacture: optical fiber manufactured before approximately 1994 may not have been designed with PMD mitigation in mind. Because PMD is somewhat random (stochastic) and because many legacy networks contain fiber manufactured before 1994, it is important to understand its contribution to signal degradation and minimize its impact. In short, PMD is not stable or generally predictable from any known physical characteristic of the optical fiber at the time of manufacture. It must be measured in situ, i.e., after installation.

Many transmission-equipment manufacturers are working on new methods and techniques to minimize the impact of PMD on digital signals transmitted over a variety of optical fiber types. The dominant course of investigation centers on modulation methods. By carefully choosing modulation and demodulation methods, vendors hope to minimize the impact of PMD on the quality of their digital transmissions. While work continues on modulation formats and methods, these general thresholds must be applied to the decision-making process regarding the fitness of optical fiber links for specific modulation rates.

Bit Rate (Gbit/s)            2.5     10                 20      40
Max. PMD (ps)                40      10 (with no FEC)   5       2.5
PMD Coefficient (ps/√km)     <2.0    <0.5               <0.25   <0.125
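Because PMD accumulates with the square root of length, the coefficients in the table translate directly into link-length limits; a quick sketch:

```python
import math

def expected_pmd_ps(coeff_ps_per_sqrt_km, length_km):
    """Mean PMD expected from a link, given its PMD coefficient."""
    return coeff_ps_per_sqrt_km * math.sqrt(length_km)

# A fiber at the 10 Gbit/s coefficient limit (0.5 ps/sqrt(km)) reaches the
# 10 ps maximum PMD budget at 400 km:
print(expected_pmd_ps(0.5, 400))   # 10.0
```

Keep in mind this is only the statistical mean: since the instantaneous PMD varies stochastically, links are typically engineered with margin below the maximum values in the table.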

Decision-Making Matrix
Deciding whether or not one should characterize a fiber-optic link or route for dispersion (both CD and PMD) will quickly become a challenging exercise. The dilemma arises when there is significant pressure to cut corners to save time or money. However, one must consider the following: a DWDM route or ring populated with 10 Gbit/s data rates will likely have one or more 40 Gbit/s wavelengths added in the future. At that time, it will be nearly impossible to temporarily remove dozens of active wavelengths from service to characterize the optical fiber carrying them. This one factor alone should motivate all service providers to fully characterize their optical fiber links while they can. Otherwise, they may find themselves with limited possibilities in the very near future and unable to increase bandwidth on their optical fiber links. Given that several factors must be considered before choosing to test an optical link for dispersion, this matrix will help with the decision-making process.

Conditions                                                                                         <30 km         >30 km
Increasing bandwidth to 40 Gbit/s                                                                  Always         Always
Increasing bandwidth to 10 Gbit/s                                                                  Recommended    Always
Network contains pre-1994 fiber                                                                    Always         Always
DWDM network designed for 10 Gbit/s per wavelength but possibly moving to 40 Gbit/s in the future  Always         Always

It has been said that characterizing an optical fiber network is optional for short links. In fact, not knowing the other characteristics of the link, such as link loss and optical return loss, can and often will lead to the inability to fully utilize the amazing amount of bandwidth available in optical fiber. So, given the conditions that require testing dispersion and the fact that other physical parameters must be evaluated, it is prudent that service providers test and fully characterize all fiber links as early as possible in their normal lifecycle.

Characterizing Advanced Modulation Formats Using an Optical Sampling Approach

By: Franois Robitaille | Category: Challenges/Solutions | Posted date: 2011-02-03 | Views: 189

With bandwidth demand growing and fiber availability going down, carriers and equipment manufacturers have no choice but to turn to high-speed optical transmission systems to increase network capacity. However, traditional one-bit-per-symbol modulation schemes, such as OOK or DPSK, do not offer the spectral efficiency necessary to significantly increase the overall data-carrying capacity of optical fibers. Moreover, if scaled up to much higher transmission speeds, these traditional modulation formats become very sensitive to chromatic dispersion (CD) and polarization-mode dispersion (PMD), rendering them unusable on existing networks. The influential Optical Internetworking Forum (OIF) has recommended the fully coherent, four-bit-per-symbol DP-QPSK (dual-polarization quadrature phase-shift keying) modulation format for 100 Gbit/s system design, since it is both spectrally efficient and highly resilient to CD and PMD (when coupled with suitable signal-processing algorithms). In addition, the industry is already seriously looking at longer-term evolution toward modulation schemes such as 16-QAM and OFDM. These radical changes in modulation formats bring huge challenges to equipment manufacturers and, eventually, carriers, as these modulation schemes cannot be characterized using traditional test instruments and methods, which are sensitive only to the time-varying intensity of the light signal and not to its phase. Therefore, in addition to the well-known eye diagram analysis, which provides intensity and time information for OOK signals, new measurements must be performed to retrieve the phase information, with constellation diagram analysis at their core.
Different instruments can be used to recover the intensity and phase information critical to fully and properly characterize advanced modulation schemes like QPSK, 16-QAM or their dual-polarization versions: high-resolution optical spectrum analyzers (OSAs), modulation analyzers based on real-time electrical sampling oscilloscopes, as well as modulation analyzers based on optical sampling. Each approach has its advantages and disadvantages; therefore, when characterizing high-speed transmitters, it is important to understand the key elements affecting the constellation diagram recovery and the quality of the measurements that can be obtained from it.

Measurement Approaches
One approach to recovering the intensity and phase information of a signal is to employ a high-resolution optical spectrum analyzer. Such sophisticated instruments use non-linear effects combined with a local oscillator to recover both amplitude and phase information from the signal. The intensity and phase vs. frequency information is then converted to the time domain using a fast Fourier transform (FFT). However, this entails severe limitations, including the inability to capture long sequences of data or to recover the information from framed OTU-4 data.

The second (and probably most obvious) approach is to use a real-time electrical sampling oscilloscope, i.e., sampling light with electronics, to acquire data samples much as any coherent receiver would, by sampling at the symbol rate. This approach yields very accurate information on symbol positions in the constellation and can provide bit-error-rate information on samples of the data stream. However, the limited effective bandwidth of such oscilloscopes and the impedance mismatches typically found when interfacing with high-speed electronics make it impossible to obtain accurate transition information and perform distortion-free waveform recovery.

Finally, an alternative approach is to employ a modulation analyzer based on optical sampling, i.e., characterizing light with light. This approach uses short laser pulses as a stroboscope to open a sampling gate and generate samples with energy proportional to the power of the input signal. These samples are then detected by lower-speed electronics. The main advantages of this stroboscopic optical sampling approach are its very large effective bandwidth (arising from its high temporal resolution) and the absence of impedance mismatch, allowing distortion-free waveform recovery at signal-under-test symbol rates in excess of 60 Gbaud. Optical sampling oscilloscopes and modulation analyzers typically have an effective bandwidth 5 to 10 times larger than electrical sampling solutions.
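The equivalent-time (stroboscopic) principle behind such sampling can be illustrated with a toy calculation: sampling a repetitive waveform at an interval slightly longer than its period makes the samples slowly "walk" through one full cycle. The signal frequency and step size below are chosen arbitrarily for the sketch:

```python
import math

f_sig = 10e9                 # 10 GHz repetitive test signal (illustrative value)
period = 1 / f_sig
step = period / 100          # each sample slips 1/100 of a period further into the cycle

# sample n is taken at t = n * (period + step): a low sample rate nonetheless
# yields fine time resolution across the (repetitive) waveform
samples = [math.sin(2 * math.pi * f_sig * n * (period + step)) for n in range(100)]
# the 100 samples reconstruct exactly one period of the sine wave
```

This is why equivalent-time sampling only works on repetitive signals, and why, as discussed below, only a small statistical fraction of the transmitted symbols is ever observed.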

Figure 1. Electrical vs. optical sampling techniques

An electrically sampled waveform is affected by the instrument, whereas an optically sampled waveform gives the true shape.

Figure 2. Waveform recovery comparison between electrical and optical sampling

Measurements that Match the Objectives


When designing transmitters or transmission systems, the objective is always the same: transmit as much data as possible, as fast as possible, in the smallest channel bandwidth possible, while making sure it can be received error-free at the other end. Several parameters affect the performance of transmitters and systems, which is why performing the right optimization and verification is critical. Until now, for transmission based on more conventional modulation formats like NRZ (non-return-to-zero) or DPSK (differential phase-shift keying), sampling scopes providing eye diagram analysis have been used to characterize and optimize transmitters. Measurements like eye opening, signal-to-noise ratio (SNR), skew, extinction ratio, rise/fall time and jitter are obtained using such instruments.

Advanced modulation formats with coherent transmission require similar analysis, yet since the information is carried not only in the intensity of the signal but also in its phase, a constellation analyzer is the instrument required to test transmitters and systems. This calls for new measurements, such as I-Q imbalance and error vector magnitude (EVM), which refers to the deviation of the measured constellation from the ideal constellation in phase or intensity, to gather key information on the tuning of the modulator or pulse carver, for example. If dual-polarization transmission is used, variations between the two polarizations also need to be identified and measured, since they may indicate problems in the transmitter design or balance.

At the system level, one of the parameters generally used to qualify performance, and considered the ultimate indicator that the data sent through the network is received without errors, is the bit error rate (BER). When transmission is good and BER is low, there is no need for additional troubleshooting. A high BER value, however, triggers key questions: What if the BER increases to a point where the system cannot completely correct errors? What information can be gleaned from the BER apart from revealing that the system is not working? This is when it is critical to rely on a test instrument able to provide distortion-free waveform recovery and precise transition information, allowing engineers to identify the causes of transmission problems, such as crosstalk between polarizations, dispersion, signal-to-noise ratio, etc.
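As an illustration of the EVM figure mentioned above, here is a minimal sketch for a QPSK constellation. The hard-decision mapping and normalization used here are simplifying assumptions; standards define EVM normalization in several ways:

```python
import math

# Ideal QPSK constellation points, normalized to unit power
IDEAL = [complex(i, q) / math.sqrt(2) for i in (1, -1) for q in (1, -1)]

def evm_rms(measured, ideal=IDEAL):
    """RMS error vector magnitude: error power relative to reference power."""
    err_power = 0.0
    ref_power = 0.0
    for m in measured:
        nearest = min(ideal, key=lambda p: abs(m - p))  # hard-decision mapping
        err_power += abs(m - nearest) ** 2
        ref_power += abs(nearest) ** 2
    return math.sqrt(err_power / ref_power)

# A symbol offset by 0.1 from its ideal point gives 10% EVM
print(round(evm_rms([IDEAL[0] + 0.1]), 3))   # 0.1
```

The same error-vector decomposition yields the separate phase-error and magnitude-error figures used to diagnose modulator or pulse-carver tuning.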

Figure 3. Constellation diagram showing minor chromatic dispersion impact

Figure 4. Impact of poor SNR on constellation diagram

Bit-Error-Rate Measurement Using a Modulation Analyzer


As mentioned above, BER is an important test to carry out on any transmission system. When performing this test using a modulation analyzer, whether based on electrical real-time sampling or optical sampling, a very important limitation must be taken into consideration.

Over sufficiently long periods (i.e., more than a few seconds), only a small percentage of the symbols are actually sampled, either in short duty-cycle bursts, as is generally the case with electrical real-time sampling, or in an undersampling manner, as with equivalent-time optical sampling. In practice, data transfer and processing take up most of each sampling cycle, i.e., between 95% and 99% of it, depending on the sampling method and the processing power of the test instrument.

Figure 5. Sampling duty cycle for electrical real-time and optical equivalent-time sampling.

The consequence of this short duty cycle is simple: the BER estimated using a modulation analyzer will be valid only if errors have a normal statistical distribution. Neither sampling approach can measure glitches that might generate errors; capturing such short error bursts requires a true real-time sampling device (such as a receiver). The other relevant question regarding a BER value obtained with a modulation analyzer pertains to the hardware. Since the modulation analyzer generally employs a different receiver front-end than the actual receiver used in the customer's system, we need to determine whether the BER measured with it is representative of the true BER that can be achieved with the receiver, a question whose answer depends on both the quality of the instrument and the quality of the receiver. From this perspective, the BER estimated using a modulation analyzer can only be used as a general indication of transmission quality, and certainly not as an absolute confirmation that the system is optimized and will provide the best possible performance.
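To put the duty-cycle limitation in perspective, a back-of-the-envelope estimate of the fraction of symbols a sampling analyzer actually observes. All figures below are illustrative assumptions, not specifications of any instrument:

```python
symbol_rate = 28e9       # 28 Gbaud signal under test (assumed)
sample_rate = 100e6      # analyzer sample rate while its gate is open (assumed)
duty_cycle = 0.02        # 2% acquiring, 98% transfer/processing (per the 95-99% figure)

symbols_seen_per_s = sample_rate * duty_cycle   # 2e6 symbols/s actually observed
fraction = symbols_seen_per_s / symbol_rate     # tiny fraction of transmitted symbols
print(f"{fraction:.1e}")                        # 7.1e-05
```

With only on the order of one symbol in ten thousand observed, a short error burst between acquisition windows simply never appears in the estimate, which is why burst-type errors require a true real-time receiver.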

Advantages of Optical Equivalent-Time Sampling


Modulation analyzers based on optical sampling, like EXFO's PSO-200, offer many key features for the characterization and optimization of 40/100 Gbit/s (and beyond) Ethernet transmitters and systems:

- High temporal resolution, in the range of a few picoseconds
- Extremely low intrinsic timing jitter and phase noise
- No impedance mismatch created by high-speed electronic components, as the sampling is performed at the optical level and only low-speed electronics are required
- Enough effective bandwidth to characterize signals up to 100 Gbaud

Not only can such instruments provide information and measurements similar to real-time electrical sampling solutions, but they also achieve very precise and accurate constellation diagrams, including transition information and distortion-free waveform recovery for today's and tomorrow's modulation formats and transmission speeds. In essence, the waveform captured this way is the true optical waveform, unimpaired by the testing instrument. Optical modulation analyzers can therefore be viewed as the "golden" testing instrument that can serve as a baseline and common standard in the industry.

Why Test for Chromatic Dispersion and Polarization Mode Dispersion


By: Francis Audet | Category: LTE/3G/4G, Optical/Fiber Testing | Posted date: 2010-03-03

Chromatic Dispersion
Chromatic dispersion (CD) has been a well-known phenomenon for almost a decade now, yet testing for it on OC-192/STM-64 is still not systematically done. This can lead to major problems, especially today, as optical distances in the core network reach unprecedented lengths and as mesh topologies emerge in the metro environment. With the current penetration of 40 Gbit/s, or OC-768/STM-256, CD is becoming an even bigger issue. Most system providers find original ways to deal with CD to some extent, but they cannot completely eliminate the challenge. For example, the native OC-192/STM-64 transmission format is non-return-to-zero (NRZ). If NRZ were used at OC-768/STM-256, the CD tolerances would be such that compensation would be required after only 6 to 7 km of G.652 singlemode optical fiber, which is not a viable solution. On the other hand, no standards define how 40 Gbit/s should be implemented; hence, every system provider has its own recipe, at very variable cost but also with variable CD tolerance levels. The following chart shows some of these differences:

Modulation Format   Symbol Rate (Gbaud)   CD Tolerance for 2 dB OSNR Penalty (ps/nm)   Max. Average PMD for 1 dB OSNR Penalty (ps)
On-Off NRZ          43.018                65                                           2.5
Duobinary           43.018                100                                          2.5
DQPSK               21.509                125                                          7

So even the most resistant OC-768/STM-256 (40 Gbit/s) implementation can only tolerate a fraction of what OC-192/STM-64 (10 Gbit/s) can accept, which is approximately 1100 ps/nm. Until now, dispersion was compensated via simple dispersion-compensating modules (DCMs), routinely deployed and installed by system vendors, and additional pre- or post-deployment testing was not always required and was very seldom performed. In principle, the DCMs should cancel the dispersion of the optical fiber spans, but there are several reasons why this cancellation is not exact and why the residual dispersion must be taken into account:

- The change in dispersion with wavelength is often somewhat different for the transmission optical fiber than for the compensators. This means that if the dispersion of the optical fiber is exactly matched for a channel in the middle of the DWDM band, there will be a mismatch for channels on the edges, leading to a positive residual dispersion at one end and a negative residual dispersion at the other.
- While the optical fiber dispersion is fairly tightly constrained, the actual dispersion per unit length at a single wavelength varies due to fiber variability. For example, for G.652 optical fiber, the dispersion coefficient may vary from 16.9 to 18.2 ps/nm/km at 1550 nm. Likewise, there is module-to-module variation among DCMs.
- DCMs are usually only available in a set of values with discrete steps between them. In contrast, installation constraints (e.g., hut positions and available rights of way) mean that the span length in a practical system is often quite different from the nominal value, leading to residual dispersion at all channel wavelengths.
- In a mesh metro network where there is more than one possible route between two particular endpoints, the optical channel may be switched to an alternative route. In this case, while the residual dispersion of the primary path through the network may be acceptable, the residual dispersion of any alternative path must also be checked, as it may be quite different.

Most of the installed optical fiber base, both in long-haul and metro networks, is standard singlemode optical fiber as defined by ITU-T G.652. The CD of such fiber varies with both wavelength and fiber batch; at the 1531.12 nm channel, for example, the coefficient ranges from roughly 15.69 to 17.10 ps/nm/km (the values used in the calculation below), with comparable spreads at 1546.92 nm and 1562.23 nm.

In addition, the aforementioned DCMs also have intrinsic uncertainty and variability. Here is an example with an 80 km DCM:

Channel Wavelength (nm)   80 km DCM Min. (ps/nm)   Max. (ps/nm)
1531.12                   -1278                    -1215
1546.92                   -1355                    -1288
1562.23                   -1431                    -1361

As previously mentioned, DCMs are usually only available in a set of values with discrete steps between them, not per kilometer; one selects the unit closest to the total distance to compensate for. So for a long-haul route of 495 km of optical fiber, DCMs will be applied for 500 km. To this, one needs to add the dispersion of all network elements, such as erbium-doped fiber amplifiers (EDFAs) and fixed or reconfigurable optical add-drop multiplexers (ROADMs). Typical per-channel CD for such elements is in the ±30 ps/nm range. Making the calculation for 495 km of optical fiber, plus six DCMs of 80 km and one of 20 km (6.25 units of 80 km in total), plus four network elements (EDFAs), we get:

1531.12 nm minimum residual dispersion = 15.69 × 495 + 6.25 × (−1278) − 4 × 30 = −341 ps/nm

1531.12 nm maximum residual dispersion = 17.10 × 495 + 6.25 × (−1215) + 4 × 30 = 990.75 ps/nm
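The worst-case arithmetic above can be reproduced in a few lines. All numbers (fiber length, DCM values, element CD) come from the example in the text; the function name and structure are just an illustrative sketch.

```python
# Worst-case end-to-end residual CD for the 1531.12 nm channel: 495 km of
# G.652 fiber, six 80 km DCMs plus one 20 km DCM (6.25 "80 km equivalents"),
# and four network elements contributing up to +/-30 ps/nm each.

FIBER_KM = 495
DCM_UNITS_80KM = 6 + 20 / 80     # six 80 km modules + one 20 km module
ELEMENTS = 4
ELEMENT_CD_PS_NM = 30            # per-element magnitude; sign unknown

def residual(fiber_coeff_ps_nm_km, dcm_value_ps_nm, element_sign):
    """Residual CD in ps/nm for one fiber/DCM/element combination."""
    return (fiber_coeff_ps_nm_km * FIBER_KM
            + DCM_UNITS_80KM * dcm_value_ps_nm
            + element_sign * ELEMENTS * ELEMENT_CD_PS_NM)

# Minimum: slowest fiber (15.69 ps/nm/km), strongest DCM (-1278 ps/nm),
# elements subtracting. Maximum: fastest fiber, weakest DCM, elements adding.
cd_min = residual(15.69, -1278, -1)   # ~ -341 ps/nm
cd_max = residual(17.10, -1215, +1)   # 990.75 ps/nm
print(f"residual CD at 1531.12 nm: {cd_min:.2f} to {cd_max:.2f} ps/nm")
```

Note that the −341 ps/nm quoted in the text is the rounded value of −340.95 ps/nm.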

These results, and those for the other wavelengths, are summarized in the table below:

Maximum and Minimum End-to-End Residual Dispersion Values (Worst Case)

Channel Wavelength (nm)   Min. (ps/nm)   Max. (ps/nm)
1531.12                   -341           991
1546.92                   -342           994
1562.23                   -361           974

These maximum values are very close to the acceptable limit for 10 Gbit/s transmission (approximately 1100 ps/nm). The calculation can easily be redone with slightly different parameters (a few extra ROADMs in a mesh network, longer links in a long-haul network, etc.) and exceed the acceptable limit for 10 Gbit/s. As for 40 Gbit/s, depending on the implementation, the typical acceptable CD is often less than 125 ps/nm, making this residual CD much too high for 40 Gbit/s transmission, even though each individual 80 km section may be within tolerance. The advent of longer optical links and the birth of mesh networks put poorly compensated residual CD very close to the danger zone for 10 Gbit/s transmission, and beyond the acceptable limit for 40 Gbit/s transmission. On the other hand, accurately testing for this parameter allows one to fine-tune each stage of compensation, as well as perform (if required) end-to-end compensation adjustments to remove excess residual dispersion.

Polarization Mode Dispersion


Polarization mode dispersion (PMD) is a consequence of certain physical properties of optical fiber that distort optical pulses. These distortions spread the optical pulse over time and reduce its peak power. While the outcome of PMD-induced distortion is similar to CD, it does not accumulate linearly, but stochastically. This means that its value cannot be predicted from one instant to the next, but its distribution (occurrence within a predictable range) can. Complicating factors that contribute to this unpredictability are as diverse as temperature change on optical fiber, wind load and turbulence on aerial optical fiber, and ice loading changing stresses on aerial optical fiber.

One additional and significant constraint associated with characterizing and measuring PMD on optical fiber is the date of manufacture: optical fiber manufactured before approximately 1994 may not have been designed with PMD mitigation in mind. Because PMD is somewhat random (stochastic), and because many legacy networks contain fiber manufactured before 1994, it is important to understand its contribution to signal degradation and minimize its impact. In short, PMD is not stable or generally predictable based on any known physical characteristic of the optical fiber at the time of manufacture; it must be measured in situ, i.e., after installation.

Many transmission-equipment manufacturers are working on new methods and techniques to minimize the impact of PMD on digital signals transmitted over a variety of optical fiber types. The dominant course of investigation centers on modulation methods: by carefully choosing modulation and demodulation methods, vendors hope to minimize the impact of PMD on the quality of their digital transmissions.
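The statement that PMD's instantaneous value is unpredictable while its distribution is not can be illustrated with a small simulation. In standard PMD theory (an assumption brought in here, not something this article states), the instantaneous differential group delay (DGD) of a long fiber follows a Maxwellian distribution whose mean equals the fiber's measured PMD; the 2.5 ps target below is an arbitrary illustrative value.

```python
import math
import random

# Sketch of "value unpredictable, distribution predictable": instantaneous
# DGD on a long fiber is modeled as Maxwellian, i.e., the magnitude of a
# 3-D Gaussian vector. Choosing sigma = mean * sqrt(pi/8) makes the
# distribution's mean equal the target PMD value.
random.seed(0)
mean_pmd_ps = 2.5                              # illustrative PMD value
sigma = mean_pmd_ps * math.sqrt(math.pi / 8)

def sample_dgd():
    """One instantaneous DGD draw, in ps (Maxwellian-distributed)."""
    return math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3)))

samples = [sample_dgd() for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)

# Individual draws scatter widely, but the sample mean converges to 2.5 ps.
print(f"first draws:     {samples[0]:.2f}, {samples[1]:.2f}, {samples[2]:.2f} ps")
print(f"sample mean DGD: {sample_mean:.3f} ps")
```

This is exactly why PMD must be characterized statistically and measured in situ: any single DGD reading says little, but the distribution is stable and measurable.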

While work continues on modulation formats and methods, these general thresholds must be applied to the decision-making process regarding the fitness of optical fiber links for specific modulation rates.

Bit Rate (Gbit/s)   Max. PMD (ps)      PMD Coefficient (ps/√km)
2.5                 40                 <2.0
10                  10 (with no FEC)   <0.5
20                  5                  <0.25
40                  2.5                <0.125
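The "Max. PMD" column follows the widely used rule of thumb that total PMD should stay below roughly 10% of the bit period. The 10% figure is a common industry guideline, not something the table itself states, but it reproduces every row:

```python
# Rule-of-thumb check (assumed guideline: max PMD ~ 10% of the bit period).
# For an R Gbit/s NRZ signal, one bit period is 1000/R picoseconds.

def max_pmd_ps(bit_rate_gbps, fraction=0.10):
    """PMD budget in ps for a given bit rate, under the 10% guideline."""
    bit_period_ps = 1000.0 / bit_rate_gbps
    return fraction * bit_period_ps

# Reproduces the table row by row: 40, 10, 5 and 2.5 ps.
for rate in (2.5, 10, 20, 40):
    print(f"{rate:>5} Gbit/s -> max PMD {max_pmd_ps(rate):g} ps")
```

The same arithmetic explains why a fiber that is fine at 2.5 Gbit/s (40 ps budget) can be unusable at 40 Gbit/s (2.5 ps budget) without any physical change to the plant.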

Decision-Making Matrix
Deciding whether or not to characterize a fiber-optic link or route for dispersion (both CD and PMD) can quickly become a challenging exercise. The dilemma arises when there is significant pressure to cut corners to save time or money. However, one must consider the following: a DWDM route or ring populated with 10 Gbit/s data rates will likely have one or more 40 Gbit/s wavelengths added in the future. At that point, it will be nearly impossible to temporarily remove dozens of active wavelengths from service to characterize the optical fiber carrying them. This factor alone should motivate all service providers to fully characterize their optical fiber links while they can; otherwise, they may soon find themselves with limited options, unable to increase bandwidth on their optical fiber links. Given that several factors must be considered before choosing to test an optical link for dispersion, the following matrix will help with the decision-making process.

Condition                                                                                                       <30 km         >30 km
Increasing bandwidth to 40 Gbit/s                                                                               Always         Always
Increasing bandwidth to 10 Gbit/s                                                                               Recommended    Always
Network contains pre-1994 fiber                                                                                 Always         Always
DWDM network designed for 10 Gbit/s per wavelength but may move to 40 Gbit/s on one or more wavelengths later   Always         Always
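For planning tools, the matrix is small enough to encode directly as a lookup. The condition keys and the 30 km split come straight from the table above; the function name and key spellings are illustrative choices.

```python
# The dispersion-testing decision matrix, encoded as a lookup table.
# "short" means a link under 30 km, "long" means 30 km or more.

MATRIX = {
    "upgrade_to_40g":  {"short": "Always",      "long": "Always"},
    "upgrade_to_10g":  {"short": "Recommended", "long": "Always"},
    "pre_1994_fiber":  {"short": "Always",      "long": "Always"},
    "dwdm_future_40g": {"short": "Always",      "long": "Always"},
}

def should_test_dispersion(condition, link_km):
    """Return the matrix recommendation for one condition and link length."""
    span = "short" if link_km < 30 else "long"
    return MATRIX[condition][span]

print(should_test_dispersion("upgrade_to_10g", 25))   # "Recommended"
print(should_test_dispersion("upgrade_to_10g", 80))   # "Always"
```

Notice that "Recommended" appears in exactly one cell: short links being upgraded only to 10 Gbit/s. Every other combination is "Always", which is the table's real message.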

It has been said that characterizing an optical fiber network is optional for short links. In reality, not knowing the link's other characteristics, such as link loss and optical return loss, can and often will prevent full use of the enormous bandwidth available in optical fiber. So, given the conditions that require dispersion testing and the fact that other physical parameters must be evaluated as well, it is prudent for service providers to test and fully characterize all fiber links as early as possible in their normal lifecycle.
