G-Series
Product Description
Document Version: 21
October 2009
Notice
This document contains information that is proprietary to Ceragon Networks Ltd.
No part of this publication may be reproduced, modified, or distributed without prior written authorization of
Ceragon Networks Ltd.
This document is provided as is, without warranty of any kind.
Registered Trademarks
Ceragon Networks®, FibeAir®, and CeraView® are registered trademarks of Ceragon Networks Ltd.
Other names mentioned in this publication are owned by their respective holders.
Trademarks
CeraMap™, ConfigAir™, PolyView™, EncryptAir™, CeraMon™, EtherAir™, and MicroWave Fiber™ are
trademarks of Ceragon Networks Ltd.
Other names mentioned in this publication are owned by their respective holders.
Statement of Conditions
The information contained in this document is subject to change without notice.
Ceragon Networks Ltd. shall not be liable for errors contained herein or for incidental or consequential
damage in connection with the furnishing, performance, or use of this document or equipment supplied with
it.
Information to User
Any changes or modifications of equipment not expressly approved by the manufacturer could void the
user’s authority to operate the equipment and the warranty for such equipment.
Copyright © 2009 by Ceragon Networks Ltd. All rights reserved.
Contents
Features
Advantages
Applications
Specifications
IP-10 follows in the tradition of Ceragon's Native2, which allows your network to benefit from both native
TDM and native Ethernet using the same radio. Flexible bandwidth sharing between the TDM and Ethernet
traffic ensures optimal throughput for all your media transfer needs.
With the Metro Ethernet networking trend growing, IP-10 is poised to fill the gap and deliver high
capacity IP communication quickly, easily, and reliably.
[Figure: Typical IP-10 application - n x T1/E1 and Ethernet (ETH) traffic carried toward the Metro Ethernet Network (MEN), with a control channel]
IP-10 features impressive market-leading throughput capability together with advanced networking
functionality.
Some of the quick points that place IP-10 at the top of the wireless IP offerings:
Supports throughput from 10 to 500 Mbps per radio carrier (QPSK to 256 QAM)
In addition, unique Adaptive Coding & Modulation (ACM) ensures non-stop, dependable capacity
delivery for your network.
Radio capacity:
All licensed bands: L6, U6, 7, 8, 10, 11, 13, 15, 18, 23, 26, 28, 32, 38 GHz
Highest scalability: From 10 Mbps to 500 Mbps, using the same hardware, including the same
ODU/RFU!
TDM Voice Transmission with Dynamic Allocation - With the n x E1/T1 option, only enabled E1/T1
ports are allocated with capacity. The remaining capacity is dynamically allocated to the Ethernet ports
to ensure maximum Ethernet capacity.
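The dynamic allocation described above is simple arithmetic: capacity is reserved only for the E1/T1 ports actually enabled, and the remainder goes to Ethernet. A minimal sketch in Python (illustrative only; the function and field names are assumptions, not Ceragon's implementation):

```python
# Hypothetical sketch (not Ceragon's implementation) of dividing a fixed
# radio capacity between enabled E1 ports and Ethernet traffic.

E1_RATE_MBPS = 2.048  # standard E1 line rate

def allocate(radio_capacity_mbps, enabled_e1_ports):
    """Reserve capacity only for enabled E1 ports; the rest goes to Ethernet."""
    tdm = enabled_e1_ports * E1_RATE_MBPS
    if tdm > radio_capacity_mbps:
        raise ValueError("not enough radio capacity for the enabled E1 ports")
    return {"tdm_mbps": tdm, "ethernet_mbps": radio_capacity_mbps - tdm}

# Example: a 170 Mbps carrier with 4 of 16 E1 ports enabled.
print(allocate(170, 4))  # Ethernet gets 170 - 4 * 2.048 Mbps
```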
[Figure: Capacity (Mbps) vs. Channel Bandwidth (MHz) - "Highest Capacity at any Channel Bandwidth". Capacities range up to 500+ Mbps over channel bandwidths of 7, 10, 14, 20, 28/30, 40, 50, and 56 MHz]
At the heart of the IP-10 solution is Ceragon's market-leading Native2 microwave technology.
With this technology, the microwave carrier supports native IP/Ethernet traffic together with optional native
PDH. Neither traffic type is mapped over the other, while both dynamically share the same overall
bandwidth.
This unique approach allows you to plan and build optimal all-IP or hybrid TDM-IP backhaul networks,
making IP-10 ideal for any RAN (Radio Access Network) evolution path selected by the wireless provider
(including greenfield 3.5G/4G all-IP installations).
In addition, Native2 ensures:
Very low-overhead mapping of both Ethernet and TDM traffic to the microwave radio frame.
ACM maintains the highest possible modulation, from QPSK to 256 QAM, as environmental
conditions change.
8 modulation/coding work points (~3 dB system gain for each point change)
An integrated QoS mechanism enables intelligent congestion management to ensure that your high
priority traffic is not affected during link fading.
Each T1/E1 is assigned a priority to enable differentiated T1/E1 dropping during severe link degradation.
Smart Pipe - In this mode, Ethernet switching functionality is disabled and only a single Ethernet interface
is enabled for user traffic. The unit effectively operates as a point-to-point Ethernet microwave radio.
Traffic shaping
64 45%
96 29%
128 22%
256 11%
512 5%
Capacity/ACM statistics:
Utilization statistics:
- # of seconds in an interval, during which radio link utilization was above the user-configured
threshold
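The utilization statistic above is a simple threshold counter over per-second samples. A hypothetical sketch (function name and sample format are assumptions):

```python
# Hypothetical sketch: counting the seconds in a 15-minute interval during
# which radio link utilization exceeded the user-configured threshold.

def seconds_over_threshold(samples, threshold):
    """samples: one utilization reading (0.0-1.0) per second of the interval."""
    return sum(1 for u in samples if u > threshold)

interval = [0.2] * 800 + [0.95] * 100  # 900 one-second samples (15 minutes)
print(seconds_over_threshold(interval, 0.9))  # 100
```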
In-Band Management
IP-10 can optionally be managed in-band, via its radio and Ethernet interfaces. This method of management
eliminates the need for a dedicated interface and network.
Each T1/E1 trail carries a native TDM clock, which is compliant with strict cellular application requirements
(2G/3G), and is suitable as a base station timing source.
This eliminates the need for timing-over-packet techniques for base station synchronization.
Experience Counts
IP-10 was designed with continuity in mind. It is based on Ceragon’s well-established and field-proven
IP-MAX Ethernet microwave technology.
With Ceragon's large install base, years of experience in high-capacity IP radios, and seamless integration
with all standard IP equipment vendors, IP-10 is poised to be an IP networking standard-bearer.
Native2
With Native2, you get optimal all-IP or hybrid TDM-IP backhaul networking - ideal for any RAN evolution
path!
Mobile backhaul
Cellular Networks
The FibeAir IP-10 family supports both Ethernet and TDM for cellular backhaul network migration to IP, within
the same compact footprint. The system is suitable for all migration scenarios where carrier-grade Ethernet
and legacy TDM services are required simultaneously.
WiMAX Networks
Enabling connectivity between WiMAX base stations and facilitating the expansion and reach of emerging
WiMAX networks, FibeAir IP-10 provides a robust and cost-efficient solution with advanced native Ethernet
capabilities.
FibeAir IP-10 family offers cost-effective, high-capacity connectivity for carriers in cellular, WiMAX and
fixed markets. The FibeAir IP-10 platform supports multi-service and converged networking requirements
for both legacy and the latest data-rich applications and services.
Ceragon’s FibeAir IP-10 delivers integrated high-speed data, video, and voice traffic in an optimal and
cost-effective manner. Operators can leverage FibeAir IP-10 to build a converged network infrastructure
based on high capacity microwave to support multiple types of service.
FibeAir IP-10 is fully compliant with the MEF-9 and MEF-14 standards for all service types (EPL, EVPL, and
E-LAN), making it the ideal platform for operators looking to provide high-capacity Carrier Ethernet services
that meet customers' demand for coverage and stringent SLAs.
Dimensions
Main Interfaces:
5 x 10/100Base-T
16 x T1/E1 (optional)
Additional Interfaces:
o 16 x E1
o 16 x T1
o 1 x STM-1/OC-3
16 x E1/T1 T-Card
The T-cards are field-upgradable, and add a new dimension to the FibeAir IP-10 migration flexibility.
Terminal console
PROT: Ethernet protection control interface (for 1+1 HSB mode support)
TDM options:
o Ethernet only (no TDM)
o Ethernet + 16 x E1 + T-Card Slot
o Ethernet + 16 x T1 + T-Card Slot
With or without AUX package (EOW, User channel)
XPIC support
Sync unit
Ceragon Networks employs full-range dynamic ACM in its new line of high-capacity wireless backhaul
products - FibeAir IP-10. To ensure high transmission quality, Ceragon solutions implement
hitless/errorless ACM that copes with 90 dB per second fading. A quality of service awareness mechanism
ensures that high priority voice and data packets are never “dropped”, thus maintaining even the most
stringent service level agreements (SLAs).
The hitless/errorless functionality of Ceragon’s ACM has another major advantage in that it ensures that
TCP/IP sessions do not time-out. Lab simulations have shown that when short fades occur (for example if a
system has to terminate the signal for a short time to switch between modulations) they may lead to timeout
of the TCP/IP sessions – even when the interruption is only 50 milliseconds. TCP/IP timeouts are followed
by a drastic throughput decrease over the time it takes for the TCP sessions to recover. This may take as long
as several seconds. With a hitless/errorless ACM implementation this problem can be avoided.
So how does it really work? Let's assume a system configured for 128 QAM with ~170 Mbps capacity over a
28 MHz channel. When the receive signal Bit Error Ratio (BER) level arrives at a predetermined threshold,
the system will preemptively switch to 64 QAM and the throughput will be stepped down to ~140 Mbps.
This is an errorless, virtually instantaneous switch. The system will then run at 64 QAM until the fading
condition either intensifies, or disappears. If the fade intensifies, another switch will take the system down to
32 QAM. If, on the other hand, the weather condition improves, the modulation will be switched back up to the
next higher step (e.g. 128 QAM), and so on, step by step. The switching continues automatically and as
quickly as needed, and during extreme conditions can reach all the way down to QPSK.
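The step-by-step behavior described above can be sketched as a small state machine. The BER thresholds below are illustrative placeholders, not Ceragon's actual switching thresholds:

```python
# Hypothetical sketch of hitless ACM stepping between modulation work points.
# Thresholds are illustrative; capacities follow the 28 MHz channel example.

WORK_POINTS = ["QPSK", "16 QAM", "32 QAM", "64 QAM", "128 QAM", "256 QAM"]

class AcmEngine:
    def __init__(self, index):
        self.index = index  # current work point

    def on_ber(self, ber, degrade_thresh=1e-6, improve_thresh=1e-9):
        # Step down pre-emptively when BER reaches the degrade threshold;
        # step back up one point at a time when conditions improve.
        if ber >= degrade_thresh and self.index > 0:
            self.index -= 1
        elif ber <= improve_thresh and self.index < len(WORK_POINTS) - 1:
            self.index += 1
        return WORK_POINTS[self.index]

acm = AcmEngine(WORK_POINTS.index("128 QAM"))
print(acm.on_ber(1e-5))   # fade begins: steps down to 64 QAM
print(acm.on_ber(1e-10))  # fade clears: steps back up to 128 QAM
```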
[Figure: ACM work points vs. Rx level and unavailability - 256 QAM (99.9%), 128 QAM (99.95%), 64 QAM (99.99%), 32 QAM (99.995%), 16 QAM (99.999%), QPSK; capacity per modulation shown in Mbps at a 28 MHz channel]
The system examines the incoming traffic and assigns the desired priority according to the marking of the
packets (based on the user port/L2/L3 marking in the packet). In case of congestion in the ingress port, low
priority packets will be discarded first.
The user has the following classification options:
Source Port
VLAN 802.1p
VLAN ID
IPv4 TOS/DSCP
IPv6 Traffic Class
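A toy classifier illustrating these options might look as follows. The mapping of marking values to the four queues is an assumption for illustration, not the product's actual mapping:

```python
# Hypothetical classifier sketch: map a frame's markings to one of four
# priority queues (0 lowest, 3 highest) based on the user-selected key.

def classify(frame, key):
    """frame: dict of markings; key: one of the classification options."""
    if key == "source_port":
        return frame["port"] % 4
    if key == "vlan_802_1p":
        return frame["pcp"] // 2          # 8 PCP values -> 4 queues
    if key == "ipv4_dscp":
        return frame["dscp"] // 16        # 64 DSCP values -> 4 queues
    raise ValueError("unknown classification key")

print(classify({"pcp": 6}, "vlan_802_1p"))  # 3 (highest queue)
print(classify({"dscp": 46}, "ipv4_dscp"))  # 2 in this toy mapping
```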
The system has four priority queues that are served according to three types of scheduling, as follows:
Strict priority: all top priority frames egress towards the radio until the top priority queue is empty. Then,
the next lowest priority queue’s frames egress, and so on. This approach ensures that high priority frames
are always transmitted as soon as possible.
Weighted Round Robin (WRR): each queue can be assigned with a user-configurable weight from 1 to
32.
Hybrid: One or two highest priority queues as "strict" and the other according to WRR
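The three scheduling modes can be sketched as follows. The weights, the single-strict-queue default, and the hybrid fallback behavior are illustrative assumptions:

```python
# Hypothetical sketch of the three scheduling modes over four queues
# (queue 3 is highest priority). Returns the order in which frames egress.

from collections import deque

def schedule(queues, mode, weights=(1, 2, 4, 8), strict_top=1):
    qs = [deque(q) for q in queues]
    out = []
    while any(qs):
        if mode == "strict":
            q = max(i for i in range(4) if qs[i])      # highest non-empty queue
            out.append(qs[q].popleft())
        elif mode == "wrr":
            for i in reversed(range(4)):               # one weighted round
                for _ in range(weights[i]):
                    if qs[i]:
                        out.append(qs[i].popleft())
        elif mode == "hybrid":                         # top queue(s) strict, rest WRR
            top = [i for i in range(4 - strict_top, 4) if qs[i]]
            if top:
                out.append(qs[max(top)].popleft())
            else:
                for i in reversed(range(4 - strict_top)):
                    for _ in range(weights[i]):
                        if qs[i]:
                            out.append(qs[i].popleft())
    return out

print(schedule([["lo1"], ["med1"], [], ["hi1", "hi2"]], "strict"))
# ['hi1', 'hi2', 'med1', 'lo1']
```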
The statistics that can be displayed within each group include the following:
Notes:
• The following statistics are displayed every 15 minutes (in the Radio and E1/T1 performance
monitoring windows):
• Utilization - four utilizations: ingress line receive, ingress radio transmit, egress radio receive, and
egress line transmit
Integrated Web Based Element Manager: Each device includes an HTTP based element manager that
enables the operator to perform element configuration, RF, Ethernet, and PDH performance monitoring,
remote diagnostics, alarm reports, and more.
PolyView™ is Ceragon's NMS server, which includes CeraMap™, its friendly and powerful graphical client
interface. PolyView can be used to update and monitor network topology status, provide statistical and
inventory reports, define end-to-end traffic trails, download software and configure elements in the network.
In addition, it can integrate with Northbound NMS platforms, to provide enhanced network management.
The application is written in Java code and enables management functions at both the element and network
levels. It runs on Windows 2000/2003/XP/Vista and Sun Solaris.
FibeAir RFU-C
FibeAir RFUs support multiple capacities, frequencies, modulation schemes, and configurations for various
network requirements.
The RFUs operate in the frequency range of 6-38 GHz, and support capacities from 10 Mbps to 500
Mbps, for TDM and IP interfaces.
The first native Ethernet services to emerge were point to point-based, followed by emulated LAN
(multipoint to multipoint-based). Services were first defined and limited to metro area networks. They
have now been extended across wide area networks and are available worldwide from many service
providers.
The term "carrier Ethernet" implies that Ethernet services are "carrier grade". The benchmark for carrier
grade was set by the legacy TDM telephony networks, to describe services that achieve "five nines"
(99.999%) uptime. Although it is debatable whether carrier Ethernet will reach that level of reliability,
the goal of one particular standards organization is to accelerate the development and deployment of
services that live up to the name.
Carrier Ethernet is poised to become the major component of next-generation metro area networks,
which serve as the aggregation layer between customers and core carrier networks. A metro Ethernet
network, which uses IP Layer 3 MPLS forwarding, is currently the primary focus of carrier Ethernet
activity.
The standard service types for Carrier Ethernet include:
E-Line Service
This service is employed for Ethernet private lines, virtual private lines, and Ethernet Internet access.
The Metro Ethernet Forum (MEF) is a global industry alliance started in 2001. In 2005, the MEF committed
to this new carrier standard, and launched a Carrier Ethernet Certification Program to facilitate delivery of
services to end users.
The MEF 6 specification defines carrier Ethernet as "A ubiquitous, standardized, carrier-class Service
and Network defined by five attributes that distinguish it from familiar LAN based Ethernet". The five
attributes include:
Standardized Services
Service Management
Scalability
Quality of Service
Reliability
For service providers, the technology convergence of Carrier Ethernet ensures a decrease in CAPEX and
OPEX.
Access networks employ Ethernet to provide backhaul for IP DSLAMs, PON, WiMAX, and direct
Ethernet over fiber/copper.
Flexible Layer 2 VPN services, such as private line, virtual private line, or emulated LAN, offer new
revenue streams.
For Enterprises, a reduction in cost is achieved through converged networks for VoIP, data, video
conferencing, and other services.
In addition, Ethernet standardization reduces network complexity.
[Figure: IP-10 block diagrams - Ethernet user interfaces connected through an integrated Carrier Ethernet switch to the radio interface]
FibeAir IP-10 is equipped with an extensive Carrier Ethernet feature set which eliminates the need for an
external switch.
MEF Certified
The Metro Ethernet Forum (MEF) runs a Certification Program with the aim of promoting the deployment of
Carrier Ethernet in Access Networks, MANs, and WANs. The program offers certification for Carrier
Ethernet equipment supplied to service providers.
The program covers the following areas:
MEF-9: Service certification
MEF-14: Traffic management and service performance
FibeAir IP-10 is fully MEF-9 & MEF-14 certified for all Carrier Ethernet services (E-Line & E-LAN).
Standardized Services
- MEF-9 and MEF-14 certified for all service types (EPL, EVPL, and E-LAN)
Scalability
- Up to 500 Mbps per radio carrier
- Integrated non-blocking switch with 4K VLANs
- 802.1ad provider bridges (QinQ)
- Scalable nodal solution
- Scalable networks (1000s of NEs)
Quality of Service (QoS)
- Advanced CoS classification
- Advanced traffic policing/rate-limiting
- CoS-based packet queuing/buffering
- Flexible scheduling schemes
- Traffic shaping
Reliability
- Highly reliable & integrated design
- Fully redundant 1+1 HSB & nodal configurations
- Hitless ACM (QPSK - 256 QAM) for enhanced radio link availability
- Wireless Ethernet Ring (RSTP based)
The following illustration shows the QoS flow of traffic with IP-10 operating in Smart Pipe mode.
The following illustration shows the QoS flow of traffic with IP-10 operating in Metro Switch mode.
Carrier-class Ethernet rings offer topologies built for resiliency, redundancy throughout the core, distribution
and access, and a self-healing architecture that can repair potential problems before they reach end users.
Such rings are designed for increased capacity, performance, and scalability, with beneficial increased value,
stability, and a reduction in costs.
By implementing Carrier-Class Ethernet rings, providers are able to expand their LANs to WANs.
The following illustration is a basic example of an IP-10 wireless Carrier Ethernet ring.
RSTP algorithms are designed to create loop-free topologies in any network design, which makes them
sub-optimal for ring topologies.
In a general topology, there can be more than one loop, and therefore more than one bridge with ports in a
blocking state. For this reason, RSTP defines a negotiation protocol between each two bridges, and
processing of the BPDU (Bridge Protocol Data Units), before each bridge propagates the information. This
"serial" processing increases the convergence time.
In a ring topology, after the convergence of RSTP, only one port is in a blocking state. We can therefore
enhance the protocol for ring topologies, and transmit the notification of the failure to all bridges in the ring
(by broadcasting the BPDU).
Ceragon's IP-10 G supports Wireless Carrier Ethernet Ring topologies. A typical ring constructed with
IP-10 is shown in the following illustration.
Ceragon's IP-10 supports native Ethernet rings of up to 500 Mbps in 1+0, and can reach Gigabit capacity in a
2+0 configuration with XPIC.
Ceragon's ring solution enhances the RSTP algorithm for ring topologies, so that failure propagation is much
faster than with regular RSTP. Instead of propagating serially, link by link, the failure is propagated in parallel
to all bridges. In this way, the bridges that have ports in the alternate state immediately place them in the
forwarding state.
The following illustration shows an example of such a ring.
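The enhancement can be illustrated with a toy model: when the broadcast failure BPDU arrives, only the bridge holding the blocked (alternate) port needs to act, and it does so immediately. The class and method names are hypothetical:

```python
# Hypothetical sketch of the ring enhancement: the detecting bridge broadcasts
# the failure notification, and the bridge holding the blocked (alternate)
# port immediately moves it to the forwarding state.

class RingBridge:
    def __init__(self, name, has_alternate_port=False):
        self.name = name
        self.alternate_blocked = has_alternate_port  # True while port blocks

    def on_failure_bpdu(self):
        if self.alternate_blocked:
            self.alternate_blocked = False  # unblock: restores ring connectivity
            return f"{self.name}: alternate port -> forwarding"
        return f"{self.name}: no action"

ring = [RingBridge("A"), RingBridge("B", has_alternate_port=True), RingBridge("C")]

# A link failure is detected: the notification reaches all bridges in
# parallel, instead of being processed serially hop by hop.
print([b.on_failure_bpdu() for b in ring])
```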
Performance monitoring
The IEEE 802.1ag standard defines Service Layer OAM (Connectivity Fault Management). The standard
facilitates the discovery and verification of a path through 802.1 bridges and local area networks (LANs).
• Defines maintenance domains, their constituent maintenance points, and the managed objects
required to create and administer them.
• Defines the relationship between maintenance domains and the services offered by VLAN-aware
bridges and provider bridges.
• Describes the protocols and procedures used by maintenance points to maintain and diagnose
connectivity faults within a maintenance domain.
• Provides means for future expansion of the capabilities of maintenance points and their protocols.
FibeAir IP-10 utilizes these protocols to maintain smooth system operation and non-stop data flow.
The following is a series of illustrations showing how FibeAir IP-10 is used to facilitate Carrier Ethernet
Services. The second and third illustrations show how IP-10 handles a node failure.
Phase/Frequency Lock
Applicable to GSM and UMTS-FDD networks.
- Limits channel interference between carrier frequency bands.
- Typical performance target: frequency accuracy of < 50 ppb.
Sync is the traditional technique used, with traceability to a PRS master clock carried over PDH/SDH
networks, or using GPS.
Phase Lock with Latency Correction
Applicable to CDMA, CDMA-2000, UMTS-TDD, and WiMAX networks.
- Limits coding time division overlap.
- Typical performance target: frequency accuracy of < 20 - 50 ppb, phase difference of
< 1-3 msecs.
GPS is the traditional technique used.
“ToP-aware” transport
SyncE
Using this technique, each T1/E1 trail carries a native TDM clock, which is compliant with GSM and UMTS
synchronization requirements.
Ceragon's IP-10 implements a PDH-like mechanism for providing high-precision synchronization of the
native TDM trails. This implementation ensures high-quality synchronization while keeping cost and
complexity low, since it eliminates the need for a sophisticated, centralized SDH-grade "clock unit" at each
node. The system is designed to deliver E1 traffic and recover the E1 clock in compliance with G.823
"synchronization port" jitter and wander requirements. This means that any (or all) of the system's E1
interfaces can be used to deliver a synchronization reference via the radio to a remote site (e.g. a Node-B).
Each trail is independent of the others, meaning that IP-10 does not impose any restrictions on the source of
the TDM trails. (Each trail can have its own clock, and no synchronization between trails is assumed.)
ToP-Aware Transport
Ceragon's integrated advanced QoS classifier supports the identification of standard ToP control packets
(IEEE1588v2 packets), and assigns to them the highest priority/traffic class.
This ensures that ToP control packets will be transported with maximum reliability and minimum delay, to
provide the best possible timing accuracy.
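PTP packets are identifiable by their standard EtherType (0x88F7 for PTP over Ethernet) or UDP destination ports (319 for event messages, 320 for general messages). A hypothetical classifier sketch along those lines:

```python
# Hypothetical sketch of ToP-aware classification: recognize IEEE 1588v2
# (PTP) packets and assign them the highest traffic class.

PTP_ETHERTYPE = 0x88F7       # PTP over Ethernet
PTP_UDP_PORTS = {319, 320}   # PTP event and general messages over UDP
TOP_CLASS = 3                # highest of the four priority queues

def traffic_class(ethertype, udp_dst=None, default_class=0):
    if ethertype == PTP_ETHERTYPE or udp_dst in PTP_UDP_PORTS:
        return TOP_CLASS
    return default_class

print(traffic_class(0x88F7))               # 3: PTP over Ethernet
print(traffic_class(0x0800, udp_dst=319))  # 3: PTP over UDP/IPv4
print(traffic_class(0x0800, udp_dst=80))   # 0: ordinary traffic
```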
The clock for SyncE interfaces can be derived from any co-located traffic-carrying E1 interface at the BTS
site.
Synchronization is distributed natively over the radio links. In this mode, no TDM trails or E1 interfaces at
the tail sites are required!
Synchronization is provided by the E1/STM-1 clock source input at the fiber hub site (SSU/GPS).
IP-10 design for the nodal solution is based on a "blade" approach. Viewing the unit from the rear, each IDU
can be considered a "blade" within a nodal enclosure.
Additional nodal enclosures and units can be added in the field as required, without affecting traffic.
Up to six 1RU units (three adapters) can be stacked to form a single unified nodal device.
Using the stacking method, units in the bottom nodal enclosure act as main units, whereby a mandatory
active main unit can be located in either of the two slots, and an optional standby main unit can be installed
in the other slot.
The switchover time is <50 msecs for all traffic affecting functions.
Units located in nodal enclosures other than the one on the bottom act as expansion units.
Radios in each pair of units can be configured as either dual independent 1+0 links, or single fully-redundant
1+1 HSB links.
The nodal enclosure is a scalable unit. Each enclosure can be added to another enclosure for modular rack
installation.
If a failure occurs, the backup main unit takes over (<50 msecs down time).
Each E1/T1 interface or "logical interface" in a radio in any unit in the stack can be assigned to any VC.
The XC is performed between two interfaces or "logical interfaces" with the same VC.
XC functionality is fully flexible. Any pair of E1/T1 interfaces, or radio "logical interfaces", can be
connected.
Ethernet Bridging
Ethernet traffic in an XC configuration is supported by interconnecting IDU switches with external cables.
Traffic flow (dropping to local ports, sending to radio) is performed by the switches, in accordance with
learning tables.
Other than an extra FE port, dual GBE ports, and link-aggregation, no other functionality is required for XC
operation.
The FE protection port is static (only used for protection, not traffic). Its switching is performed
electrically. If the unit is a stand-alone, an external connection is made through the front panel. If the unit
is connected to a backplane, the connection is through the backplane, while the front panel port is
unused.
The GBE ports are dual: RJ-45 electrical or SFP optical (default). Optical ports can optionally be
configured as 100FX.
The system is composed of several inter-connected (stacked) IDUs, with integrated and centralized TDM
traffic switching and Ethernet bridging capability.
The XC capacity is 75 E1 VCs (Virtual Containers) or 84 T1 VCs, whereby each E1/T1 interface or
"logical interface" in a radio in any unit of the stack can be assigned to any VC.
XC Features
Cross Connect system highlights include:
XC is performed between any two physical or logical interfaces in the node, including:
- E1/T1 interface
If a failure occurs, the backup main unit takes over (<50 msecs down time)
Basic XC Operation
As shown in the illustration, trails are defined from one end of a line to the other. The XC forwards
signals generated by the radios to/from the IDUs based on their designated VCs. In the example, the
cross-connect may forward signals on Trail C from Radio 1, VC 3 to Radio 4, VC 1.
[Figure: Basic XC operation - STM1/OC3 and E1/T1 interfaces cross-connected through the node]
E1/T1 trails are supported based on the integrated E1/T1 cross-connect (XC).
XC is performed between any two physical or logical interfaces in the node (in any main or expansion
unit) such as E1/T1 interface, radio VC (75 VCs supported per radio carrier), and STM1/OC3 mux
VC11/VC12. The function is performed by the “active” main unit. If a failure occurs, the backup main
unit takes over (<50 msecs down time).
For each trail, the following end-to-end OA&M functions are supported:
A VC overhead is added to each VC trail to support the end-to-end OA&M functionality and
synchronization justification requirements.
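Conceptually, the cross-connect is a table of trails, each connecting two endpoints (an E1/T1 port, a radio VC, or an STM1/OC3 mux VC-11/VC-12). A toy sketch, with trail and endpoint names invented for illustration:

```python
# Hypothetical sketch of the cross-connect table: each trail connects two
# endpoints, and a signal entering at one endpoint exits at the other.

xc_table = {
    "Trail C": ("Radio 1/VC 3", "Radio 4/VC 1"),
    "Trail A": ("E1 port 5", "Radio 2/VC 7"),
}

def forward(trail_id, src):
    """Return the endpoint that a signal entering at `src` is forwarded to."""
    a, b = xc_table[trail_id]
    return b if src == a else a

print(forward("Trail C", "Radio 1/VC 3"))  # Radio 4/VC 1
```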
[Figure: STM1/OC3 interface connected through the IP-10 integrated XC and IP-10 integrated STM1/OC3 mux over a MW radio link]
XC operation is implemented using two-unit backplanes, which provide the interconnectivity. Up to three
backplanes, consisting of six IDUs, can be stacked to provide an expandable system.
Each modular shelf holds two IDUs. The shelf includes extension connectors located at its top and bottom
panels, which allow stacking of up to three shelves (the base shelf is different from the two extension
shelves), holding up to six IDUs, which exchange TDM traffic and compose a network node. Each pair of
IDUs in a single modular shelf has access to Multi-Radio and XPIC interfaces between them.
A node composed of identical IDUs that behave in a different way, is formed by inserting the IDUs in the
stackable shelves and providing each IDU with an indication of its place in the stack. Each IDU uses
different LVDS (Low-Voltage Differential Signaling) interfaces, depending on its place in the stack and
system configuration.
If a failure occurs, the backup main unit takes over within <50 msecs.
The XC function is performed between two logical interfaces with the same VC (Virtual Container). The
functionality is fully flexible, so that any pair of E1/T1 interfaces, or radio logical interfaces, can be
connected.
TDM XC
TDM cross-connect is implemented by transporting all received TDM traffic from each IDU to the main XC
unit placed in a pre-determined slot (or to two protected XC units). The main unit performs XC of individual
E1/T1 streams between the other IDUs and its own interfaces, and sends back E1/T1 streams. Each unit then
directs each stream to its interfaces or radio.
Using dedicated LVDS (Low Voltage Differential Signal) serial interfaces, the TDM streams are transported
via the backplane between the XC and downlink IDUs. The interfaces carry the E1s/T1s in a proprietary
TDM frame containing each E1/T1 in a separate time-slot (TS). The interfaces are point-to-point between
each downlink IDU and the main XC.
There is an additional, parallel LVDS infrastructure from each unit to the main XC stand-by unit for
protection purposes.
Each of the main XC units has its own local clock, which is distributed to each of the downlink units through
an LVDS interface. Downlink units align traffic to the clock received from the active XC.
East-West configuration between the two XC units (adjacent) is achieved by configuring the second (upper)
unit in the main backplane to behave like a regular downlink. This is the case if the XC units are not
configured in protection. For this purpose, additional LVDS traffic and clock channels are set up between
them.
The IDU’s behavior as a main XC or a downlink depends on its position (main or extension backplane, and
upper/lower position in the backplane) which is detected by hardware through backplane slot ID pins, as well
as by user configuration. In addition, an IDU can be configured as a stand-alone unit.
1. The XC sends received E1s/T1s to downlink units in LVDS (Low-Voltage Differential Signaling) time
slots, and the downlink units discard the unnecessary slots.
2. Each unit (XC included) maps each relevant LVDS time slot to radio VCs or line interfaces.
For each line interface, the user defines which time slot it is mapped to, and for each radio, which radio VCs
it transports (enabled radio VCs) and which time slot it is mapped to. Two interfaces mapped to the same
time slots are known as a trail.
Each IDU has several LVDS interfaces, some of which are disabled at the downlink units.
All LVDS traffic is synchronized to a single clock provided by the active XC unit. The clock is transmitted
to the downlink units via the LVDS infrastructure.
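The two-step mapping above can be sketched as a table keyed by interface: each line interface or radio VC is mapped to an LVDS time slot, and two interfaces sharing a slot form a trail. The unit and interface names are invented for illustration:

```python
# Hypothetical sketch of time-slot mapping in the XC node: interfaces mapped
# to the same LVDS time slot constitute a trail.

slot_map = {
    ("unit-2", "e1-port-1"): 14,    # line interface -> LVDS time slot
    ("unit-3", "radio/VC-6"): 14,   # radio VC on another unit, same slot
    ("unit-2", "e1-port-2"): 20,    # mapped, but no second endpoint yet
}

def trails(mapping):
    """Group interfaces by time slot; slots with two members are trails."""
    by_slot = {}
    for iface, ts in mapping.items():
        by_slot.setdefault(ts, []).append(iface)
    return {ts: ifaces for ts, ifaces in by_slot.items() if len(ifaces) == 2}

print(trails(slot_map))  # slot 14 connects the E1 port to the radio VC
```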
Because XC system users can build networks and define E1/T1 trails across the network,
additional PM (performance monitoring) is necessary. A trail is defined as E1/T1 data delivered unchanged
from one line interface to another, through one or more radio links.
In each XC node, data can be assigned to a different VC number, but its identity across the network is
maintained by a “Trail ID” defined by the user.
Additional PM functionality provides end-to-end monitoring over data sent in a trail over the network.
IP-10 supports an integrated VC trail protection mechanism called Wireless SNCP (Sub network Connection
Protection).
With Wireless SNCP, a backup VC trail can optionally be defined for each individual VC trail.
A path for the backup VC (typically separate from the path of the main VC that it is protecting).
For each direction of the backup VC, the following is performed independently:
At the first branching point, duplication of the traffic from the main VC to the backup VC.
At the second branching point, selection of traffic from either the main VC or the backup VC.
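The duplicate-and-select behavior at the two branching points can be sketched as follows (the function names are hypothetical):

```python
# Hypothetical sketch of Wireless SNCP: traffic is duplicated onto the main
# and backup VCs at the first branching point, and the second branching point
# independently selects the healthy copy for each direction.

def branch_duplicate(signal):
    """First branching point: send the same signal down both paths."""
    return {"main": signal, "backup": signal}

def branch_select(copies, main_failed):
    """Second branching point: select the backup only when the main VC has
    failed (e.g. AIS or LOP detected on the working connection)."""
    return copies["backup"] if main_failed else copies["main"]

copies = branch_duplicate("E1 frame #42")
print(branch_select(copies, main_failed=False))  # normal: main VC selected
print(branch_select(copies, main_failed=True))   # failure: backup VC selected
```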
[Figure: Wireless SNCP example - E1 trails between IP-10 nodes A, B, C, and D, with traffic duplicated onto main and backup VCs]
Wireless SNCP is supported over fiber links using IP-10 STM-1/OC-3 mux interfaces.
This feature provides a fully integrated solution for protected E1/T1 services over a mixed wireless-optical
network.
[Figure: Wireless SNCP over a mixed wireless-optical network - IP-10 integrated XC and integrated STM-1/OC-3 mux, with an STM1/OC3 fiber link and MW radio links between nodes A, B, C, and D carrying E1 #1 and E1 #2]
SNCP replaces a failed sub network connection with a standby sub network connection. In the FibeAir
product line, this capability is provided at the points where trails leave sub networks.
The switching criterion is based on SNCP/I. This protocol specifies that automatic switching is performed if
an AIS or LOP fault is detected in the working sub network connection. If neither AIS nor LOP faults are
detected, and the protection lockout is not in effect, the scheme used is 1+1 single-ended.
The NMS provides Manual switch to protection and Protection lockout commands. A notification is sent to
the management station when an automatic switch occurs. The status of the selectors and the sub network
connections are displayed on the NMS screen.
Flexibility
- All traffic distribution patterns are supported (excels in hub traffic concentration)
Performance
- Switching to protection is done at the E1/T1 VC trail level and works well with ACM (there is no
need to switch all the traffic on a link)
Interoperability
- Interoperable with networks that use other types of protection (such as BLSR)
SNMP
Local remote channel, for configuration of a small set of parameters in the remote unit
In addition, the management system provides access to other network equipment through in-band or out-of-
band network management.
The XC node is managed in an integrated manner through centralized management channels. The main unit’s
CPU is the node’s central controller, and all management frames received from or sent to external
management applications must pass through it.
The node has a single IP management address, which is the address of the main unit (two addresses in case of
main unit protection).
To simplify reading and analyzing alarms and logs across several IDUs, the system time of all units should
be synchronized to the main unit's time.
As an additional resource, an extra data channel is included in the backplane LVDS infrastructure, through
which basic management data is sent by IDUs to the XC unit (and vice-versa).
In addition, an SDH management channel (management through the STM-1 interface) allows control from an
SDH network, without the need for additional Ethernet interfaces.
Centralized IP Access
- A single IP address is configured, and the node is reached through it (two addresses if main
units are protected)
Feature Configuration
- Some management is done through the main unit only: TDM XC, user registration, login, alarms
- Other features are configured individually in each extension unit: radio parameters, Ethernet
switch configuration
Ethernet XC Management
XC management connects main units to all extension units, and main units to each other. It also connects
the CPU to the Mezzanine.
In protection mode, management frames will arrive at a standby XC unit only through the protection
interface, coming from its mate.
With out-of-band management, there is no wayside network; access from remote sites is obtained through
the wayside channel. Access from the remote link to an extension unit requires an external switch.
Each pair of protected IDUs makes its own decisions regarding data and switching.
User and Ethernet traffic protection is implemented through Y cables or via the protection panel. TDM
traffic protection is implemented through dual LVDS interfaces on the backplane.
XC protection configurations include LVDS interface monitoring for AIS generation and SNCP support.
They also include an Ethernet line protection disabling option, whereby the user can configure Ethernet
interfaces for non-protection. In this setup, local failures will not affect all node traffic.
Signaling is performed between units in a shelf to indicate their active or standby status.
Protection Design
An IDU may exchange traffic with a protection pair (even if it itself is not protected).
Main units must know which pairs are protected, to send identical traffic to protected extension pairs.
All units must know from which LVDS interface to receive traffic.
[Figure: Main unit protection — active and standby main units exchange activity (active/standby) status signaling]
In addition, main units inform the extension units of the active unit through separate hardware interfaces.
This is required for extension units to align with the active LVDS, since the main units provide the LVDS
clock. The signal is encoded to prevent the system from becoming "stuck" due to faulty hardware.
If an XC switch occurs, downlink units will synchronize to the new clock within 50 msec.
Main units read the LVDS from both extension units to determine active/standby status. They also
receive traffic from the active unit.
Note: If a switch is detected, an idle window will open to prevent “switch cascades”.
All data is made available to the software, including alarms for protection mode mismatches and
errors, and interrupts upon protection switch.
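The "idle window" in the note above can be sketched as a simple hold-off guard: once a switch occurs, further switches are suppressed until the window expires, so a flapping fault cannot trigger a cascade. The class, its API, and the one-second window length are illustrative assumptions; the document does not specify the window duration.

```python
import time

class SwitchGuard:
    """Illustrative hold-off guard against protection switch cascades."""

    def __init__(self, idle_window_s: float = 1.0, clock=time.monotonic):
        self.idle_window_s = idle_window_s
        self.clock = clock            # injectable clock eases testing
        self._last_switch = None      # time of the most recent switch

    def try_switch(self) -> bool:
        """Perform a switch unless we are still inside the idle window."""
        now = self.clock()
        if (self._last_switch is not None
                and now - self._last_switch < self.idle_window_s):
            return False              # still in the idle window: suppress
        self._last_switch = now
        return True
```

A second switch request arriving within the window is rejected; one arriving after the window expires is accepted and restarts the window.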
- Integrated Ethernet switching can be enabled to support multiple local Ethernet interfaces
- Local Ethernet & TDM interface protection is supported via Y-cables or the protection panel
- TDM traffic
- The 2nd ("Slave") IDU has all its Ethernet interfaces and functionality effectively disabled
- TDM traffic: E1/T1 services are duplicated over both radio carriers and are 1+1 HSB protected
Nodal Configurations
Chain with 1+0 Downlink and 1+1 HSB Uplink, with STM1/OC3 Mux
Chain with 1+1 Downlink and 1+1 HSB Uplink, with STM1/OC3 Mux
Native2 Ring with 3 x 1+1 HSB Links + STM-1 Mux Interface at Main Site
Native2 Ring with 4 x 1+0 MW Links and 1 x Fiber Link (5 hops total),
with STM1/OC3 Mux
General
Frequency bands: 6-18 GHz, 23-38 GHz
Split-Mount installation [1]: FibeAir RFU-C (6–38 GHz), FibeAir RFU-P (11–38 GHz), FibeAir RFU-SP (6–8 GHz), FibeAir RFU-HS (6–8 GHz), FibeAir RFU-HP (6–11 GHz)
All-Indoor installation: FibeAir RFU-HP (6–11 GHz)
IDU to RFU connection: Coaxial cable RG-223 (100 m/300 ft), Belden 9914/RG-8 (300 m/1000 ft) or equivalent, N-type connectors (male)
Antenna connection: Direct or remote mount using the same antenna type. Remote mount: standard flexible waveguide (frequency dependent)
Note: For more details about the different RFUs refer to the RFU documentation.
[1] Refer to the RFU-C roll-out plan for availability of each frequency.
7 MHz (ETSI)
Profile  Modulation  Minimum Required Capacity License  Radio Throughput (Mbps)  Number of Supported E1s  Ethernet Capacity (Mbps) Min / Max
0 QPSK 10 10.5 4 9.5 13.5
1 8 PSK 25 15 6 14 20
2 16 QAM 25 20 8 19 28
3 32 QAM 25 25 10 24 34
4 64 QAM 25 29 12 28 40
5 128 QAM 50 33 13 32 46
6 256 QAM 50 38 16 38 54
7 256 QAM 50 43 18 42 60
Note: Ethernet Capacity depends on average packet size.
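Read programmatically, the table above is an ACM profile map. The sketch below copies the 7 MHz (ETSI) figures into a lookup and picks the lowest profile able to carry a required number of E1s; the helper function itself is illustrative, not part of any product API.

```python
# Profile -> (modulation, radio throughput Mbps, supported E1s),
# copied from the 7 MHz (ETSI) table above.
PROFILES_7MHZ_ETSI = {
    0: ("QPSK",    10.5,  4),
    1: ("8 PSK",   15,    6),
    2: ("16 QAM",  20,    8),
    3: ("32 QAM",  25,   10),
    4: ("64 QAM",  29,   12),
    5: ("128 QAM", 33,   13),
    6: ("256 QAM", 38,   16),
    7: ("256 QAM", 43,   18),
}

def min_profile_for_e1s(required_e1s: int):
    """Lowest ACM profile whose E1 capacity meets the requirement."""
    for profile in sorted(PROFILES_7MHZ_ETSI):
        modulation, _throughput, e1s = PROFILES_7MHZ_ETSI[profile]
        if e1s >= required_e1s:
            return profile, modulation
    raise ValueError("requirement exceeds channel capacity")
```

For instance, carrying 9 E1s in a 7 MHz ETSI channel requires at least profile 3 (32 QAM), since profile 2 supports only 8 E1s.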
10 MHz (FCC)
Profile  Modulation  Minimum Required Capacity License  Radio Throughput (Mbps)  Number of Supported T1s  Ethernet Capacity (Mbps) Min / Max
0 QPSK 10 13 7 13 18
1 8 PSK 25 19 10 19 27
2 16 QAM 25 29 16 29 41
3 32 QAM 50 36 20 35 50
4 64 QAM 50 44 24 43 62
5 128 QAM 50 51 28 51 72
6 256 QAM 50 56 31 55 79
7 256 QAM 50 61 34 61 88
Note: Ethernet Capacity depends on average packet size.
20 MHz (FCC)
Profile  Modulation  Minimum Required Capacity License  Radio Throughput (Mbps)  Number of Supported T1s  Ethernet Capacity (Mbps) Min / Max
0 QPSK 25 28 15 27 39
1 8 PSK 50 41 23 41 59
2 16 QAM 50 58 32 57 82
3 32 QAM 100 74 41 74 105
4 64 QAM 100 87 49 87 125
5 128 QAM 100 101 57 101 145
6 256 QAM 100 114 65 115 164
7 256 QAM 150 125 71 126 180
Note: Ethernet Capacity depends on average packet size.
28 MHz (ETSI)
Profile  Modulation  Minimum Required Capacity License  Radio Throughput (Mbps)  Number of Supported E1s  Ethernet Capacity (Mbps) Min / Max
0 QPSK 50 41 17 40 58
1 8 PSK 50 55 23 54 78
2 16 QAM 100 78 33 78 111
3 32 QAM 100 105 44 105 151
4 64 QAM 150 130 55 131 188
5 128 QAM 150 158 68 160 229
6 256 QAM 150 176 75 178 255
7 256 QAM 200 186 75 188 268
Note: Ethernet Capacity depends on average packet size.
50 MHz (FCC)
Profile  Modulation  Minimum Required Capacity License  Radio Throughput (Mbps)  Number of Supported T1s  Ethernet Capacity (Mbps) Min / Max
0 QPSK 100 68 38 68 97
1 8 PSK 100 106 60 107 152
2 16 QAM 150 147 84 148 212
3 32 QAM 150 185 84 187 267
4 64 QAM 200 238 84 241 344
5 128 QAM 300 274 84 278 398
6 256 QAM 300 313 84 318 454
7 256 QAM "All capacity" 337 84 342 489
Note: Ethernet Capacity depends on average packet size.
Modulation 6-8 GHz 11-15 GHz 18-23 GHz 26-28 GHz 32-38 GHz
QPSK 26 24 22 21 18
8 PSK 26 24 22 21 18
16 QAM 25 23 21 20 17
32 QAM 24 22 20 19 16
64 QAM 24 22 20 19 16
128 QAM 24 22 20 19 16
256 QAM 22 20 18 17 14
Modulation  11-15 GHz  18 GHz  23-26 GHz  28-32 GHz  38 GHz
QPSK 23 23 22 21 20
8 PSK 23 23 22 21 20
16 QAM 23 21 20 20 19
32 QAM 23 21 20 20 19
64 QAM 22 20 20 19 18
128 QAM 22 20 20 19 18
256 QAM 21 [2] 19 19 18 17
Modulation  RFU-SP 6-8 GHz [4]  RFU-HS 6-8 GHz  RFU-HP Split-Mount 6-8 GHz  RFU-HP Split-Mount 11 GHz  RFU-HP All-Indoor 6-8 GHz  RFU-HP All-Indoor 11 GHz
QPSK 24 30 30 27 33 30
8 PSK 24 30 30 27 33 30
16 QAM 24 30 30 27 33 30
32 QAM 24 30 30 26 33 29
64 QAM 24 29 29 26 32 29
128 QAM 24 29 29 26 32 29
256 QAM 22 27 27 24 30 27
[1] Refer to the RFU-C roll-out plan for availability of each frequency.
[2] 20 dBm for 11 GHz.
[3] RFU-HP supports channels with up to 30 MHz occupied bandwidth.
[4] 1 dBm higher for 6L GHz.
Profile  Modulation  RSL (dBm) by Frequency (GHz): 11-18  23-28  31  32-38

Channel Spacing 10 MHz (FCC), Occupied Bandwidth 8.4 MHz:
0 QPSK -93.0 -92.5 -92.5 -91.5
1 8 PSK -89.5 -89.0 -89.0 -88.0
2 16 QAM -85.0 -84.5 -84.5 -83.5
3 32 QAM -81.5 -81.0 -81.0 -80.0
4 64 QAM -79.5 -79.0 -79.0 -78.0
5 128 QAM -77.0 -76.5 -76.5 -75.5
6 256 QAM -75.0 -74.5 -74.5 -73.5
7 256 QAM -71.5 -71.0 -71.0 -70.0

Channel Spacing 14 MHz (ETSI), Occupied Bandwidth 12.2 MHz:
0 QPSK -90.0 -89.5 -89.5 -88.5
1 8 PSK -86.5 -86.0 -86.0 -85.0
2 16 QAM -83.0 -82.5 -82.5 -81.5
3 32 QAM -81.5 -81.0 -81.0 -80.0
4 64 QAM -80.0 -79.5 -79.5 -78.5
5 128 QAM -77.0 -76.5 -76.5 -75.5
6 256 QAM -74.0 -73.5 -73.5 -72.5
7 256 QAM -71.5 -71.0 -71.0 -70.0

Channel Spacing 20 MHz (FCC), Occupied Bandwidth 17.4 MHz:
0 QPSK -89.5 -89.0 -89.0 -88.0
1 8 PSK -84.5 -84.0 -84.0 -83.0
2 16 QAM -82.0 -81.5 -81.5 -80.5
3 32 QAM -79.5 -79.0 -79.0 -78.0
4 64 QAM -77.0 -76.5 -76.5 -75.5
5 128 QAM -74.5 -74.0 -74.0 -73.0
6 256 QAM -71.5 -71.0 -71.0 -70.0
7 256 QAM -68.5 -68.0 -68.0 -67.0

Channel Spacing 28 MHz (ETSI), Occupied Bandwidth 24.9 MHz:
0 QPSK -88.5 -88.0 -88.0 -87.0
1 8 PSK -85.5 -85.0 -85.0 -84.0
2 16 QAM -82.5 -82.0 -82.0 -81.0
3 32 QAM -78.5 -78.0 -78.0 -77.0
4 64 QAM -76.0 -75.5 -75.5 -74.5
5 128 QAM -71.5 -71.0 -71.0 -70.0
6 256 QAM -70.5 -70.0 -70.0 -69.0
7 256 QAM -66.5 -66.0 -66.0 -66.5
Note: RSL values are typical.
[1] RFU-HP supports channels with up to 30 MHz occupied bandwidth.
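These thresholds determine the highest ACM profile a link can sustain at a given received signal level. The sketch below uses the 10 MHz (FCC) column for 11-18 GHz from the table above; the helper function and the optional margin parameter are illustrative assumptions, not a product calculation.

```python
# Profile -> typical RSL threshold (dBm) for 10 MHz (FCC) channels
# at 11-18 GHz, copied from the table above.
RSL_10MHZ_11_18GHZ = {
    0: -93.0, 1: -89.5, 2: -85.0, 3: -81.5,
    4: -79.5, 5: -77.0, 6: -75.0, 7: -71.5,
}

def max_profile(rsl_dbm: float, margin_db: float = 0.0) -> int:
    """Highest ACM profile whose threshold is still met with the given margin."""
    usable = [p for p, thr in RSL_10MHZ_11_18GHZ.items()
              if rsl_dbm - margin_db >= thr]
    if not usable:
        raise ValueError("RSL below QPSK threshold: link down")
    return max(usable)
```

For example, at an RSL of -80.0 dBm the link can sustain profile 3 (32 QAM, threshold -81.5 dBm) but not profile 4 (64 QAM, threshold -79.5 dBm).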
Ethernet
E1/T1
Interface Type E1/T1
Number of Ports 16 x E1/T1 or 16 x E1/T1+16 x E1/T1 on T-Card
Connector Type MDR 69-pin
Framing Unframed (full transparency)
Coding E1: HDB3
T1: AMI/B8ZS (Configurable)
Line Impedance 120 ohm/100 ohm balanced. Optional 75 ohm unbalanced.
Compatible Standards ITU-T G.703, G.736, G.775, G.823, G.824, G.828, I.432; ETSI ETS 300 147, ETS 300 417; ANSI T1.105, T1.102-1993, T1.231; Bellcore GR-253-CORE, TR-NWT-000499
Auxiliary Channels
Wayside Channel 2 Mbps or 64 Kbps, Ethernet 10/100BaseT
Engineering Order Wire Audio channel (64 Kbps) G.711
User Channel Asynchronous V.11/RS-232, up to 19.2 kbps
Latency over the radio link < 0.15 ms @ 400 Mbps
"Baby jumbo" Frame Support Up to 1632 bytes
General Enhanced link state propagation
Enhanced MAC header compression
Integrated Carrier Ethernet Switch Integrated non-blocking switch with 4K active VLANs
MAC address learning with 8K MAC addresses
802.1ad provider bridges (QinQ)
802.3ad link aggregation
802.1ag Ethernet service OA&M (CFM)
Enhanced link state propagation
Enhanced MAC header compression
Full switch redundancy (hot stand-by)
QoS Advanced CoS classification and remarking
Advanced traffic policing/rate-limiting
Per interface CoS based packet queuing/buffering (8 CoS served by 4
queues)
Flexible scheduling schemes (SP/WRR/Hybrid)
Per interface traffic shaping
Ethernet Service OA&M 802.1ag CFM
Automatic "Link trace" processing for storing of last known working path
Performance Monitoring Per port Ethernet counters (RMON/RMON2)
Radio ACM statistics
Enhanced radio Ethernet statistics (Frame Error Rate, Throughput,
Capacity, Utilization)
Supported Ethernet/IP Standards 802.3 – 10base-T
802.3u – 100base-T
802.3ab – 1000base-T
802.3z – 1000base-X
802.3ac – Ethernet VLANs
802.1Q – Virtual LAN (VLAN)
802.1p – Class of service
802.1ad – Provider bridges (QinQ)
802.3x – Flow control
802.3ad – Link aggregation
802.1ag – Ethernet service OA&M (CFM)
802.1w – RSTP
RFC 1349 – IPv4 TOS
RFC 2474 – IPv4 DSCP
RFC 2460 – IPv6 Traffic Classes
[1] Note that the voltage at the BNC port on the RFUs is not accurate and should be used only as an aid.
Standard compliance
Specification IDU RFU
EMC EN 301 489-4, Class B EN 301 489-4, Class B
Safety IEC 60950 IEC 60950
Ingress Protection IEC 60529 IP20 IEC 60529 IP56
Operation ETSI 300 019-1-3, Class 3.2 (IDU); ETSI 300 019-1-4, Class 4.1E/Class 4M5 (RFU)
Storage ETSI 300 019-1-1, Class 1.2
Transportation ETSI 300 019-1-2, Class 2.3
Environmental
Specification IDU RFU
Operating Temperature -5°C to +55°C (23°F to 131°F) (IDU); -45°C to +55°C (-49°F to 131°F) (RFU)
Relative Humidity 0 to 95% non-condensing (IDU); 0 to 100% (RFU)
Altitude 3,000m (10,000ft)
Power Consumption
Max power consumption, IP-10 IDU (basic configuration): 25W
Max system power consumption, RFU-C + IP-10:
  1+0 with RFU-C 6-26 GHz: 47W
  1+0 with RFU-C 28-38 GHz: 51W
  1+1 with RFU-C 6-26 GHz: 84W
  1+1 with RFU-C 28-38 GHz: 88W
Max system power consumption, RFU-P + IP-10: 1+0: 65W; 1+1: 105W
Max system power consumption, RFU-SP + IP-10: 1+0: 80W; 1+1: 130W
Max system power consumption, RFU-HS + IP-10: 1+0: 88W; 1+1: 134W
Max system power consumption, RFU-HP + IP-10: 1+0: 105W; 1+1: 150W
Additional power consumption for 16 E1/T1 T-card: 2.5W
Additional power consumption for STM1/OC3 Mux T-card: 5W (including SFP)