
Dune Networks SAND Chipset

Dune Networks' SAND (Scalable Architecture for Networking Devices) chipset provides a complete solution for switching fabric and ingress/egress traffic management. The two-device chipset enables, for the first time, Internet and storage-platform vendors to build a full product line using the same chipset, guaranteeing a life cycle of 7+ years.

To provide this extensive life cycle, the SAND chipset offers three scaling dimensions: port bandwidth, port count, and port service scheme. A system built around the SAND can provide total user bandwidth starting from 10 Gbps and extending up to 80 Tbps. The system allows the user to connect mixed-rate line cards (10 Gbps, 20 Gbps, 40 Gbps and higher) and line cards with various traffic management capabilities.

The FE200 automatically detects and re-routes around faulty links or elements, which results in a fault-tolerant, self-healing fabric. The system reacts to any fault in a device, connection, or configuration. A limited loss of capacity is absorbed with no performance impact, while a larger loss is handled gracefully.
HyperTransport Technology
HyperTransport Technology is a high-speed, low-latency, point-to-point link designed to increase the communication speed between integrated circuits in computers, servers, embedded systems, and networking and telecommunications equipment, by up to 48 times compared with some existing technologies.

HyperTransport Technology helps reduce the number of buses in a system, which can reduce system bottlenecks and enable today's faster microprocessors to use system memory more efficiently in high-end multiprocessor systems.

HyperTransport Technology is designed to:

- Provide significantly more bandwidth than current technologies
- Use low-latency responses and low pin counts
- Maintain compatibility with legacy PC buses while being extensible to new SNA (Systems Network Architecture) buses
- Appear transparent to operating systems and offer little impact on peripheral drivers

HyperTransport Technology was invented at AMD with contributions from industry partners and is managed and licensed by the HyperTransport Technology Consortium, a Texas non-profit corporation.

IDT Pre-Processing Switch for DSP Clusters
The IDT pre-processing switch for DSP clusters is optimized for wireless baseband processing applications and supports the Serial RapidIO (SRIO) interconnect specifications. The pre-processing switch solution integrates a suite of byte- and packet-level manipulation capabilities designed to offload bandwidth-intensive tasks from digital signal processors. The result can be an acceleration of each DSP in a cluster by up to 20%, enabling the DSPs to focus on other compute-intensive functions in wireless base station, RNC, and media gateway applications, and providing a platform for scalable, flexible and cost-effective customer solutions.

Features:

- Optimized as a board-level switching solution, supporting tailored distribution of data to multiple DSPs
  - Offloads simple data-manipulation tasks in order to maximize the efficiency of DSP operations
  - Distributes processed data via unicast, multicast, or broadcast operations
- 40 SRIO v1.3 links, configurable to support up to 22 SRIO 1x ports or 10 4x ports (or multiple combinations); selectable port speeds up to 3.125 Gbps
  - Scalable to address a wide range of wireless and non-wireless applications
  - Each port's speed, width, and reach can be independently configured

- By offloading the DSPs from 'low-value' or 'duplicated' tasks, reduces their cycles by 20 percent or more

- Supported operations (a few of these are sketched in code after this list):
  - Aligning sample lengths
  - Re-sequencing samples within packets
  - Endian conversions (MSB <=> LSB)
  - Muxing or demuxing packets
  - Summation of multiple packets
  - Multicasting packets
  - Placing samples into separate, specific locations in memory (DMA)
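
To make two of these operations concrete, the following is a minimal Python sketch (not IDT code; the 16-bit big-endian sample format and the function names are assumptions made for illustration) of an endian conversion (MSB <=> LSB) and a summation of multiple packets.

import struct

def swap_endianness(packet: bytes, sample_size: int = 2) -> bytes:
    """Reverse the byte order of every fixed-size sample in a packet (MSB <=> LSB)."""
    assert len(packet) % sample_size == 0, "packet must hold a whole number of samples"
    return b"".join(packet[i:i + sample_size][::-1]
                    for i in range(0, len(packet), sample_size))

def sum_packets(packets):
    """Sum equal-length packets sample-by-sample (16-bit unsigned, wrap-around)."""
    n = len(packets[0]) // 2
    acc = [0] * n
    for pkt in packets:
        for i, sample in enumerate(struct.unpack(f">{n}H", pkt)):
            acc[i] = (acc[i] + sample) & 0xFFFF   # wrap like a fixed-width hardware adder
    return struct.pack(f">{n}H", *acc)

if __name__ == "__main__":
    p1 = struct.pack(">4H", 1, 2, 3, 4)
    p2 = struct.pack(">4H", 10, 20, 30, 40)
    print(swap_endianness(p1).hex())     # 0100020003000400
    print(sum_packets([p1, p2]).hex())   # 000b00160021002c, i.e. samples 11, 22, 33, 44

In the real device these manipulations happen in hardware on the switch datapath; the sketch only shows the data transformation itself.
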
Q. Compare the cross-bar switch and shared-memory switching fabric architectures.

Ans:
Cross-bar switch vs. shared-memory switching fabric architecture (a small shared-memory queuing sketch follows the list below):

1. Cross-bar switching fabrics use space-division multiplexing (SDM) to create a switching medium with a high degree of parallelism, whereas shared-memory switching fabrics use time-division multiplexing (TDM).
2. Latency can be low for cross-bar switching, and it actually decreases as bandwidth increases.
3. To be successful, cross-bar switch fabrics must transport data derived from a range of network types, including variable-length IP packets, ATM cells, and TDM byte streams. To best manage QoS, all of these data types should be transported in optimally sized, fixed-length fabric cells. Although this implies a need for segmentation and reassembly (SAR), in practice it is a small cost to pay for the degree of QoS management it enables.
4. The shared-memory architecture was developed when the capabilities of bus- and ring-based switching architectures were exceeded. Shared-memory switches contain a global memory into or out of which each line card can write or read. This implementation fits the output-queuing model, because all packets queued in the switch are accessible to any egress network processor as if they were in the processor's own local memory.
5. A shared-memory switching fabric can efficiently support multicast.
6. With a shared-memory switching fabric, high throughput can be achieved, since there is no contention as long as enough memory is available.
7. There is no need for transmission synchronization between ingress ports when a shared-memory switching fabric is used.
8. In a cross-bar switching fabric, a separate memory is used for each input-output pair, whereas the shared-memory architecture requires only a single large memory.
9. Because of the separate memories for each port in the cross-bar architecture, at every time slot at most one cell is written to and read from each memory.
10. The depth of the memories cannot be very large in the cross-bar switching architecture; this results in flow-control problems.
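
As a rough illustration of the output-queuing behaviour described in points 4-7 above, here is a minimal Python sketch (the class, cell format, and capacity accounting are assumptions for illustration, not any vendor's design) of a shared-memory fabric in which every ingress port writes into a common pool of per-output queues, any egress port drains its own queue as if it were local memory, and multicast is handled by linking the same cell onto several output queues.

from collections import deque

class SharedMemoryFabric:
    """Toy shared-memory, output-queued switching fabric."""

    def __init__(self, num_ports, capacity_cells):
        self.queues = [deque() for _ in range(num_ports)]   # one logical queue per output port
        self.capacity = capacity_cells                      # total cell slots in the shared memory
        self.used = 0

    def ingress_write(self, cell, out_ports):
        """Any ingress port writes a cell; multicast links it onto several output queues."""
        if self.used + len(out_ports) > self.capacity:
            return False                  # shared memory full: drop or back-pressure
        for p in out_ports:
            self.queues[p].append(cell)   # cell is now visible to each chosen egress port
        self.used += len(out_ports)       # simplification: each queue reference uses one slot
        return True

    def egress_read(self, port):
        """An egress port drains its own queue as if reading local memory."""
        if not self.queues[port]:
            return None
        self.used -= 1
        return self.queues[port].popleft()

if __name__ == "__main__":
    fabric = SharedMemoryFabric(num_ports=4, capacity_cells=1024)
    fabric.ingress_write(b"cell-A", out_ports=[2])         # unicast to output 2
    fabric.ingress_write(b"cell-B", out_ports=[0, 1, 3])   # multicast to outputs 0, 1, 3
    print(fabric.egress_read(2), fabric.egress_read(1))    # b'cell-A' b'cell-B'

Note that no per-input-output memory appears anywhere: all ports share one pool, which is exactly why multicast and output queuing come almost for free, and why the only contention point is the memory bandwidth itself.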

Q. In a banyan switch, where must queues be placed to avoid internal blocking? How many stages and 2-input switching elements are required for a 2^N x 2^N banyan switch?

Ans:
Input queues are required internally to avoid internal blocking caused by contention on internal links that are shared among different input-to-output connections.

For an N x N banyan switch:

There are log2 N stages, and (N/2) log2 N two-input switching elements are required.

Here the number of inputs is 2^N.

So, for a 2^N x 2^N banyan switch:

Number of stages required = log2 (2^N) = N log2 2 = N

As 2-input switching elements (SEs) are used, each stage needs half as many SEs as there are inputs.

That is, number of SEs per stage = 2^N / 2 = 2^(N-1)

Every stage of a banyan network contains the same number of SEs, so the total number required is:

Number of SEs = (stages) x (SEs per stage) = N x 2^(N-1)

This is just the general formula (M/2) log2 M evaluated at M = 2^N inputs.
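
A short Python check of these counts, under the same assumption that the switch has M = 2^N inputs and is built from 2x2 switching elements (the function name is mine, for illustration only):

import math

def banyan_counts(num_inputs):
    """Stages and total 2x2 switching elements for an M x M banyan network (M a power of 2)."""
    assert num_inputs >= 2 and num_inputs & (num_inputs - 1) == 0, "M must be a power of 2"
    stages = int(math.log2(num_inputs))     # log2(M) stages, one per destination-address bit
    ses_per_stage = num_inputs // 2         # each 2-input SE serves two of the M lines
    return stages, stages * ses_per_stage   # total SEs = (M/2) * log2(M)

if __name__ == "__main__":
    for n in (3, 4, 5):                     # switch size 2^N x 2^N
        m = 2 ** n
        stages, total = banyan_counts(m)
        print(f"2^{n} x 2^{n} (= {m} x {m}): {stages} stages, {total} SEs (= N * 2^(N-1))")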
