
Modules:

Network module: Client-server computing, or client-server networking, is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters (clients). Clients and servers often operate over a computer network on separate hardware. A server machine is a high-performance host running one or more server programs which share its resources with clients. A client, by contrast, does not share any of its resources; instead, clients initiate communication sessions with servers, which await (listen for) incoming requests.
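As a minimal illustration of this request/response pattern, the sketch below starts a one-shot TCP echo server that listens for a request, and a client that initiates the session. It is an illustrative example only; the function names (`run_echo_server`, `echo_client`) are our own and not part of the project code.

```python
import socket
import threading

def run_echo_server(host="127.0.0.1", port=0):
    """Start a one-shot echo server in a background thread; return its port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0: let the OS pick a free port
    srv.listen(1)                   # server awaits (listens for) requests
    bound_port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()      # block until a client initiates a session
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)      # share the server's "resource": an echo
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return bound_port

def echo_client(port, message, host="127.0.0.1"):
    """Client side: initiate the communication session and send a request."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message)
        return sock.recv(1024)
```

A usage example: `echo_client(run_echo_server(), b"hello")` returns the echoed bytes.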

Multicast flow module: Multicast addressing is a network technology for delivering information to a group of destinations simultaneously, using the most efficient strategy to deliver the messages over each link of the network only once, creating copies only where the links to the multiple destinations split. The word "multicast" typically refers to IP multicast, which is often employed for streaming media and Internet television applications. In IP multicast, the multicast concept is implemented at the IP routing level, where routers build optimal distribution paths (a spanning tree) in real time for datagrams sent to a multicast destination address. At the Data Link Layer, multicast describes one-to-many distribution such as Ethernet multicast addressing, Asynchronous Transfer Mode (ATM) point-to-multipoint virtual circuits, or InfiniBand multicast.
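A small sketch of the sender side of IP multicast using Python's standard socket API (the helper names are our own, and this is an assumption-laden illustration, not the project's implementation). IPv4 multicast group addresses fall in the 224.0.0.0/4 range, and the `IP_MULTICAST_TTL` option bounds how many router hops the copies may traverse:

```python
import ipaddress
import socket
import struct

def is_ip_multicast(addr: str) -> bool:
    """True if `addr` is an IPv4/IPv6 multicast group address
    (IPv4 multicast occupies 224.0.0.0/4)."""
    return ipaddress.ip_address(addr).is_multicast

def make_multicast_sender(ttl: int = 1) -> socket.socket:
    """UDP socket configured for sending to a multicast group.
    The TTL limits how far routers propagate copies of each datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock
```

With such a socket, `sock.sendto(payload, ("224.0.0.1", 5007))` would hand one datagram to the network; the routers, not the sender, create the per-link copies.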

Packet Scheduling

This packet scheduling policy is simple to implement and yields good performance in the common case where node schedules are known and information about node availability is accurate. A potential drawback is that a node crash (or other failure event) can lead to a number of wasted RTSs to the failed node. Summed across channels, this number may exceed the limit of 7 retransmission attempts allowed for a single channel in the IEEE 802.11 standard.
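The retransmission arithmetic above can be captured in a toy model (our own simplification, not the project's code): each channel independently retries an RTS toward the crashed node up to the per-channel 802.11 limit of 7, so the total waste sums across channels.

```python
RETRY_LIMIT = 7  # IEEE 802.11 retry limit for a single channel

def wasted_rts_after_crash(channels: int, attempts_per_channel: int) -> int:
    """Total RTS frames wasted on a crashed node, summed across channels.
    Each channel gives up after at most RETRY_LIMIT attempts."""
    per_channel = min(attempts_per_channel, RETRY_LIMIT)
    return channels * per_channel
```

For example, with 3 channels each exhausting its retries, 21 RTS frames are wasted in total, triple the single-channel limit.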

Bandwidth Sharing

We compare the DRR algorithm for packet scheduling to a First-In-First-Out (FIFO) scheduler in which all the SDUs with the same next hop are enqueued into the same buffer. For this purpose we simulate a network with an increasing number of nodes, from 2 to 10, arranged in a chain topology. Each node has one traffic flow directed to the chain end-point node, carried as a constant-bit-rate stream of 1,000-byte packets emulating infinite bandwidth demand.
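For reference, here is a minimal in-memory sketch of Deficit Round Robin (the standard formulation; the class name and quantum are our choices, not values from the simulator). Each flow accumulates a quantum of credit per round and may only dequeue a packet whose size fits within its current deficit, which is what gives DRR its fair bandwidth sharing.

```python
from collections import deque

class DRRScheduler:
    """Deficit Round Robin: per-flow queues, each granted `quantum` bytes of
    credit per round; a packet is sent only when the flow's accumulated
    deficit covers its size."""

    def __init__(self, quantum: int):
        self.quantum = quantum
        self.queues = {}    # flow id -> deque of packet sizes (bytes)
        self.deficit = {}   # flow id -> unspent credit in bytes

    def enqueue(self, flow, size):
        self.queues.setdefault(flow, deque()).append(size)
        self.deficit.setdefault(flow, 0)

    def round(self):
        """Run one DRR round; return the (flow, size) pairs dequeued."""
        sent = []
        for flow, q in self.queues.items():
            if not q:
                continue
            self.deficit[flow] += self.quantum
            while q and q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size
                sent.append((flow, size))
            if not q:
                self.deficit[flow] = 0  # an emptied flow keeps no credit
        return sent
```

With a 1,000-byte quantum, a flow of two 500-byte packets and a flow of one 1,000-byte packet each drain fully in one round, each having consumed the same byte budget.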

Route Update

After the network accepts a new flow or releases an existing connection, the local available bandwidth of each node changes, and thus the widest path from a source to a destination may be different. When the change in a node's local available bandwidth exceeds a threshold (say 10 percent), the node advertises the new information to its neighbors. After receiving the new bandwidth information, the available bandwidth of a path to a destination may change. Although the nodes are static, the network state information changes very often. Therefore, our routing protocol applies the route update mechanism.
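The 10-percent trigger can be sketched as a small predicate. Note the assumptions: the function name is ours, and we read the threshold as a relative change against the last advertised value (the text says "10 percent" but does not spell out the base of comparison).

```python
def should_advertise(old_bw: float, new_bw: float,
                     threshold: float = 0.10) -> bool:
    """Advertise new state only when available bandwidth moved by more than
    `threshold` as a fraction of the previously advertised value."""
    if old_bw == 0:
        return new_bw != 0  # any bandwidth appearing from zero is news
    return abs(new_bw - old_bw) / old_bw > threshold
```

A node would call this each time a flow is accepted or released, flooding the update to neighbors only when it returns true, which damps the advertisement traffic caused by frequent small fluctuations.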

Data Flow Diagram:


Client
  → Browse file to send
  → Efficient primal-dual algorithm
  → Splitting the file
  → R1 / R2 / R3 (data sent at different speeds)
  → Collection of all the split files
  → Server
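The split and collect steps in the flow above can be sketched as follows. This is a toy byte-level split into near-equal chunks, one per router path; the project's actual splitting policy (driven by the primal-dual algorithm and per-path speeds) may divide the file differently.

```python
def split_bytes(data: bytes, parts: int) -> list:
    """Client side: split a payload into `parts` near-equal chunks,
    one chunk per router path (R1, R2, R3, ...)."""
    base, extra = divmod(len(data), parts)
    chunks, start = [], 0
    for i in range(parts):
        size = base + (1 if i < extra else 0)  # spread the remainder
        chunks.append(data[start:start + size])
        start += size
    return chunks

def reassemble(chunks) -> bytes:
    """Server side: concatenate the collected splits in order."""
    return b"".join(chunks)
```

Splitting a 10-byte payload across 3 paths yields chunks of 4, 3, and 3 bytes, and reassembling them restores the original file.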

Architecture Diagram:

File to be sent from client
  → Efficient primal-dual algorithm
  → Splitting the file into paths
  → Send split files to routers (Router 1 / Router 2 / Router 3)
  → Data sent at different speeds
  → Collection of all the split files
  → Server receiving the collected file
