
ABSTRACT

The classic min-cost flow problem studies the minimum-cost transmission of commodity flows across a flow network, where each unit of edge capacity utilized incurs a cost, and the goal is to minimize the total edge cost while sustaining a target end-to-end flow rate. We consider in this paper the min-cost transmission of information flows in a data network and focus on multicast transmissions, where common data is disseminated from a source to multiple destinations. Multicast models real-world applications such as media streaming or the dissemination of popular files. The cost of a communication link is an abstraction of real-world costs, including latency, power consumption, and monetary charges. Conventionally, most network protocols assume that the entities participating in network activities will always behave as instructed. In practice, however, most network entities try to maximize their own benefit instead of altruistically contributing to the network by following the prescribed protocols; such behavior is known as selfishness. New protocols should therefore be designed for the non-cooperative network, which is composed of selfish entities. In this paper, we specifically show how to design strategyproof multicast protocols for non-cooperative networks such that these selfish entities follow the protocols out of their own interest. Assuming that a group of receivers is willing to pay to receive the multicast service, we give a general framework to decide whether it is possible, and if so how, to transform an existing multicast protocol into a strategyproof multicast protocol. We then show how the payments to the relay entities are shared fairly among all receivers so as to encourage collaboration among receivers.

CHAPTER 1 INTRODUCTION

1.1 Introduction:
The classic min-cost flow problem studies the minimum-cost transmission of commodity flows across a flow network, where each unit of edge capacity utilized incurs a cost, and the goal is to minimize the total edge cost while sustaining a target end-to-end flow rate. We consider in this paper the min-cost transmission of information flows in a data network and focus on multicast transmissions, where common data is disseminated from a source to multiple destinations. Multicast models real-world applications such as media streaming or the dissemination of popular files. The cost of a communication link is an abstraction of real-world costs, including latency, power consumption, and monetary charges. While both commodity flows and information flows need to conform to the network topology and respect link capacity limits, information flows are unique in that they are replicable and encodable. Replication and encoding are in general necessary to achieve the full capacity of a data network, and such coding operations, applied at potentially any node in the network, are referred to as network coding. Traditional models of multicast are usually based on Steiner trees, in which either maximizing the multicast rate or minimizing the multicast cost is computationally intractable. Recent research in network coding reveals a dramatically different structure of multicast: in a directed network, a multicast rate d is feasible if and only if (iff) it is feasible as an independent unicast from the sender to each receiver. Based on this result, efficient optimization algorithms for both multicast rate and multicast cost have been successfully designed in the cooperative environment by exploiting the underlying network flow structure.
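The feasibility characterization above reduces multicast rate checking to independent max-flow computations. The following is a minimal sketch (the butterfly topology and all names below are illustrative assumptions, not taken from this paper): with network coding, multicast rate 2 is feasible here because the max-flow from the source to each receiver is 2, even though every edge has unit capacity.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap maps u -> {v: capacity}."""
    flow = defaultdict(lambda: defaultdict(int))
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in set(cap[u]) | set(flow[u]):
                if cap[u].get(v, 0) - flow[u][v] > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # Trace the path back, find the bottleneck, and augment.
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u].get(v, 0) - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug
        total += aug

# The classic butterfly topology (hypothetical example), unit capacities.
cap = defaultdict(dict)
for u, v in [("s", "a"), ("s", "b"), ("a", "t1"), ("b", "t2"),
             ("a", "c"), ("b", "c"), ("c", "d"), ("d", "t1"), ("d", "t2")]:
    cap[u][v] = 1

# Rate 2 is feasible to each receiver independently, hence feasible as a
# multicast with network coding; no pair of edge-disjoint trees achieves it.
rates = [max_flow(cap, "s", t) for t in ("t1", "t2")]
```

Each receiver's max-flow is computed independently, exactly mirroring the "feasible iff feasible as an independent unicast to each receiver" condition.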

1.2 Motivation:

We consider in this paper a non-cooperative environment where flows selfishly minimize their own cost by routing themselves through the cheapest available paths. Such selfish behavior is a well-studied phenomenon in game theory, and the stable state of such selfish routing games is characterized as a Nash Equilibrium, where no flow has an incentive to deviate from its current route alone, assuming the rest of the flows stick to their routes. It is known that such Nash solutions can lead to very poor performance in general. Therefore, it is necessary to impose calculated economic regulations in the network such that the individual decisions of selfish flows jointly lead to a socially desirable operating state of the network. Previous research has considered the enforcement of optimal multicommodity flows. While that work also employs economic measures to regulate the routing of flows among potential paths, it is unique and vital in the multicast problem to encourage flow cost sharing. It is well known that cost sharing can be achieved by combining individual unicast paths into multicast trees. However, two problems are evident along the traditional multicast tree direction. First, the cost may not really be minimum, since a different multicast flow with lower cost may be found by also exploiting the encodable property of information flows. Second, since optimal routing based on multicast trees is NP-hard to compute, it is unlikely that it would be exactly enforced by any efficiently computable regulation scheme in general network topologies. The best result prior to this work is a bicriteria analysis by Bhadra et al., which shows that there always exist cost sharing schemes to enforce a 2-approximate multicast flow.

1.3 Problem Definition:

In this paper, we study the enforcement of optimal multicast flows in general network topologies through proper edge cost sharing and other economic measures. We first consider the simple case where edges do not have capacity limits and show that the well-known Shapley value method cannot enforce optimal multicast flows. Instead, we formulate the min-cost multicast problem as a pair of primal and dual linear programs, based on the aforementioned multicast feasibility result with network coding due to Ahlswede et al., and propose to allocate edge costs based on the shadow prices of the flow merging constraints. We show that a Nash Equilibrium exists and that any optimal multicast flow has a corresponding cost allocation which makes it a Nash flow. The flow-cost pair at Nash Equilibrium also achieves a balanced budget, i.e., the total charges to flows exactly equal the total cost incurred at edges across the network. For the more realistic case where edges have finite capacity limits, we show that ignoring edge capacities and taking the solution above will not enforce the optimal multicast flow. We instead propose to further establish a nonnegative tax at each edge. We show that each optimal multicast flow can be strictly enforced by a pair of edge tax and cost allocation vectors such that the solution remains stable even if the capacity limits on a subset of edges are relaxed. In cases where charging edge taxes is infeasible or undesirable, we prove that there always exists a tax return procedure, after which flows pay true edge costs only and the optimal multicast flow is still Nash. However, the optimal solution is then only weakly enforced and is sensitive to edge capacity fluctuations.
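As a concrete illustration of the cost-sharing idea discussed above, here is a minimal sketch of an equal-split rule in the spirit of Shapley-value sharing (the three-edge network and all numbers are hypothetical assumptions; this is the kind of traditional sharing the section argues against, not the paper's shadow-price scheme): each edge's cost is divided evenly among the receivers whose conceptual flows use it, so the total charges exactly balance the budget.

```python
def equal_split_shares(edge_cost, users):
    """Divide each edge's cost equally among the receivers using it."""
    shares = {}
    for e, cost in edge_cost.items():
        for r in users[e]:
            shares[r] = shares.get(r, 0.0) + cost / len(users[e])
    return shares

# Hypothetical instance: edge "su" carries both receivers' conceptual
# flows (they overlap and share its cost); the last-hop edges do not.
edge_cost = {"su": 4.0, "ut1": 1.0, "ut2": 3.0}
users = {"su": ["t1", "t2"], "ut1": ["t1"], "ut2": ["t2"]}
shares = equal_split_shares(edge_cost, users)
# Budget balance: the receivers' charges sum to the total edge cost.
```

Receiver t1 is charged 2 + 1 = 3 and t2 is charged 2 + 3 = 5. The text above shows that such fixed equal splitting, unlike shadow-price-based shares, cannot in general make the optimal multicast flow a Nash flow.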

1.4 Objective of Project:

We discuss connections of the edge taxes to the added-value concept in the Vickrey-Clarke-Groves (VCG) edge payment scheme and to strategyproof multicast. The existential proofs mentioned above rely on path-based linear programming models of the min-cost multicast problem. These LPs, while convenient for analysis purposes, are not suitable for computing the solutions in practice, since they in general contain exponentially many variables/constraints. We finally reformulate the min-cost multicast LPs to reduce their sizes and design an efficient primal-dual algorithm that simultaneously computes the edge taxes, the cost shares, and the optimal multicast flow they enforce. This is achieved by using Lagrange relaxation to remove coupling constraints among flows to distinct receivers and decomposing the entire optimization into a series of shortest path computations through the subgradient method. Our algorithm outperforms general linear programming solution methods in runtime and allows distributed implementations. The rest of the paper is organized as follows: We review previous research and discuss the network model together with preliminaries in linear programming. We then study uncapacitated networks, capacitated networks, generalize the results, and design solution algorithms.

1.5 Limitations of project:


- It considers explicitly the encodable property of information flows.
- It shows that encouraging cost sharing is critical in enforcing min-cost multicast and that traditional cost sharing fails to achieve this goal.
- It proves that shadow-price-based cost sharing may enforce optimal multicast flows; prior to this work, cost shares existed only to enforce suboptimal multicast flows.
- It also shows that it is possible to return taxes to multicast flows to obtain a tax-free solution.

1.6 Organisation of Documentation:



We formulate the min-cost multicast problem as a pair of primal and dual linear programs, based on the aforementioned multicast feasibility result with network coding due to Ahlswede et al., and propose to allocate edge costs based on the shadow prices of the flow merging constraints. We show that a Nash Equilibrium exists and that any optimal multicast flow has a corresponding cost allocation which makes it a Nash flow. The flow-cost pair at Nash Equilibrium also achieves a balanced budget, i.e., the total charges to flows exactly equal the total cost incurred at edges across the network. Previous research has considered the enforcement of optimal multicommodity flows. While that work also employs economic measures to regulate the routing of flows among potential paths, it is unique and vital in the multicast problem to encourage flow cost sharing. It is well known that cost sharing can be achieved by combining individual unicast paths into multicast trees.

CHAPTER 2 LITERATURE SURVEY

2.1 Introduction:

Multicast models real-world applications such as media streaming or the dissemination of popular files. The cost of a communication link is an abstraction of real-world costs, including latency, power consumption, and monetary charges. While both commodity flows and information flows need to conform to the network topology and respect link capacity limits, information flows are unique in that they are replicable and encodable. Replication and encoding are in general necessary to achieve the full capacity of a data network, and such coding operations, applied at potentially any node in the network, are referred to as network coding.

2.2 Existing System:


Two problems are evident along the traditional multicast tree direction. First, the cost may not really be minimum, since a different multicast flow with lower cost may be found by also exploiting the encodable property of information flows. Second, since optimal routing based on multicast trees is NP-hard to compute, it is unlikely that it would be exactly enforced by any efficiently computable regulation scheme in general network topologies. The best result prior to this work is a bicriteria analysis by Bhadra et al., which shows that there always exist cost sharing schemes to enforce a 2-approximate multicast flow.

2.3 Disadvantages of Existing System:


- It does not present efficient algorithms to compute the necessary cost shares and taxes.
- It does not address the further complications of finite link capacity bounds.
- It considers only implicitly the encodable property of information flows.
- Although encouraging cost sharing is critical in enforcing min-cost multicast, traditional cost sharing fails to achieve this goal; prior to this work, cost shares existed only to enforce suboptimal multicast flows.

2.4 Proposed System:


We consider explicitly the encodable property of information flows and adopt the conceptual flow structure of multicast routing accordingly. We show that encouraging cost sharing is critical in enforcing min-cost multicast and that traditional cost sharing fails to achieve this goal. We then prove that shadow-price-based cost sharing may enforce optimal multicast flows. Prior to this work, cost shares existed only to enforce suboptimal multicast flows. With the further complications of finite link capacity bounds, we propose to enforce the optimal multicast flow with edge taxing and cost sharing combined. We also show that it is possible to return taxes to multicast flows to obtain a tax-free solution. Finally, we present efficient algorithms to compute the necessary cost shares and taxes.

2.5 Conclusion:
Replication and encoding are in general necessary to achieve the full capacity of a data network, and such coding operations, applied at potentially any node in the network, are referred to as network coding. Our algorithm outperforms general linear programming solution methods in runtime and allows distributed implementations. The rest of the paper is organized as follows: We review previous research and discuss the network model together with preliminaries in linear programming. We then study uncapacitated networks, capacitated networks, generalize the results, and design solution algorithms.


CHAPTER 3 ANALYSIS


3.1 INTRODUCTION
The seminal work of Ahlswede et al. initiated the research on network coding and showed its necessity in achieving maximum network capacity. An important result they proved is that in a directed network, a multicast rate d is feasible iff it is feasible to each receiver independently. Koetter and Medard also derived this result in an algebraic framework. This new characterization of multicast rate feasibility (with network coding) dramatically changed the underlying structure of the multicast problem, namely, from multicast trees to a union of conceptual network flows. Consequently, breakthroughs were made in efficient multicast algorithm design in directed networks, undirected networks, and wireless ad hoc networks, assuming a cooperative environment. In this paper, we instead study how to achieve min-cost multicast when information flows are selfish. Edge pricing schemes that enforce minimum-delay multicommodity flows have been studied in the literature. Both approaches compute taxes/tolls on edges to guide the selfish routing process: one computes the taxes by solving a nonlinear complementarity problem, while the other takes a linear programming approach. Our work in this paper was partly inspired by them and is similar in that edge taxes are also considered as part of the flow regulation measures. The edge taxes we introduce allow more intuitive interpretations and can eventually be returned to the multicast flows without jeopardizing their stability.

3.2 Software Requirement Specification



3.2.1 Software Requirements:
OPERATING SYSTEM : WINDOWS XP PROFESSIONAL
FRONT END : MICROSOFT VISUAL STUDIO 2008
BACK-END : SQL SERVER 2005

3.2.2 Hardware Requirements:
PROCESSOR : Pentium IV
RAM : 32 MB
SECONDARY STORAGE : 1 MB
MOUSE : Logitech

3.3 Algorithms and flowchart


We have shown that shadow-price-based cost shares and taxes may enforce optimal multicast flows. We now proceed to discuss how these optimal solutions can be efficiently computed. First, note that the number of distinct paths between two nodes in a general graph may be exponential in the graph size. Consequently, the path-based LP has exponentially many variables (primal) or constraints (dual) and is impractical for computing purposes. In this section, we first present reformulated link-based min-cost multicast LPs with reduced, polynomial sizes. We argue that they are equivalent to the path-based LPs; in particular, a cost allocation y* or a cost-tax vector pair (y*, t*) is an optimal solution to the link-based dual LP iff it is an optimal solution to the path-based dual LP. We then discuss how the classic approach of Lagrange relaxation followed by subgradient optimization can be applied to derive effective solution algorithms in both uncapacitated and capacitated networks. These algorithms are combinatorial in nature, consisting mostly of shortest path computations, and are therefore amenable to distributed implementations.
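The shortest path computations these algorithms rely on can be performed with Dijkstra's algorithm under a receiver's dual distance vector; the following is a minimal sketch (the graph, edge identifiers, and distances below are illustrative assumptions, not from the paper):

```python
import heapq

def shortest_path_cost(adj, y, src, dst):
    """Dijkstra's algorithm: cheapest src -> dst path length, where each
    edge e is priced by the (nonnegative) dual distance y[e]."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, e in adj.get(u, []):
            nd = d + y[e]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Hypothetical 4-node network: adjacency lists carry (neighbor, edge id),
# and y plays the role of one receiver's dual distance vector y_i.
adj = {"S": [("A", "e1"), ("B", "e2")], "A": [("T", "e3")], "B": [("T", "e4")]}
y = {"e1": 1.0, "e2": 2.0, "e3": 5.0, "e4": 1.0}
cost = shortest_path_cost(adj, y, "S", "T")
```

Under these distances the S -> T shortest path goes through B at cost 3.0; nonnegativity of the dual distances is what makes Dijkstra applicable here.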

3.4 Equivalence between Path-Based and Link-Based LPs


Again, we start with the uncapacitated case. Presented below is the reformulated min-cost multicast linear program based on edge-flow variables, along with its dual. Here, u→v denotes an edge from node u to node v, and N↓(u) and N↑(u) denote the downstream and upstream neighbor sets of u in G, respectively. We use u→v instead of e to represent an edge here because it is helpful to have explicit connections between nodes and their adjacent edges. We assume that there is a conceptual link from each Ti back to S with unlimited capacity, for a succinct expression of the flow conservation constraints. The link-based LPs and the path-based LPs model the same min-cost multicast problem and are equivalent to each other. In particular, a pair of primal and dual solutions (f, y) is feasible/optimal in the link-based formulation iff it is feasible/optimal in the path-based formulation. In the primal problem, both LPs establish end-to-end network flows of rate d from S to each Ti. Flow conservation of each conceptual flow is implicit in the path-based formulation but explicit in the link-based formulation. In the dual program, both allocate the edge cost w to the yi's and use xi to compute the shortest S → Ti path under the distance vector yi. The fact that xi is the shortest path length in any optimal dual solution is less explicit in the link-based LP, but can be deduced from the following facts: the variables in p can be interpreted as the altitudes of nodes; the first dual constraint bounds the altitude difference of two neighboring nodes in the network by yi, and the second dual constraint bounds xi by the altitude difference between S and Ti.
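The explicit flow conservation constraints of the link-based formulation are easy to check programmatically; here is a minimal sketch (the edge flows below are a hypothetical rate-1 conceptual flow, not an example from the paper):

```python
from collections import defaultdict

def conserves(flow, src, dst, d, tol=1e-9):
    """Check link-based flow conservation: net outflow is d at the source,
    -d at the receiver, and 0 at every intermediate node (the conceptual
    T_i -> S return link would carry d back and balance even the
    source/sink rows in the LP)."""
    net = defaultdict(float)
    for (u, v), f in flow.items():
        net[u] += f
        net[v] -= f
    return (abs(net[src] - d) < tol and abs(net[dst] + d) < tol
            and all(abs(x) < tol
                    for n, x in net.items() if n not in (src, dst)))

# A hypothetical rate-1 conceptual flow from S to T1 split over two paths.
flow = {("S", "A"): 0.5, ("A", "T1"): 0.5, ("S", "B"): 0.5, ("B", "T1"): 0.5}
ok = conserves(flow, "S", "T1", 1.0)
```

Each per-receiver conceptual flow f_i must pass such a check independently; the actual multicast flow f is then the edge-wise maximum over the f_i.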

3.5 Conclusion:


The equivalence of the two different LP formulations in the capacitated case is similar. Since the link-based LPs have polynomial sizes, they are more practical to solve using general linear programming solution methods such as the simplex algorithm. However, past experience suggests that much better scalability can be achieved by exploiting the specific structure of the multicast problem, where possible. Our experiments in large-scale network topologies show that a well-designed subgradient algorithm may outperform both the simplex algorithm and the primal-dual interior-point algorithm by more than an order of magnitude in terms of runtime. Furthermore, general linear programming solution methods are inherently centralized, while distributed subgradient algorithm designs are often possible. We now proceed to describe tailored subgradient algorithms for computing optimal cost sharing and taxes from the link-based LPs.


CHAPTER 4 DESIGN

4.1 Introduction:


We now proceed to discuss how these optimal solutions can be efficiently computed. First, note that the number of distinct paths between two nodes in a general graph may be exponential in the graph size. Consequently, the path-based LP has exponentially many variables (primal) or constraints (dual) and is impractical for computing purposes. In this section, we first present reformulated link-based min-cost multicast LPs with reduced, polynomial sizes. In a previous work, we also applied similar techniques in computing the optimal orientation of an undirected multicast network. The goal in this paper is to compute the optimal shadow prices y* and t* instead of optimal primal solutions. This leads to subtle but important differences in dualization strategies and subgradient algorithm design. Since the algorithm design in the uncapacitated case is similar to but simpler than that in the capacitated case, we focus on the latter.

4.2 Module design:


4.2.1 Subgradient Algorithm Design:
Lagrange relaxation coupled with subgradient optimization has proven effective in designing tailored efficient algorithms for classic combinatorial problems (including the network flow problem) with side constraints. The side constraints can be relaxed so that efficient combinatorial algorithms can be applied to the remaining, smaller problem. The price associated with the relaxation is that a series of smaller problems, instead of one, needs to be solved. Lun et al. studied a similar problem of min-cost multicast in cooperative networks and presented a distributed optimization algorithm based on Lagrange relaxation. In a previous work, we also applied similar techniques in computing the optimal orientation of an undirected multicast network. The goal in this paper is to compute the optimal shadow prices y* and t* instead of optimal primal solutions. This leads to subtle but important differences in dualization strategies and subgradient algorithm design. Since the algorithm design in the uncapacitated case is similar to but simpler than that in the capacitated case, we focus on the latter. We wish to have the subgradient algorithm converge to both y* and t* simultaneously. In order to do so, we relax the two groups of constraints, fi ≤ f and f ≤ c, from the primal LP and introduce both y and t into the new objective function.

4.3 Lagrange dual:


The Lagrange duality theorem assures that the above problem has the same optimal solutions as the original LP. Due to the relaxation of the inter-flow coupling constraints, the optimization of L(y, t) is separable, i.e., given a pair of fixed dual vectors (y, t), it decomposes into independent per-receiver subproblems. During a dual update, given a fixed multicast flow f, we compute new values for y and t with the aid of two prescribed step-size sequences (one for y and one for t) and project them onto the positive orthant.
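A single dual update of this kind can be sketched as follows (a minimal sketch under assumed data: the step sizes and single-edge numbers are hypothetical, and [x]+ denotes projection onto the nonnegative orthant):

```python
def dual_update(y, t, fi, f, cap, step_y, step_t):
    """One subgradient step: move y along the violation of f_i <= f and
    t along the violation of f <= c, then project each onto [0, inf)."""
    y_new = {e: max(0.0, y[e] + step_y * (fi[e] - f[e])) for e in y}
    t_new = {e: max(0.0, t[e] + step_t * (f[e] - cap[e])) for e in t}
    return y_new, t_new

# Hypothetical single-edge state: the receiver's flow 1.0 exceeds the
# merged flow 0.4, so its cost share y rises; the edge is under its
# capacity 1.0, so the tax t decays toward zero.
y1, t1 = dual_update({"e": 0.5}, {"e": 0.2},
                     {"e": 1.0}, {"e": 0.4}, {"e": 1.0},
                     step_y=0.1, step_t=0.1)
```

With a diminishing step-size sequence, iterating such updates (with f recomputed via the per-receiver shortest path subproblems between updates) drives (y, t) toward the optimal shadow prices.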


CHAPTER 5 IMPLEMENTATION & RESULTS

5.1 Introduction:

The fundamental difference between the two models is that while the change of decision of one agent does not noticeably affect the global routing scheme in the former, it does so in the latter. In some literature, the stable system state is referred to as a Wardrop equilibrium in the former and a Nash equilibrium in the latter. It turns out that the shadow-price-based schemes are strong enough to control even selfish receivers, who can reroute a large amount of flow in one step of action. Note that the following fact is true about the LP-based routing and pricing scheme.

5.2 Explanation of key functions:


5.2.1 INTRODUCTION TO DOT NET
Microsoft .NET, which I refer to as just .NET, is a platform for developing managed software. The word managed is key here: a concept setting the .NET platform apart from many other development environments. I'll explain what the word managed means and why it is an integral capability of the .NET platform. When referring to other development environments, as in the preceding paragraph, I'm focusing on the traditional practice of compiling to an executable file that contains machine code and how that file is loaded and executed by the operating system. Figure 1.1 shows what I mean about the traditional compilation-to-execution process.

Figure 1.1 Traditional compilation.


In the traditional compilation process, the executable file is binary and can be executed by the operating system immediately. However, in the managed environment of .NET, the file produced by the compiler (the C# compiler in our case) is not an executable binary. Instead, it is an assembly, shown in Figure 1.2, which contains metadata and intermediate language code.


Figure 1.2 Managed compilation.

5.2.2 .NET Standardization:
As mentioned in the preceding paragraph, an assembly contains intermediate language and metadata rather than binary code. This intermediate language is called Microsoft Intermediate Language (MSIL), which is commonly referred to as IL. IL is a high-level, component-based assembly language. In later sections of this chapter, you learn how IL supports a common type system and multiple languages on the same platform. .NET has been standardized by both the European Computer Manufacturers Association (ECMA) and the Open Standards Institute (OSI). The standard is referred to as the Common Language Infrastructure (CLI). Similarly, the standardized term for IL is Common Intermediate Language (CIL). In addition to .NET, there are other implementations of CIL; the two best known are by Microsoft and Novell. Microsoft's implementation is an open source offering for the purposes of research and education called the Shared Source Common Language Infrastructure (SSCLI). The Novell offering is called Mono, which is also open source. The other part of an assembly is metadata, which is extra information about the code being used in the assembly. It shows the contents of an assembly.


This is a simplified version of an assembly, showing only those parts pertaining to the current discussion. Assemblies have other features that illustrate the difference between an assembly and an executable file. Specifically, the role of an assembly is to be a unit of deployment, execution, identity, and security in the managed environment. In Part X, Chapters 43 and 44 explain more about the role of the assembly in deployment, identity, and security. The fact that an assembly contains metadata and IL, instead of only binary code, has a significant advantage: allowing execution in a managed environment. The next section explains how the CLR uses the features of an assembly to manage code during execution.

5.2.3 Common Language Runtime (CLR) :



As introduced in the preceding section, C# applications are compiled to IL, which is executed by the CLR. This section highlights several features of the CLR. You'll also see how the CLR manages your application during execution. In many traditional execution environments of the past, programmers needed to perform a lot of the low-level work (plumbing) that applications needed to support. For example, you had to build custom security systems, implement error handling, and manage memory. The degree to which these services were supported on different language platforms varied considerably. Visual Basic (VB) programmers had built-in memory management and an error-handling system, but they didn't always have easy access to all the features of COM+, which opened up more sophisticated security and transaction processing. C++ programmers have full access to COM+ and exception handling, but memory management is a totally manual process. In a later section, you learn how .NET supports multiple languages, but knowing just a little about a couple of popular languages and a couple of the many challenges they must overcome can help you understand why the CLR is such a benefit for a C# developer. The CLR solves many problems of the past by offering a feature-rich set of plumbing services that all languages can use. The features described in the next section further highlight the value of the CLR.

5.2.4 The CLR Execution Process:
Beyond just executing code, parts of the execution process directly affect your application design and how a program behaves at runtime. Many of these subjects are handled throughout this book, but this section highlights specific additional items you should know about. From the time you or another process selects a .NET application for execution, the CLR executes a special process to run your application.


Figure 1.4 The CLR execution process (summarized).

As illustrated in Figure 1.4, Windows (the OS) will be running at Start; the CLR won't begin execution until Windows starts it. When an application executes, the OS inspects the file to see whether it has a special header indicating that it is a .NET application. If not, Windows continues to run the application. If the application is for .NET, Windows starts up the CLR and passes the application to the CLR for execution. The CLR loads the executable assembly, finds the entry point, and begins its execution process. The executable assembly could reference other assemblies, such as dynamic link libraries (DLLs), so the CLR will load those. However, this is on an as-needed basis: an assembly won't be loaded until the CLR needs access to that assembly's code. It's possible that the code in some assemblies won't be executed, so there isn't a need to use resources unless absolutely necessary. As mentioned previously, the C# compiler produces IL as part of an assembly's output. To execute the code, the CLR must translate the IL to binary code that the operating system understands. This is the responsibility of the JIT compiler. As its name implies, the JIT compiler only compiles code before the first time that it executes. After the IL is compiled to machine code by the JIT compiler, the CLR holds the compiled code in a working set. The next time that the code must execute, the CLR checks its working set and runs the code directly if it is already compiled. It is possible that the working set could be paged out of memory during program execution, for various reasons necessary for the efficient operation of the CLR on the particular machine it is running on. If more memory is available than the size of the working set, the CLR can hold on to the code. Additionally, in the case of Web applications where scalability is an issue, the working set can be swapped out due to periodic recycling or heavier load on the server, resulting in additional load time for subsequent requests. The JIT compiler operates at the method level. If you aren't familiar with the term method, it is essentially the same as a function or procedure in other languages. Therefore, when the CLR begins execution, the JIT compiler compiles the entry point (the Main method in C#). Each subsequent method is JIT compiled just before execution. If a method being JIT compiled contains calls to methods in another assembly, the CLR loads that assembly (if not already loaded). This process of checking the working set, JIT compilation, assembly loading, and execution continues until the program ends.
What the CLR execution process means to you comes in the form of application design and understanding performance characteristics. In the case of assembly loading, you have some control over when certain code is loaded. For example, if you have code that is seldom used or necessary only in specialized cases, you could separate it into its own DLL, which will keep the CLR from loading it when not in use. Similarly, separating seldom-executed logic into a separate method ensures the code isn't JIT compiled until it's called. Another detail you might be concerned with is application performance. As described earlier, code is loaded and JIT compiled. Another DLL adds load time, which may or may not make a difference to you, but it is certainly something to be aware of. By the way, after code has been JIT compiled, it executes as fast as any other binary code in memory. One of the CLR features listed in Table 1.1 is .NET Framework Class Library (FCL) support. The next section goes beyond FCL support for the CLR and gives an overview of what else the FCL includes.

5.2.5 The .NET Framework Class Library (FCL)



.NET has an extensive library, offering literally thousands of reusable types. Organized into namespaces, the FCL contains code supporting all the .NET technologies, such as Windows Forms, Windows Presentation Foundation, ASP.NET, ADO.NET, Windows Workflow, and Windows Communication Foundation. In addition, the FCL has numerous cross-language technologies, including file I/O, networking, text management, and diagnostics. As mentioned earlier, the FCL has CLR support in the areas of built-in types, exception handling, security, and threading. It shows some common FCL libraries. Types are used to define the meaning of variables in your code. They could be built-in types such as int, double, or string. You can also have custom types such as Customer, Employee, or Bank Account. Each type has optional data/behavior associated with it.Much of this book is dedicated to explaining the use of types, whether built-in, custom, or those belonging to the .NET FCL. Chapter 4, Understanding Reference Types and Value Types, includes a more in-depth discussion on how C# supports the .NET type system. T he namespaces in are a sampling from the many available in the .NET Framework. Theyre representative of the types they contain. For example, you can find Windows Presentation Foundation (WPF) libraries in the System. Windows namespace, Windows Communication Foundation (WCF) is in the System. Service Model namespace, and Language Integrated Query (LINQ) types can be found in the System. Linq namespace. Another aspect is that I included only two levels in the namespace hierarchy, System.*. In fact, there are multiple namespace levels, depending on which technology you view. For example, if you want to write code using the Windows Workflow (WF) runtime, you look in the System. Workflow. Runtime namespace. Generally, you can find the more common types at the higher namespace levels.One of the benefits you should remember about the FCL is the amount of code reuse it offers. 
As you read through this book, you'll see many examples of how the FCL forms the basis for code you can write. For example, you learn how to create your own exception object in Chapter 13, Naming and Organizing Types with Namespaces, which requires that you use the Exception types from the FCL. Even if you encounter situations that don't require your use of FCL code, you can still use it. An example of when you would want to reuse FCL code is in Chapter 17, Parameterizing Types with Generics and Writing Iterators, where you learn how to use existing generic collection classes. The FCL was built and intended for reuse, and you can often be much more productive by using FCL types rather than building your own from scratch. .NET is composed of a CLR and the .NET FCL, and it supports multiple languages. The CLR offers several features that free you from the low-level plumbing work required in other environments. The FCL is a large library of code that supports additional technologies such as Windows Presentation Foundation, Windows Communication Foundation, Windows Workflow, ASP.NET, and many more. The FCL also contains much code that you can reuse in your own applications. Through its support of IL, a CTS, and a CLS, many languages target the .NET platform. Therefore, you can write a reusable library with C# code that can be consumed by code written in other programming languages. Remember that understanding the .NET platform, which includes the CLR, the FCL, and multiple-language support, has implications for the way you design and write your code. Throughout this book, you'll encounter many instances where the concepts in this chapter lay the foundation for the tasks you need to accomplish. You might want to refer back to this chapter for an occasional refresher. This chapter has been purposefully kept as short as possible, covering only the platform issues most essential to building C# applications. If you're like me, you'll be eager to jump into some code. The next chapter does that by introducing you to the essential syntax of the C# programming language.

5.3 .NET Framework Class Library


The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. The .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. We can use the .NET Framework to develop the following types of applications and services: console applications, scripted or hosted applications, Windows GUI applications (Windows Forms), ASP.NET applications, XML Web services, and Windows services.

For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development.

5.3.1 ADO.NET
ADO.NET provides consistent access to data sources such as Microsoft SQL Server, as well as data sources exposed via OLE DB and XML. Data-sharing consumer applications can use ADO.NET to connect to these data sources and retrieve, manipulate, and update data.
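A minimal sketch of this connect-and-retrieve pattern, assuming a local SQL Server instance; the connection string and the Customers table are hypothetical:

```csharp
using System.Data.SqlClient;

class AdoNetConnectDemo
{
    static void Main()
    {
        // Connection string and table name are placeholders for illustration.
        string connStr = "Server=.;Database=Shop;Integrated Security=true";
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn))
        {
            conn.Open();
            int count = (int)cmd.ExecuteScalar(); // single-value retrieval
            System.Console.WriteLine(count);
        }
    }
}
```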

5.3.2 ADO.NET Architecture


Data processing has traditionally relied primarily on a connection-based, two-tier model. As data processing increasingly uses multi-tier architectures, programmers are switching to a disconnected approach to provide better scalability for their applications.

5.3.3 ADO.NET Components


The ADO.NET components have been designed to factor data access from data manipulation. There are two central components of ADO.NET that accomplish this: the DataSet, and the .NET data provider, which is a set of components including the Connection, Command, DataReader, and DataAdapter objects.

5.3.4 ADO.NET Dataset


The ADO.NET DataSet is the core component of the disconnected architecture of ADO.NET. The DataSet is explicitly designed for data access independent of any data source. The DataSet contains a collection of one or more DataTable objects made up of rows and columns of data, as well as primary key, foreign key, constraint, and relation information about the data in the DataTable objects. In the Crystal Report Designer, you first select the data source that your report will reference. You can use more than one data source in a report. Crystal Reports can automatically link the tables, or you can specify how you want the tables linked. Database tables are linked so records from one database match related records from another.
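Because the DataSet is independent of any data source, it can even be built entirely in memory. A minimal sketch (the table and column names are illustrative, not from any real schema):

```csharp
using System;
using System.Data;

class DataSetDemo
{
    static void Main()
    {
        var ds = new DataSet("ShopData");

        // A DataTable holds rows and columns plus key/constraint metadata.
        var customers = new DataTable("Customers");
        customers.Columns.Add("Id", typeof(int));
        customers.Columns.Add("Name", typeof(string));
        customers.PrimaryKey = new[] { customers.Columns["Id"] };
        customers.Rows.Add(1, "Ada");
        customers.Rows.Add(2, "Grace");

        ds.Tables.Add(customers);
        Console.WriteLine(ds.Tables["Customers"].Rows.Count); // prints 2
    }
}
```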

5.3.5 .NET Data Providers


A .NET data provider is used for connecting to a database, executing commands, and retrieving results. Those results are either processed directly, or placed in an ADO.NET DataSet in order to be exposed to the user in an ad hoc manner, combined with data from multiple sources, or remoted between tiers. The .NET data provider is designed to be lightweight, creating a minimal layer between the data source and your code, increasing performance without sacrificing functionality. The four core objects that make up a .NET data provider are:

Connection: Establishes a connection to a specific data source.
Command: Executes a command against a data source. Exposes Parameters and can execute within the scope of a Transaction from a Connection.
DataReader: Reads a forward-only, read-only stream of data from a data source.
DataAdapter: Populates a DataSet and resolves updates with the data source.
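The typical interplay of the Connection, Command, and DataReader objects can be sketched as follows; the connection string and query are placeholders:

```csharp
using System;
using System.Data.SqlClient;

class ProviderObjectsDemo
{
    static void Main()
    {
        // Hypothetical connection string for a local SQL Server database.
        string connStr = "Server=.;Database=Shop;Integrated Security=true";
        using (var conn = new SqlConnection(connStr))              // Connection
        using (var cmd = new SqlCommand(
            "SELECT Name FROM Customers", conn))                   // Command
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())     // DataReader
            {
                // Forward-only, read-only traversal of the result stream;
                // a DataAdapter would instead fill a DataSet in one call.
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}
```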

The .NET Framework includes the SQL Server .NET Data Provider and the OLE DB .NET Data Provider.

5.4 C#.NET:
C# is a multi-paradigm programming language encompassing imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft within the .NET initiative and later approved as a standard by Ecma (ECMA-334) and ISO (ISO/IEC 23270). C# is one of the programming languages designed for the Common Language Infrastructure. C# is intended to be a simple, modern, general-purpose, object-oriented programming language. Its development team is led by Anders Hejlsberg. The most recent version is C# 4.0, which was released on April 12, 2010.

5.4.1 Features:
By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value-types implemented by the CLI framework. However, the language specification does not state the code generation requirements of the compiler: that is, it does not state that a C# compiler must target a Common Language Runtime, or generate Common Intermediate Language (CIL), or generate any other specific format. Theoretically, a C# compiler could generate machine code like traditional compilers of C++ or Fortran.

Some notable distinguishing features of C# are:

- There are no global variables or functions. All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions.
- Local variables cannot shadow variables of the enclosing block, unlike C and C++. Variable shadowing is often considered confusing by C++ texts.
- C# supports a strict Boolean datatype, bool. Statements that take conditions, such as while and if, require an expression of a type that implements the true operator, such as the Boolean type. While C++ also has a Boolean type, it can be freely converted to and from integers, and expressions such as if(a) require only that a is convertible to bool, allowing a to be an int or a pointer. C# disallows this "integer meaning true or false" approach on the grounds that forcing programmers to use expressions that return exactly bool can prevent certain types of common programming mistakes in C or C++, such as if (a = b) (use of assignment = instead of equality ==).
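The difference can be sketched in a few lines; the commented-out statements are legal in C++ but rejected by the C# compiler:

```csharp
class BoolDemo
{
    static void Main()
    {
        int a = 1, b = 2;
        // if (a) { }      // compile error: cannot implicitly convert int to bool
        // if (a = b) { }  // compile error: a = b has type int, not bool
        if (a == b)        // OK: the == operator yields exactly bool
            System.Console.WriteLine("equal");
        else
            System.Console.WriteLine("not equal"); // prints "not equal"
    }
}
```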

- In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value; it is impossible to obtain a reference to a "dead" object (one which has been garbage collected), or to a random block of memory. An unsafe pointer can point to an instance of a value type, array, string, or a block of memory allocated on a stack. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but it cannot dereference them.

- Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses the problem of memory leaks by freeing the programmer of responsibility for releasing memory which is no longer needed.

- In addition to the try...catch construct to handle exceptions, C# has a try...finally construct to guarantee execution of the code in the finally block.
- Multiple inheritance is not supported, although a class can implement any number of interfaces. This was a design decision by the language's lead architect to avoid complication and to simplify architectural requirements throughout the CLI.
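A minimal sketch of try...finally: the finally block runs whether or not an exception is thrown, which makes it suitable for cleanup such as releasing a file handle:

```csharp
using System;

class FinallyDemo
{
    static void Main()
    {
        try
        {
            Console.WriteLine("working");
            throw new InvalidOperationException("boom");
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine("caught: " + ex.Message);
        }
        finally
        {
            // Guaranteed to execute even when the try block throws.
            Console.WriteLine("cleanup");
        }
    }
}
```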

- C# is more type safe than C++. The only implicit conversions by default are those which are considered safe, such as widening of integers. This is enforced at compile time, during JIT, and, in some cases, at runtime. There are no implicit conversions between Booleans and integers, nor between enumeration members and integers (except for literal 0, which can be implicitly converted to any enumerated type). Any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy constructors and conversion operators, which are both implicit by default. Starting with version 4.0, C# supports a "dynamic" data type that enforces type checking at runtime only.
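A sketch of user-defined conversions on a hypothetical Meters type, with one operator marked implicit (considered safe) and one marked explicit (requires a cast):

```csharp
using System;

struct Meters
{
    public double Value;

    // Considered safe, so it may be marked implicit.
    public static implicit operator Meters(double v)
    {
        return new Meters { Value = v };
    }

    // Marked explicit, so callers must write a cast.
    public static explicit operator double(Meters m)
    {
        return m.Value;
    }
}

class ConversionDemo
{
    static void Main()
    {
        Meters m = 2.5;        // implicit conversion from double
        double d = (double)m;  // explicit conversion requires a cast
        Console.WriteLine(d);  // prints 2.5
    }
}
```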

- Enumeration members are placed in their own scope.
- C# provides properties as syntactic sugar for a common pattern in which a pair of methods, accessor (getter) and mutator (setter), encapsulate operations on a single attribute of a class.
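The getter/setter pair collapses into a single property member; a sketch with a hypothetical Account class:

```csharp
using System;

class Account
{
    private decimal balance;

    // Property: syntactic sugar over an accessor/mutator method pair.
    public decimal Balance
    {
        get { return balance; }
        set { balance = value; }
    }
}

class PropertyDemo
{
    static void Main()
    {
        var acct = new Account();
        acct.Balance = 100m;             // invokes the setter
        Console.WriteLine(acct.Balance); // invokes the getter; prints 100
    }
}
```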

- Full type reflection and discovery is available.
- C# currently (as of version 4.0) has 77 reserved words.
- Checked exceptions are not present in C# (in contrast to Java). This has been a conscious decision based on the issues of scalability and versionability.

5.5 Common Type System (CTS):


C# has a unified type system, called the Common Type System (CTS). A unified type system implies that all types, including primitives such as integers, are subclasses of the System.Object class. For example, every type inherits a ToString() method. For performance reasons, primitive types (and value types in general) are internally allocated on the stack.
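A short sketch of the unified type system: even the primitive int inherits System.Object members and can be boxed into an object reference:

```csharp
using System;

class CtsDemo
{
    static void Main()
    {
        int n = 42;
        string s = n.ToString(); // member inherited via System.Object
        object o = n;            // boxing: the value type becomes an object

        Console.WriteLine(s);        // prints 42
        Console.WriteLine(o is int); // prints True
    }
}
```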

5.5.1 Categories of datatypes


CTS separates data types into two categories:

1. Value types
2. Reference types

Value types are plain aggregations of data. Instances of value types do not have referential identity nor referential comparison semantics: equality and inequality comparisons for value types compare the actual data values within the instances, unless the corresponding operators are overloaded. Value types are derived from System.ValueType, always have a default value, and can always be created and copied. Some other limitations on value types are that they cannot derive from each other (but can implement interfaces) and cannot have an explicit default (parameterless) constructor. Examples of value types are some primitive types, such as int (a signed 32-bit integer), float (a 32-bit IEEE floating-point number), char (a 16-bit Unicode code unit), and System.DateTime (identifies a specific point in time with nanosecond precision). Other examples are enum (enumerations) and struct (user-defined structures). In contrast, reference types have the notion of referential identity: each instance of a reference type is inherently distinct from every other instance, even if the data within both instances is the same. This is reflected in default equality and inequality comparisons for reference types, which test for referential rather than structural equality, unless the corresponding operators are overloaded (as is the case for System.String). In general, it is not always possible to create an instance of a reference type, nor to copy an existing instance, nor to perform a value comparison on two existing instances, though specific reference types can provide such services by exposing a public constructor or implementing a corresponding interface (such as ICloneable or IComparable).

Examples of reference types are object (the ultimate base class for all other C# classes), System.String (a string of Unicode characters), and System.Array (a base class for all C# arrays).
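The copy semantics of the two categories can be sketched with a hypothetical point type declared once as a struct and once as a class:

```csharp
using System;

struct PointValue { public int X; } // value type: assignment copies the data
class PointRef    { public int X; } // reference type: assignment copies the reference

class TypeCategoryDemo
{
    static void Main()
    {
        var a = new PointValue { X = 1 };
        var b = a;              // full copy of the data
        b.X = 9;
        Console.WriteLine(a.X); // prints 1: a is unaffected

        var c = new PointRef { X = 1 };
        var d = c;              // copy of the reference only
        d.X = 9;
        Console.WriteLine(c.X); // prints 9: c and d share one instance
    }
}
```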

5.5.2 Crystal Reports


Crystal Reports for Visual Studio .NET includes a number of Experts that, with step-by-step instructions, help you create several kinds of reports, prepare graphs, set up the overall report format, link related databases, select records and groups for inclusion in the report, and gather the files you need to include when you distribute the report.

5.6 Database Connection

5.6.1 Report Objects


Some report objects that you can add to your report and format according to your needs include:

- Database fields
- Formula fields
- Parameter fields
- Group Name fields
- Running Total fields
- Summary fields
- Charts
- Subreports

5.7 Report Sections
The Crystal Report Designer is divided into report sections, such as section headers, footers, and details. For a new report, the Report Designer is divided into five report sections. You can choose to create additional sections, or to hide certain sections.

5.7.1 Report Header

Objects placed in the Report Header section print once, at the beginning of the report, and generally contain the report title and other introductory information.

5.7.2 Page Header

Objects placed in the Page Header section print at the beginning of each new page, and generally contain field titles and other heading information.

5.7.3 Details

Objects placed in the Details section print with each new record and contain the data for the body of the report. When the report is run, the Details section is reprinted for each record.

5.7.4 Report Footer

Objects placed in the Report Footer section print once at the end of the report, and contain information such as grand totals.

5.7.5 Page Footer

Objects placed in the Page Footer section print at the bottom of each page, and contain the page number and any other footer information.

5.7.6 Additional Report Sections

If a group, summary, or subtotal is added to the report, the program creates two additional sections: the Group Header and the Group Footer.

5.7.7 Group Header

Objects placed in the Group Header section print at the beginning of each new group.

5.7.8 Group Footer

Objects placed in the Group Footer section print at the end of each group, and generally hold the summary value, if any; this section can be used to display charts or cross-tabs.


CHAPTER 6 TESTING AND VALIDATION

6.1 INTRODUCTION:
Testing performs a very critical role for quality assurance and for ensuring the reliability of software. The results of testing are also used later on during maintenance. Using the detailed design and the process specifications, testing is done to uncover errors within the boundary of each module. All modules must be successful in the unit test before integration testing begins. In this project, each service can be thought of as a module; there are several modules, such as Login, HWAdmin, MasterAdmin, Normal User, and PManager. Each module has been tested by giving different sets of inputs, and the inputs are validated when accepted from the user. Software units in a system are the modules and routines that are assembled and integrated to form a specific function, so unit testing is first done on the modules, independent of one another, to locate errors.

6.2 Testing
Testing is the process of detecting errors. It performs a very critical role for quality assurance and for ensuring the reliability of software. The results of testing are also used later on during maintenance.

6.2.1 Psychology of Testing
The aim of testing is often to demonstrate that a program works by showing that it has no errors. However, the basic purpose of the testing phase is to detect the errors that may be present in the program. Hence one should not start testing with the intent of showing that a program works; the intent should be to show that a program doesn't work. Testing is thus the process of executing a program with the intent of finding errors.

6.3 Testing Objectives

The main objective of testing is to uncover a host of errors systematically, with minimum effort and time. Stating formally, we can say:

- Testing is a process of executing a program with the intent of finding an error.
- A good test case is one that has a high probability of finding an error, if it exists.
- A successful test is one that uncovers an as-yet-undiscovered error.

If testing uncovers no errors, we can only conclude either that the tests were inadequate to detect possibly present errors, or that the software more or less conforms to the required quality and reliability standards.

6.4 Levels of Testing

In order to uncover the errors present in different phases, we have the concept of levels of testing. Each level of testing validates a corresponding phase of development:

- Client needs: Acceptance Testing
- Requirements: System Testing
- Design: Integration Testing
- Code: Unit Testing

6.4.1 System Testing
The philosophy behind testing is to find errors. Test cases are devised with this in mind. A strategy employed for system testing is code testing.

6.4.2 Code Testing:

This strategy examines the logic of the program. To follow this method, we developed some test data that resulted in executing every instruction in the program and module, i.e., every path is tested. Systems are not designed as entire units, nor are they tested as single systems. To ensure that the coding is perfect, two types of testing are performed on all systems.

6.5 Types of Testing


- Unit Testing
- Link Testing

6.5.1 Unit Testing:

Unit testing focuses verification effort on the smallest unit of software, i.e., the module. Using the detailed design and the process specifications, testing is done to uncover errors within the boundary of the module. All modules must be successful in the unit test before integration testing begins. In this project, each service can be thought of as a module; there are several modules, such as Login, HWAdmin, MasterAdmin, Normal User, and PManager. Each module has been tested by giving different sets of inputs, both during development and after each module was finished, so that each module works without any error. The inputs are validated when accepted from the user. Software units in a system are the modules and routines that are assembled and integrated to form a specific function. Unit testing is first done on the modules, independent of one another, to locate errors; through this, errors resulting from interaction between modules are initially avoided.

6.5.2 Link Testing:

Link testing does not test the software itself but rather the integration of each module into the system. The primary concern is the compatibility of each module.


The programmer tests cases where modules are designed with different parameters: length, type, etc.

6.5.3 Integration Testing:


After unit testing we have to perform integration testing. The goal here is to see if the modules can be integrated properly, the emphasis being on testing the interfaces between modules. This testing activity can be considered as testing the design, and hence the emphasis is on testing module interactions. In this project, integrating all the modules forms the main system. When integrating all the modules, I have checked whether the integration affects the working of any of the services by giving different combinations of inputs with which the services ran perfectly before integration.

6.5.4 System Testing:


Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see if the software meets its requirements. The entire system has been tested against the requirements of the project, and it is checked whether all requirements of the project have been satisfied or not.

6.5.5 Acceptance Testing:


Acceptance testing is performed with realistic data of the client to demonstrate that the software is working satisfactorily. Testing here is focused on the external behavior of the system; the internal logic of the program is not emphasized. In this project, Network Management of Database System, I have collected some data and tested whether the project is working correctly or not. Test cases should be selected so that the largest number of attributes of an equivalence class is exercised at once. The testing phase is an important part of software development. It is the process of finding errors and missing operations and also a complete verification to determine whether the objectives are met and the user requirements are satisfied.

6.6 White Box Testing


This is a unit testing method where a unit is taken at a time and tested thoroughly at the statement level to find the maximum possible errors. I tested stepwise every piece of code, taking care that every statement in the code is executed at least once. White box testing is also called glass box testing. I have generated a list of test cases, with sample data, which is used to check all possible combinations of execution paths through the code at every module level.
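A minimal sketch of statement-level (white box) testing on a hypothetical helper: the two inputs together force every statement, including both branches of the conditional, to execute at least once:

```csharp
using System.Diagnostics;

class WhiteBoxDemo
{
    // Hypothetical unit under test.
    static int Max(int x, int y)
    {
        if (x > y) return x; // branch 1
        return y;            // branch 2
    }

    static void Main()
    {
        Debug.Assert(Max(2, 1) == 2); // exercises branch 1
        Debug.Assert(Max(1, 2) == 2); // exercises branch 2
        System.Console.WriteLine("all statements covered");
    }
}
```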

6.7 Black Box Testing


This testing method considers a module as a single unit and checks the unit at its interface and in its communication with other modules, rather than getting into details at the statement level. Here the module is treated as a black box that takes some input and generates output. The output for a given set of input combinations is forwarded to other modules.

6.8 Criteria Satisfied by Test Cases


- Test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing.
- Test cases that tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.


CHAPTER 7 SCREEN SHOTS


CONCLUSION

We studied in this paper how to regulate selfish multicast flows to achieve minimum total cost. We consider explicitly the encodable property of information flows and adopt the conceptual flow structure of multicast routing accordingly. We show that encouraging cost sharing is critical in enforcing min-cost multicast and that traditional cost sharing fails to achieve this goal. We then prove that shadow-price-based cost sharing may enforce optimal multicast flows. Prior to this work, cost shares existed only to enforce suboptimal multicast flows. With the further complication of finite link capacity bounds, we propose to enforce the optimal multicast flow with edge taxing and cost sharing combined. We also show that it is possible to return taxes to multicast flows to obtain a tax-free solution. Finally, we present efficient algorithms to compute the necessary cost shares and taxes. The seminal work of Ahlswede et al. [3] initiated the research on network coding and showed its necessity in achieving maximum network capacity. An important result they proved is that in a directed network, a multicast rate d is feasible iff it is feasible to each receiver independently. Koetter and Medard also derived this result in an algebraic framework. This new characterization of multicast rate feasibility (with network coding) dramatically changed the underlying structure of the multicast problem, namely, from multicast trees to a union of conceptual network flows. Consequently, breakthroughs were made in efficient multicast algorithm design in directed networks, undirected networks, and wireless ad hoc networks, assuming a cooperative environment. In this paper, we instead study how to achieve min-cost multicast when information flows are selfish. Edge pricing schemes that enforce minimum-delay multicommodity flows have been studied in prior work. Both compute taxes/tolls on edges to guide the selfish routing process. One computes the taxes by solving a nonlinear complementarity problem, while the other takes the linear programming approach.
Our work in this paper was partly inspired by them and is similar in that edge taxes are also considered as part of the flow regulation measures. The edge taxes we introduce allow more intuitive interpretations and can eventually be returned to the multicast flows without jeopardizing their stability. Another important difference is that we need to also design appropriate cost sharing schemes that induce the desired path sharing among multicast flows. Feigenbaum et al. studied the sharing of multicast cost in the context of designing strategyproof multicast for selfish receivers with private utility information. Along the multicast tree direction, they show that optimal welfare is NP-hard to approximate within any constant ratio in general networks. They consider acyclic network topologies instead, where the multicast cost is submodular, and both the Shapley value [16] and marginal cost sharing lead to strategyproofness. Lower bounds on communication overhead are derived for these two sharing schemes. In this paper, we perform multicast cost sharing with network coding explicitly considered. As a result, optimal sharing schemes can be computed efficiently in general network topologies. (In the accompanying example flow vector, each edge flow has rate 0.5; the total cost is 0.5 x 9 = 4.5 and is optimal, while a minimum multicast tree has cost 5.) Bhadra et al. also studied the multicast of selfish information flows. They use monomial cost functions to approximate edge capacity limits and differentiable relaxations to approximate the max function in edge flow computation. After these approximations, the min-cost multicast problem is modeled as a nonlinear optimization problem with a differentiable objective function and constraints, and cost shares for enforcing the optimal multicast flow are derived based on Karush-Kuhn-Tucker (KKT) optimality conditions for nonlinear programs. This result applies to power-law edge cost functions only. For the linear cost model and other convex cost functions, they employ a bicriteria approach introduced by Roughgarden and Tardos and design cost shares that enforce a suboptimal multicast flow, which has a cost lower than that of any optimal multicast flow achieving twice the throughput.
In this paper, we use exact models for edge capacity limits and edge flow computation and devise economic mechanisms that enforce optimal multicast flows. Wang et al. studied min-cost multicast in networks with selfish edges, where the edge cost is private information, and each edge reports a cost value of its own choice to maximize its utility. They show that the celebrated VCG payment scheme fails to induce strategyproofness if optimal multicast routing, which is NP-hard without network coding, is approximated with schemes such as pruning the minimum spanning tree or the link-weighted Steiner tree. We establish underlying connections between the edge taxes introduced in this paper and the added-value concept in VCG payments. We argue that paying each edge its declared cost plus such edge taxes according to VCG successfully induces strategyproofness at selfish edges. Therefore, the failure of VCG identified there is due to the fact that approximate multicast algorithms are employed, rather than being inherent in the multicast problem itself. Wang et al. further presented a general framework for deciding whether an existing multicast protocol can be transformed into a truthful one and, if possible, how the payments to relay agents should be fairly shared among the receivers. In the routing and cost sharing of multicast toward a group of potential receivers, each with private utility information for being serviced in the multicast group, the key toward group strategyproofness is to have a cross-monotonic cost sharing scheme. Informally, within cross-monotonic cost sharing, a user's payment can only be smaller when serviced in a larger set. Li [25] studied multicast schemes that target optimal flow routing, cross-monotonic cost sharing, and budget balance. It was shown that these three conditions cannot be satisfied simultaneously in general. Complementing positive and negative results are given for both directed and undirected networks on the achievable approximate budget balance ratio for an optimal and cross-monotonic multicast scheme. The above research is complementary to this work, in that the former studies incentives for strategic receivers to cooperate and the latter studies incentives for selfish traffic to exhibit a socially desirable behavior.


REFERENCES:
[1] R.K. Ahuja, T.L. Magnanti, and J.B. Orlin, Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993.
[2] D.P. Bertsekas, Network Optimization: Continuous and Discrete Models. Athena Scientific, 1998.
[3] R. Ahlswede, N. Cai, S.R. Li, and R.W. Yeung, "Network Information Flow," IEEE Trans. Information Theory, vol. 46, no. 4, pp. 1204-1216, July 2000.
[4] R. Koetter and M. Medard, "An Algebraic Approach to Network Coding," IEEE/ACM Trans. Networking, vol. 11, no. 5, pp. 782-795, Oct. 2003.
[5] K. Jain, M. Mahdian, and M.R. Salavatipour, "Packing Steiner Trees," Proc. 10th Ann. ACM-SIAM Symp. Discrete Algorithms (SODA), 2003.
[6] Z. Li, B. Li, and L.C. Lau, "On Achieving Optimal Multicast Throughput in Undirected Networks," IEEE Trans. Information Theory, vol. 52, no. 6, pp. 2467-2485, June 2006.
[7] D.S. Lun, N. Ratnakar, R. Koetter, M. Medard, E. Ahmed, and H. Lee, "Achieving Minimum-Cost Multicast: A Decentralized Approach Based on Network Coding," Proc. IEEE INFOCOM, 2005.
[8] Z. Li and B. Li, "Efficient and Distributed Computation of Maximum Multicast Rates," Proc. IEEE INFOCOM, 2005.
[9] E. Koutsoupias and C. Papadimitriou, "Worst-Case Equilibria," Lecture Notes in Computer Science, vol. 1563, pp. 404-413, 1999.
[10] T. Roughgarden and E. Tardos, "How Bad is Selfish Routing?" J. ACM, vol. 49, no. 2, pp. 236-259, 2002.
[11] H. Yang and H. Huang, "The Multi-Class, Multi-Criteria Traffic Network Equilibrium and Systems Optimum Problem," Transportation Research, Part B, vol. 38, no. 1, pp. 1-15, Jan. 2004.
[12] L. Fleischer, K. Jain, and M. Mahdian, "Tolls for Heterogeneous Selfish Users in Multicommodity Networks and Generalized Congestion Games," Proc. 45th IEEE Symp. Foundations of Computer Science (FOCS), 2004.
[13] G. Karakostas and S.G. Kolliopoulos, "Edge Pricing of Multicommodity Networks for Heterogeneous Selfish Users," Proc. 45th IEEE Symp. Foundations of Computer Science (FOCS), 2004.
[14] J. Feigenbaum, C. Papadimitriou, and S. Shenker, "Sharing the Cost of Multicast Transmissions," J. Computer and System Sciences, vol. 63, pp. 21-41, 2001.
[15] S. Bhadra, S. Shakkottai, and P. Gupta, "Min-Cost Selfish Multicast with Network Coding," IEEE Trans. Information Theory, vol. 52, no. 11, pp. 5077-5087, Nov. 2006.
[16] L.S. Shapley, "A Value for n-Person Games," Contributions to the Theory of Games. Princeton Univ. Press, pp. 31-40, 1953.
[17] W. Wang, X. Li, and Y. Wang, "Truthful Multicast in Selfish Wireless Networks," Proc. ACM MobiCom, 2004.
[18] Z. Li, B. Li, D. Jiang, and L.C. Lau, "On Achieving Optimal Throughput with Network Coding," Proc. IEEE INFOCOM, 2005.
[19] Y. Wu, P.A. Chou, Q. Zhang, K. Jain, W. Zhu, and S.Y. Kung, "Network Planning in Wireless Ad Hoc Networks: A Cross-Layer Approach," IEEE J. Selected Areas in Comm., vol. 23, no. 1, pp. 136-150, Jan. 2005.
[20] J. Yuan, Z. Li, W. Yu, and B. Li, "A Cross-Layer Optimization Framework for Multihop Multicast in Wireless Mesh Networks," IEEE J. Selected Areas in Comm., special issue on multi-hop wireless mesh networks, 2006.
[21] H.W. Kuhn and A.W. Tucker, "Nonlinear Programming," Proc. Second Berkeley Symp. Math. Statistics and Probability, 1951.
[22] W. Wang, X.-Y. Li, Y. Wang, and Z. Sun, "Designing Multicast Protocols for Non-Cooperative Networks," IEEE J. Selected Areas in Comm., vol. 26, no. 7, pp. 1238-1249, Sept. 2008.
[23] K. Jain and V.V. Vazirani, "Applications of Approximation Algorithms to Cooperative Games," Proc. 33rd ACM Symp. Theory of Computing (STOC), 2001.
[24] N. Immorlica, D. Karger, E. Nikolova, and R. Sami, "First-Price Path Auctions," Proc. Sixth ACM Conf. Electronic Commerce (EC), 2005.
[25] Z. Li, "Cross-Monotonic Multicast," Proc. IEEE INFOCOM, 2008.
[26] S. Jaggi, P. Sanders, P.A. Chou, M. Effros, S. Egner, K. Jain, and L. Tolhuizen, "Polynomial Time Algorithms for Multicast Network Code Construction," IEEE Trans. Information Theory, vol. 51, no. 6, pp. 1973-1982, June 2005.
[27] G. Dantzig, Linear Programming and Extensions. Princeton Univ. Press, 1998.
[28] M.J. Osborne and A. Rubinstein, A Course in Game Theory. MIT Press, 1994.
[29] E. Anshelevich, A. Dasgupta, J. Kleinberg, E. Tardos, T. Wexler, and T. Roughgarden, "The Price of Stability for Network Design with Fair Cost Allocation," Proc. 45th IEEE Symp. Foundations of Computer Science (FOCS), 2004.
[30] C. Papadimitriou, "Algorithms, Games, and the Internet," Proc. 33rd ACM Symp. Theory of Computing (STOC), 2001.
[31] E. Altman and L. Wynter, "Equilibrium, Games, and Pricing in Transportation and Telecommunication Networks," Networks and Spatial Economics, vol. 4, no. 1, pp. 7-21, Mar. 2004.
[32] H.D. Sherali and G. Choi, "Recovery of Primal Solutions When Using Subgradient Optimization Methods to Solve Lagrangian Duals of Linear Programs," Operations Research Letters, vol. 19, 1996.
