
NIT KURUKSHETRA

WINDOWS DNA

SEMINAR TOPIC:
Windows DNA (D - Distributed, N - interNet, A - Applications Architecture)

Submitted to: Mr. Manish Kumar
Submitted by: Snehal M. Bhasakhetre, 107487, IT-4

CONTENTS
1. Introduction
   1.1. What is Windows DNA?
   1.2. Windows DNA Distributed System
   1.3. Windows DNA Programming Language
2. History
   2.1. Who owns DNA, who invented it?
   2.2. DNA for Windows: Revision History
   2.3. Historical stages
   2.4. The Tier Solutions: The Holy Grail
3. Design Objectives
   3.1. Autonomy
   3.2. Reliability
   3.3. Availability
   3.4. Scalability
   3.5. Interoperability
4. Architecture
   4.1. Logical Three-Tier Model
   4.2. COM (Component Object Model)
   4.3. Physical Three-Tier Model
   4.4. Stateful vs. Stateless Components
5. ADO.NET and Windows DNA
6. Nature and Needs
   6.1. Internet Application Design
      6.1.1. Internet Application Requirements
      6.1.2. Platform Capability Requirements
      6.1.3. Application Design Goals
      6.1.4. Other Goals
   6.2. Network Application Characteristics
      6.2.1. Communications
      6.2.2. Concurrent Usage
      6.2.3. State Management
      6.2.4. Latency
      6.2.5. Rigorous Encapsulation
7. Using Windows DNA
8. Internet Information Server & ASP
9. Business Layer Technology
10. Data Access with ADO and OLEDB
11. Latest Release
12. Applications
13. Future Prospects
   13.1. W-DNA (yet to be released)
   13.2. ExcelDna
14. Conclusion
References

ABSTRACT

Windows DNA is the Windows Distributed interNet Applications Architecture, a marketing name for a collection of Microsoft technologies that enable the Windows platform and the Internet to work together. Some of the principal technologies comprising DNA include ActiveX, Dynamic HTML (DHTML) and COM. It is a way of designing applications with growth, deployment, and load taken as major considerations. Microsoft Windows DNA is based on a distributed system architecture. Windows DNA is a framework that describes how to develop multi-tier, high-performance, scalable distributed applications over the network. The heart of DNA is an integrated programming model based on COM+, and ADO.NET components are applicable in all layers of Windows DNA through XML schemas and DataSets.

Version 2.2.0 (March 1999) was the first public release. Version 2.5.1 (August 2004) is the newest registered version; it is fully Windows XP / Windows 2000 compatible. The history of application development can roughly be broken into three stages: monolithic applications (single encapsulated executables), client/server applications (data-driven models), and collaborative systems (three-tier and n-tier applications). The Microsoft implementation of the collaborative approach was dubbed DNA. The technologies used in this area cover lightweight presentation, the rendering engine, fast implementation (between rendering and business components), component communication, business components, fast implementation (business layer to data layer), data access and translation, and data storage.

Windows DNA has five major objectives: autonomy, reliability, availability, scalability, and interoperability. Windows DNA is a set of services, tools, and three application layers: the presentation layer, the business logic layer, and the database layer. The Windows DNA architecture provides two models for designing and building large, complex distributed systems: the logical three-tier model and the physical three-tier model. The logical three-tier model is used to define and design the components of the system, while the physical three-tier model is used to specify the placement of these components within the distributed system.

The presentation layer of the .NET Framework is much more powerful and flexible than the presentation layer of Windows DNA: Windows Forms, XML, Web Forms, and ASP.NET provide tools and services for writing applications that are more robust, faster, and better designed than those built with the versions used in Windows DNA. The goals Windows DNA aims to achieve are faster time to market, low cost, interoperability, and adaptability, and several characteristics follow from these goals: communications, concurrent usage, state management, latency, and rigorous encapsulation.

Windows must be installed as "Microsoft or 100% Compatible" when prompted for the network configuration. Running the Microsoft Windows operating system version 3.1 under the DNA Networks software may produce some problems unless you have updated to version 3.38 (Rev. H) of DNA Networks proprietary boards or to version 5.02 (Rev. H) of NetBIOS boards. The business layer (or middleware layer) incorporates the logic for encapsulating the business process in Windows DNA. Primary data source access is provided via ActiveX Data Objects (ADO), a variant of a COM object that supports particular implementation details; ADO abstracts the details of the implementation of data access and presents a consistent data model.

Windows 2000 is the latest version of Windows DNA. Windows Distributed interNet Applications Architecture (Windows DNA) 2000 is a comprehensive, integrated platform for building and
operating state-of-the-art distributed Web applications as well as the next wave of Internet-based Web services. The Windows DNA 2000 family of solutions includes the following: Microsoft Windows 2000, Microsoft Commerce Server 4.0, Microsoft BizTalk Server, Microsoft "Babylon" Integration Server, Microsoft AppCenter, Microsoft SQL Server "Shiloh", and Microsoft Visual Studio. W-DNA is a Workflow Repository Server for Windows Workflow Foundation which has not yet been released but could be the future of Windows DNA. ExcelDna is an open-source project to integrate .NET into Excel, which may also find use in the future. Windows DNA offers a solid foundation for n-tier applications, and Windows developers can build on what they know from the DNA world, applying much of it to the new .NET environment. Windows DNA is not in much use nowadays, but given the chance it could in future be a better option for dealing with the problems of other operating systems.

1. INTRODUCTION

Windows DNA stands for Windows Distributed interNet Applications Architecture, a marketing name for a collection of Microsoft technologies that enable the Windows platform and the Internet to work together. Some of the principal technologies comprising DNA include ActiveX, Dynamic HTML (DHTML), and COM.

ActiveX is a loosely defined set of technologies developed by Microsoft for sharing information among different applications. ActiveX is an outgrowth of two other Microsoft technologies called OLE (Object Linking and Embedding) and COM (Component Object Model), and the name applies to a whole set of COM-based technologies.

Dynamic HTML refers to HTML extensions that enable a Web page to react to user input without sending requests to the Web server. There are many technologies for producing dynamic HTML, including CGI scripts, Server-Side Includes (SSI), cookies, Java, JavaScript, and ActiveX.

The Component Object Model (COM) is a binary-interface standard for software componentry introduced by Microsoft in 1993. It is used to enable interprocess communication and dynamic object creation in a large range of programming languages.
1.1. What is Windows DNA?

DNA (Distributed interNet Applications Architecture) is an abstract methodology: a way of designing applications with growth, deployment, and load taken as major considerations. More specifically, DNA is a software application engineering design pattern. It is a solution to a set of common problems that are described in a generic, sufficiently abstracted manner as to enable the application of the pattern in a wide variety of situations. In other words, DNA is a metaphor that can help us understand and solve complex problems in simple terms. At the same time, it is a very tangible set of technologies: specifically, a set of (traditionally Microsoft) technologies that together help developers implement the DNA architecture within their applications. From a web developer's perspective, it is through ASP that the set of technologies implementing a DNA solution are glued together and distributed over the web.

1.2. Windows DNA Distributed System


Microsoft Windows DNA is based on a distributed system architecture. Distributed systems contain components on more than one computer. An Internet-based system is a typical example of a distributed system, as it contains an application running on a client computer (usually a browser-based application) and one or more applications running on a Web server. One of the primary goals of distributed systems is to be highly scalable. A highly scalable system can easily expand from thousands of users to tens of thousands of users. Over the last two decades, we've witnessed a steady evolution from Windows systems that can handle a few hundred users, such as a department-size application, to systems that can handle tens of thousands of users, such as an Internet application. Windows DNA is highly scalable.

Because distributed systems contain a variety of components located on several computers, these systems are usually extremely complex. Without a framework such as the one provided by Windows DNA, designing and building such a system would be impossible. The Windows DNA architecture provides two models for designing and building large, complex distributed systems: the logical three-tier model and the physical three-tier model. The logical three-tier model is used to define and design the components of the system. The physical three-tier model is used to specify the placement of these components within the distributed system.

1.3. Windows DNA Programming Language


Windows DNA (Distributed interNet Applications) is a framework that describes how to develop multi-tier, high-performance, scalable distributed applications over the network. The heart of DNA is an integrated programming model based on COM+. ADO.NET components are applicable in all layers of Windows DNA through XML schemas and DataSets.
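
As a concrete illustration of that last point, here is a minimal sketch (not from the original seminar; the table, column, and file names are hypothetical) of how an ADO.NET DataSet carries data between tiers as XML plus an XML schema:

    using System.Data;

    // Hedged sketch: a DataSet travels between DNA tiers as XML.
    class XmlDataSetDemo
    {
        static void Main()
        {
            DataSet orders = new DataSet("Orders");
            DataTable t = orders.Tables.Add("Order");
            t.Columns.Add("Id", typeof(int));
            t.Columns.Add("Customer", typeof(string));
            t.Rows.Add(1, "Contoso");

            // The schema (XSD) and the data (XML) can cross tier
            // boundaries without a live database connection.
            orders.WriteXmlSchema("orders.xsd");
            orders.WriteXml("orders.xml");

            DataSet copy = new DataSet();
            copy.ReadXmlSchema("orders.xsd");
            copy.ReadXml("orders.xml");
        }
    }

Because the schema and data files fully describe the data, any tier (or any platform that can parse XML) can consume them without holding a database connection open.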

2. HISTORY
2.1. Who owns DNA, who invented it?
Although DNA is associated quite closely with Microsoft, no one actually 'owns' the architecture. If you use it, there are no payments, no penalties, and definitely no right or wrong ways to implement it. It's a high level idea; there are no coding practices, special notations or even restrictions on the technologies to use.

The DNA approach to building applications is a pseudo technical-cum-marketing idea generated by Microsoft that is both an application development architecture and a medium for selling application developers more products. It is an approach that delivers in stages, often expensive stages. The reality of the model is that it fits neatly into a range of Microsoft technologies, which is, in part, by design. Thus, it is common that if you adopt the architectural model, you'll also adopt quite a few non-free Microsoft products in the process. That is because the Microsoft products all work in support of the architecture.

2.2. DNA for Windows: Revision History


Version 2.2.0 (March 1999): First public release.

Version 2.2.1 (October 1999): Fixed a bug causing multiple alignments to sometimes fail; other minor improvements.

Version 2.4.0 (November 2000), registered version:
* Bug fix: a bug causing contig assembly to crash the program on some systems was fixed.
* Bug fix: restriction digest and translation printouts sometimes omitted data between pages. This has been fixed.
* New feature: added support for ABI trace files containing Mac resource fork data.
* New feature: opens, edits and saves Staden chromatogram (SCF) files.
* New feature: trace file sequence data can be trimmed to include only the sequence you wish.
* New feature: quick location of ambiguous bases in trace files.
* New feature: chromatogram files as well as sequence files can be opened from the command line.
* New feature: multiple sequence files can be opened at once.
* New feature: if sequences have been altered and not saved, the program gives a warning before closing.

Version 2.5.1 (August 2004), registered version:
* New feature: much improved trace file viewer.
* New feature: support for different translation tables.
* New feature: more accurate protein mass calculation.
* New feature: fully Windows XP / Windows 2000 compatible (i.e. contig assembly and multiple alignments no longer crash under these systems).
* Numerous bug fixes!

2.3. Historical stages


If we were to review the history of application development on the Personal Computer it would encompass a morass of information. But if we were to focus on information related only to the nature of application development, we can begin to see an evolution that stretches from the early days of software engineering into our near future. This history can roughly be broken into three stages:
1. Monolithic applications (single encapsulated executables)
2. Client/server applications (data-driven models)
3. Collaborative systems (three-tier and n-tier applications)

2.3.1. Monolithic, single scale applications

The era of the monolithic PC application continued and enhanced the rich heritage inherited from the age of mainframes. The transformation from mainframes to personal computers represented a fundamental shift that moved computer processing power from the hands of the few (mainframe operators) to the hands of the many (anyone with a desktop box). Along with this transformation came the natural exuberance and freedom of being able to create, sell and share programming solutions to hitherto unknown problems. Thousands of software packages were released for personal computers in the early days.

However, though these early applications were exciting and perhaps more powerful than anything that had come before, the lack of any collaborative systems meant that most applications were built and designed for single users. There were no real email systems, no multi-user databases, and most prominently, documents were stored locally or on floppy disks. From an architectural perspective, such applications were fairly primitive: they integrated all three application layers into one maintenance-heavy, hard-to-share, and unscalable executable.

2.3.2. Client Server Model Applications

As technology advanced, connecting machines and sharing data became an important goal and a pressing reality for application developers. Simple networks formed, and new applications and application architectures arose. Since networking and resource sharing introduced larger and more complex problems into the development environment, and because the inherent flaws in monolithic applications were becoming clear, a new approach that captured the nature of these new applications was devised. And because the "applications" had grown, so had the stages of an application from an abstract viewpoint. Client-server applications became all the rage, and the monolithic applications started to fade into the past like some forgotten dinosaurs.

In the client-server model, applications were broken apart, distributing processing between client computers and server computers. As client-server applications became feasible, so too did the layering of the technology become more important. In the client-server model, the three layers of an application could more easily
be isolated. In fact, such isolation became even more crucial as scalability, distribution and maintenance became even more complex.

Another factor in the separation of the layers came from the data. As sharing data became essential to faster and wider information distribution, the network systems drove applications to evolve into data sharers. Rather than store data locally, in a client-server application data would be stored in a central repository where it could be accessed by multiple clients who wished to "share" it. The benefit of this architecture was that it gave access to large numbers of users so that they could store and retrieve important data in a consistent and stable manner, generally from a "fully loaded" application on the client machine. Order processing, accounts, internal systems, email and database applications became the norm in the client-server era.

The traditional client-server applications enabled, and encouraged, developers to build feature-rich solutions that integrated key technologies in a single point of access. Typically a developer would focus on delivery of the graphical user interface and on storing data in repositories that enabled users to share data. Technologies like ODBC (Open Database Connectivity), Visual Basic, Visual C and MFC (Microsoft Foundation Classes) helped developers build applications in short timescales that could access and share data.

2.3.3. Problems with the Client Server Model

However, though the client-server architecture was vastly superior to the monolithic architecture, the approach forced developers to locate the three main development layers in just two locations (in the client or in the server). Typically, presentation and business rules logic would always remain within the purview of the client-side application. For example, decisions on locking, updating, deleting and information presentation were encoded in the client application directly. This enabled the client application to 'short-cut' the UI to produce clever designs. The data, however, became the responsibility of the database or messaging subsystems. As such, it was accessed either via a direct API or through thin layers of 'enabling technology' such as ODBC. In some instances this provided limited flexibility, enabling designers to substitute like database for like without creating large amounts of work.

The presentation layer would exist as a single layer within the client-side application, but the business and database layers could exist in the database or messaging server. This enabled the client-side application to respond quickly to cosmetic changes without having to test all business logic cases, and at the same time allowed the business rules to change without requiring mass rollouts at every such occasion. The client-server model presented a major paradigm shift when compared to monolithic applications. However, it remained far from ideal.

In one scenario, the presentation layer and business layer are constantly diametrically opposed. Fewer updates to the GUI engender familiarity with an application (and make it simple to deploy) but restrict the ability to change the business rules; frequent changes to the business rules cause deployment and testing headaches. The alternative approach is to locate the business rules within the database/messaging layers. However, this tactic means that it is hard to uncouple the business logic from the data and messaging structures.

The client-server architectures also suffered under constant maintenance strains due to proprietary standards and technologies and, most importantly, a lack of scalability. Client/server applications performed best in a corporate environment where groups of users require a focal application which integrates tightly with key technologies such as database engines, email servers, workflow and document management systems. As a result, most client-server applications were little more than file and database sharing applications. In fact, early client/server solutions were often oblivious to other clients, behaving in a singular fashion.

While the client-server approach did much to create good solutions, there followed a painful growth period during which applications were built, deployed and maintained. It was during this "growth" that the issues with client/server development became apparent. In the client-server model all clients have access to nearly all the services. This forces the services, like email and database servers, to ensure that they can maintain large numbers of permanently connected users. Every connection requires processing, memory resources and extra network traffic, even when the clients are busy doing nothing of significance against the servers. Load balancing tools were scarce and the underlying technology proved problematic to maintain.

From a development perspective there are also issues of maintaining software that often needs updating when new versions of an API arrive, especially when the new version fixes existing issues. When this situation occurs, the process of deployment becomes a major issue. How best do you deploy to hundreds (maybe thousands) of clients? How can the developers ensure that there are no conflicts with other installations? How do you deploy all the supporting APIs and connectivity libraries? Certainly, there are solutions to most of these problems; however, one specific problem remains constant for all multi-client solutions: scalability. As users are added to a solution, it becomes more and more difficult to maintain a consistently good level of service without diverting significant resources.

Technology moves and grows, but occasionally a conceptual shift occurs (a paradigm shift). A shift of this magnitude arrived in the form of the Internet. With the arrival of the 'net', many of the client-server problems could be solved. If not solved, they were at least ameliorated.

2.3.4. Collaborative Business Systems

In collaborative business systems, application developers could much more closely approach the three application layers. Additionally, each layer could be supplemented with 'proxy' layers that provide a cushion for processing, migration and scaling issues. An example would be a generic data source access wrapper that enables the database to be substituted without large and costly redevelopment. With the popularization of the internet/intranet/extranet solution, a gamut of problems could be addressed in a consistent manner, while at the same time the holy grail of "layer separation" moved closer. In breaking the approach into layers it becomes possible to focus on solving domain-specific issues. At the same time, design flexibility could be introduced. Here's why:

* Using a browser to access services assures a familiar, lightweight interface.
* The browser and HTML provide a standard implementation and deployment approach. Applications aren't really deployed; more borrowed.
* No client-side issues (ignoring cross-browser issues for now).
* Highly tailored solutions. Users are not required to "use" everything.
* Applications can be seamlessly divided across logical and physical locations.
* Disconnected or Just-In-Time (JIT) operation.
* Protection for both resources and clients. Fewer concurrent accesses, less resource usage.
* Platform independent... nearly.

For example, a simple global telephone directory becomes an application that can be accessed from virtually any computer: the architecture doesn't place limits on the type of hardware, operating system, or whatever.

2.4. The Tier Solutions: The Holy Grail


The Microsoft implementation of the collaborative approach was dubbed DNA and was realized using the following set of technologies:

* A thin presentation client that understands standard information sent to it (the Web browser).
* Encapsulation of the business logic in a manner that remains flexible (COM).
* Data stored in a consistent and, hopefully, interchangeable format (XML).

We can see how the model and the technologies could now work together. First, a lightweight front end could perform the presentation and client interaction in a familiar and robust environment. The HTML browser solves this: information is displayed and submitted in a standardized fashion, governed by an independent and open community. Secondly, business rules could be captured in a middle tier where they could change but remain scalable. To capture logic, one must build an encapsulated function that can be accessed via the presentation layer and talk to the data layer. COM, which we will discuss in greater detail in just a bit, was designed with such ideas in mind: code re-use, maintenance and encapsulation.

Finally, we could store the data within its own layer to protect the other layers from proprietary databases and meta stores. While this may initially seem like a utopian list of features, the technology had provided the solutions. A number of technologies are needed to make the collaborative or "tiered" approach work. In fact, DNA and the DNA technologies hoped to provide an 'n'-tier solution (this will become apparent soon).

The technologies needed, the Microsoft products that provide them, and alternative products:

* Lightweight Presentation. Microsoft: Internet Explorer, HTML, XML. Others include Netscape.
* Rendering Engine (Web Engine). Microsoft: Internet Information Server. For large solutions, see other variants of IIS by Microsoft.
* Fast Implementation (between Rendering and Business Components). Microsoft: Active Server Pages, scripting (VBScript, JScript and even PerlScript). Enables IIS to produce flexible HTML driven by business components.
* Component Communication. Microsoft: COM (Component Object Model), MSMQ, COM+. Competitors include CORBA, RMI and PerlRPC.
* Business Components. Microsoft: COM objects (with or without MTS support).
* Fast Implementation (Business Layer to Data Layer). Microsoft: ActiveX Data Objects (more COM), ODBC. Uses lightweight wrappers to access a generic data abstraction layer.
* Data Access and Translation. Microsoft: OLEDB, Universal Data Access and ADSI.
* Data Storage. Microsoft: SQL Server, Exchange, Active Directory and NTFS. Obviously other databases, like Oracle and Informix, are included.

There are many other technologies used in other areas, for example access to legacy systems, third-party integrations, and even exchangeable data formats. However, we are primarily focused on what DNA is and where ASP sits. By now, you will have realized that COM, the Component Object Model, plays a large part in the Microsoft implementation of multi-tiered solutions. This is no coincidence, as COM was designed to solve the problems of communication, implementation and open access within the layers. The COM approach centers on a
language-independent framework to build components and services. It is obviously successful: if we look at our table we can see that a large number of the technologies either use or are built from COM. Also note that nearly all of the technologies presented here for DNA are based mainly on the Windows NT technology.

3. DESIGN OBJECTIVES
Windows DNA had five major design objectives. These are common themes that run through the architecture, guiding the design decisions at each step. Without these, the architecture would be incoherent, and would not address the challenges of network applications. These design objectives are:

* Autonomy
* Reliability
* Availability
* Scalability
* Interoperability

3.1. Autonomy
Autonomy is the extension of encapsulation to include control of critical resources. When a program uses encapsulation, as with object-oriented programming, each object protects the integrity of its data against intentional or accidental corruption by some other module. Unfortunately, client-server computing violates encapsulation when it comes to resources. A server, no matter how well written and robust, can only support a finite number of connections; system memory, threads, and other operating system objects limit it. Database programmers see this expressed in the number of concurrent connections an RDBMS (Relational Database Management System) can support. Conservation of system resources was a sensitive topic in departmental-level client-server applications. It becomes an essential issue as we build mission-critical systems at enterprise scale. The sheer multiplicity of components and applications in a distributed system puts pressure on any one server's resources. The dynamic nature of resource usage makes tracking and managing resources hard.

Extending encapsulation to resources suggests that the server, which is critically interested in conserving its resources, is best positioned to manage those resources. It is also best suited to track their utilization. The server is the only entity that has a global view of the demands on it. Consequently, it will try to balance those demands. It must also manage secure access to its resources; access is a resource no less important than a physical entity like memory or database connections.

The addition of the application logic tier between presentation tier clients and data servers clouds the issue of resource management. The data server knows the demands presented to it by the components and servers on the application logic tier. It cannot know, however, what kind of load is coming into that tier from the clients on the presentation tier (or from other servers within the application logic tier, for that matter). Thus, application logic tier components must practice autonomy as well. Just as a component is a client of data servers, it is also a server to presentation clients. It must pool and reuse critical resources it has obtained from other servers, as well as managing the resources that originate within it. A component managing access to a database, for example, will likely acquire a pool of connections based on its expected demand, then share them across clients that request its services; see the sketch at the end of this section.

Servers in any tier, then, practice autonomy in some form. They practice native autonomy on resources they originate, and they act as a sort of proxy for servers they encapsulate. In the latter case, they must not only share resources, but also ensure that access control is respected. In the case of a database, for example, the component acting as a proxy knows the identity (and by implication the access permissions) of the requesting client. Because the component acts as a proxy, using shared resources, the data server no longer has direct access to this information. Instead, the database typically establishes access based on roles. The proxy component is responsible for mapping a particular client's identity to a role within the system as it grants access to the resources it serves from its pool.
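
The following is a minimal sketch of that pooling idea (it is not part of the original text, and the Connection class is a stand-in for any scarce server resource, not a real API):

    using System;
    using System.Collections.Generic;

    // Hedged sketch of autonomy: a middle-tier component owns a fixed
    // pool of expensive resources and shares them among many clients
    // instead of opening one resource per client.
    class Connection { /* placeholder for a pooled server resource */ }

    class ConnectionPool
    {
        private readonly Stack<Connection> free = new Stack<Connection>();
        private readonly object gate = new object();

        public ConnectionPool(int size)
        {
            for (int i = 0; i < size; i++) free.Push(new Connection());
        }

        public Connection Acquire()
        {
            lock (gate)
            {
                // A real pool would wait or grow; failing fast keeps the sketch short.
                if (free.Count == 0) throw new InvalidOperationException("Pool exhausted");
                return free.Pop();
            }
        }

        public void Release(Connection c)
        {
            lock (gate) { free.Push(c); }
        }
    }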

3.2. Reliability
Computers are reliable, aren't they? Surely if we submit the same set of inputs to the same software we'll obtain the same results every time. This is certainly true from the vantage point of application programmers (hardware engineers might have a few quibbles). As soon as we open an application to the network, however, the challenge of maintaining the reliability of the overall system requires our involvement. If you have ever programmed a database application, you have encountered the classic bank account example: when an application transfers funds from one account to another, it must debit the losing account and credit the gaining account. A system failure between the two operations must not corrupt the integrity of the bank. Client-server applications only had to concern themselves with the failure of the server or the loss of a single connection to ensure the integrity of the application. A three-tier distributed system introduces many more points of failure. The challenges we enumerated earlier for network applications, especially connectivity, resource collection, and availability, offer the possibility of failures that are harder to unravel.

Relational databases offer transactions within their bounds; a network application using multiple data stores requires a distributed transactional capability. If the system experiences a loss of connectivity, the transaction service must detect this and roll back the transaction. Distributed transactions are difficult to implement, but are critically important to the success of a network application. Their importance is such that distributed transactions must be viewed as critical resources requiring the protection and management we discussed under the goal of autonomy.
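
As an illustration, here is a hedged sketch of the bank transfer example written as a COM+ transactional component with .NET's System.EnterpriseServices; the Accounts helper and all names are hypothetical, but the attribute and the ContextUtil calls are the real COM+ voting mechanism:

    using System;
    using System.EnterpriseServices;

    static class Accounts
    {
        public static void Debit(string account, decimal amount) { /* data tier call */ }
        public static void Credit(string account, decimal amount) { /* data tier call */ }
    }

    // COM+ runs both updates inside one distributed transaction,
    // coordinated by the Distributed Transaction Coordinator (DTC).
    [Transaction(TransactionOption.Required)]
    public class FundsTransfer : ServicedComponent
    {
        public void Transfer(string from, string to, decimal amount)
        {
            try
            {
                Accounts.Debit(from, amount);   // may touch one database
                Accounts.Credit(to, amount);    // may touch another database
                ContextUtil.SetComplete();      // vote to commit
            }
            catch
            {
                ContextUtil.SetAbort();         // vote to roll back both operations
                throw;
            }
        }
    }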

3.3. Availability
This goal is concerned with the ability of the network application to perform its functions. Such an application contains sufficiently many resources prone to failure that some failure must be expected during the course of operation. Optimal availability, then, requires that the network application take the possibility of failure into account and provide redundancy, either in terms of extra hardware or duplicate software resources, or in the provision for gracefully dealing with failure. A network is inherently redundant in that it has multiple computers. To achieve high availability, a network application must be designed with the known points of failure in mind, and must provide redundancy at each point. Sometimes this is a matter of hardware, such as RAID disk drives and failover clusters; other times it's a matter of software, as in Web server farms, where software detects the loss of one resource and redirects a request to an identical software resource.

In the monolithic world, if we had our computer, we had our application. Network applications, however, give the user the illusion of availability whenever their own machine is available. The actual state of the network application's resources may be very different. It is the goal of availability to ensure that the resources of the network are deployed in such a way that adequate resources are always available, and no single failure causes the failure or loss of availability of the entire application. Availability is the goal behind such buzzwords as "five nines", that is, 99.999% uptime. If you are going to the expense of fielding a network and writing a distributed application, you expect the application to be available. A monolithic application running on commodity PC hardware is scarcely capable of hosting mission-critical functions. Windows DNA aspires to host such functions on networks of commodity computers. Availability is a make-or-break point for Windows DNA.

3.4. Scalability
It would be close to pointless to deploy a network for a single-user application. One of the points of network application architecture is to efficiently share resources across a network on behalf of all users. Consequently, we should expect our network applications to handle large volumes of requests. Each of, say, 100 users should have a reasonable experience with the server, not 1/100th of the experience and performance provided to a single user of the application. Scalability measures the ability of a network application to accommodate increasing loads. Ideally, throughput (the amount of work that can be completed in a given period of time)
scales linearly with the addition of available resources. That is, if I increase system resources five times (by adding processors or disks or what have you), I should expect to increase throughput five times. In practice, the overhead of the network prevents us from realizing this ideal, but the scalability of the application should be as close to linear as possible. If the performance of an application drops off suddenly above a certain level of load, the application has a scalability problem. If I, say, double resources but get only a 10% increase in throughput, I have a bottleneck somewhere in the application. The challenges of network applications, and the responses we make to them (distributed transactions, for example), work against scalability. Architects of network applications must continually balance the overhead of distributed systems against scalability. A scalable architecture provides options for growing the scalability of an application without tearing it down and redesigning it. An n-tier architecture like DNA helps: if you encounter a bottleneck on any single machine, such that adding additional resources to that machine does not alleviate the bottleneck, you are able to off-load processing to other machines or move processing between tiers until the bottleneck is broken.

3.5. Interoperability
The challenge of platform integration arises from the fact that organizations will end up possessing dissimilar hardware and software platforms over time. In the past, organizations sought to fight this through standardizing on one platform or another. The sad reality of practical computing, however, is that standardization is nearly impossible to maintain over time. Sometimes there are sound technical reasons for introducing heterogeneous platforms: a given platform may not be sufficiently available or scalable for a particular need, for example. Other times, the problem arises from a desire to protect the investment in outdated hardware. Sometimes, it's simply a matter of the human tendency toward independence and diversity. Whatever the cause, interoperability (the goal of being able to access resources across dissimilar platforms and cooperate on a solution) is the answer. Any architecture that claims to be suitable for network applications must address the problems of differing system services and data formats that we described under the challenge of platform integration.

4. ARCHITECTURE
Windows DNA is a set of services, tools, and three application layers: the presentation layer, the business logic layer, and the database layer. This figure represents it:

Figure 1. Windows DNA Architecture

The Windows DNA architecture provides two models for designing and building large, complex distributed systems: the logical three-tier model and the physical three-tier model. The logical threetier model is used to define and design the components of the system. The physical three-tier model is used to specify the placement of these components within the distributed system.

4.1. Logical Three-Tier Model


When we build components in a Windows distributed system, we define the components based on the services they perform. For example, the Web page running in the Web browser will perform services for the user, such as allowing the user to enter information, select information, or navigate to another Web page. Components placed on the Web server will receive requests from the Web browser to perform services such as accessing a particular Web page, searching the catalog for all occurrences of a particular word, and so on. The services that a system can perform are divided into three categories: user services, business services, and data services. Thus, you can form three tiers: the user services tier, the business services tier, and the data services tier. Each of these tiers consists of one or more components.

In this model, sometimes also known as the 3-tier model, clients remain focused on presenting information and receiving input from users. This is known as the user services tier. Data, meanwhile, is hosted on one or more data servers in the data services tier. Only the processing required to access data and maintain its integrity gets implemented on this tier. This includes SQL query engines and transaction managers for commercial software, as well as triggers and stored procedures written by database administrators. Unlike the client-server model, however, these triggers and procedures are limited in scope to managing the integrity of the data residing on this tier. Business rules are moved to the application logic tier, sometimes referred to as the business services or middle tier. Stored procedures on the database are sometimes used to implement and to enforce business rules. While this can lead to performance gains, the approach does not go along with the purist concept of n-tier or DNA, where the data tier is kept strictly for data.

The term n-tier comes from the fact that the application logic tier is often subdivided into further logical tiers dedicated to one task or another; the application tier is seldom homogeneous. Some programmers view the division as three tiers, while others view the different classes of application logic as individual tiers. Dividing a task into three or more tiers brings the following benefits:

* Separation of presentation from function
* Ability to optimize each tier in isolation for scaling and performance
* Limited parallelism during development
* Reuse of functional modules
* Ability to select the appropriate platform for each task

4.1.1. User Services Components

The user services components are responsible for passing information to and from the user. Specifically, they gather information from the user and then send it to the business services components for processing. After the information is processed, the user services components receive the processed results from the business services components and present them to the user. Typically, the user is a person, and the user services components consist of user interface components such as Web pages in a browser, an .EXE application containing forms, and the like. A user could also be another system, in which case there is no visible interface. An example of this scenario would be an application that verifies credit card information.

To design user services components, you must interview the potential users of the system. Based on these interviews, you can create a detailed description of the steps required to complete each of the tasks that the system will be asked to perform. These descriptions can then be turned into Unified Modeling Language (UML) use cases. UML is a set of models used to design object-oriented systems, although it can be used to design virtually any system. A use case is a text description of how a user will perform a task; each use case has a specific format,
such as a flowchart, a flow diagram, or a step-by-step outline in a table format. These use cases will help you organize the steps required to complete your tasks, will help you define the components your system will need to perform these tasks, and can be used as the basis of the design for your entire distributed system. For the most part, user services components are designed to meet a specific need and are not reusable. For example, the user components specifically created for entering employee information are not likely to be reusable in an application that is used for obtaining order information from customers.

Besides communicating with the user, most applications must process data. For these applications, you will also need to define a set of business rules that govern how data is handled. Examples of business rules could be the requirement that every customer have a name, a customer ID, and a billing address. Business rules are similar to use cases, except that they are text descriptions of the requirements to be met to ensure a business is run correctly. In a two-tier architecture model, some or all of the business rules regarding data processing are placed in the user services components. In a three-tier architecture model, the business rules are moved from the user services components to the business services components.

4.1.2. Business Services Components

The business services components receive input from the user services components, interact with the data services components to retrieve or update data, and send the processed result to the user services components. The business services components ensure that the business rules are followed. For example, a user services component may ask a business services component to save a new customer record. The business services component will then verify that this new record fulfills all of the rules for a new customer record. In another example, the user services components might request that the business services components retrieve a particular Web page containing data from a database. The business services components will get the data, format it according to a set of rules, and then build an HTML or XML page for the user services components.

If the business rules change, only the business services components that contain those rules will need to be changed. If the business rules are placed within the user services components, the user services components will have to be updated every time a rule changes. If such a user services component were an .EXE application installed on thousands of computers, a change in the business rules would mean reinstalling the component on all the client computers. Separating the business rules from the user services components is only one benefit of using business services components. Business services components usually contain logic that is highly reusable. For example, if you create a business services component to retrieve product information for a Windows-based application, you don't need to rewrite this component if you want to create a Web-based application that requires the same business logic. Using business services components also simplifies the building of your user services components. As previously mentioned, the business rules in the three-tier model are moved from the user services components to the business services components; thus, the user services components are simplified. In addition, if you build business services components as objects that expose methods and properties, your user services components can be simplified further,
since the code in your user services components will consist of setting properties and calling methods of the business services objects. For example, let's say your application requires validation of new orders. You can implement the validation logic in the user services components, but you can also separate this logic from the user services components and implement it in an Update method in the business services components. Thus, the validation of a new order will be done when you call the Update method.

4.1.3. Data Services Components

The business services components usually handle data processing. To get the data, they make requests to the data services components. The data services components are responsible for storing and retrieving data and maintaining data integrity. They communicate directly with a data source. A data source can be a database, a text file, an XML file, an XML document stored in memory, and so on. When a system performs the Create, Read, Update, and Delete (CRUD) operations, the business services components will validate these operations. If the operations are valid, the business services components will make a request to the data services components to actually perform the operations. If the CRUD operations are invalid, the business services components will raise an error and will not have to communicate with the data services components.

A common confusion associated with the DNA model is between the data services components and the database. A data services component can be either a component that is running on a server (usually not the database server) or a stored procedure. Often a system will have a data services tier that consists of both stored procedures and components. (Some authors refer to the database as a "fourth tier" of the logical model.) Either way, the actual physical database is not the data services tier, although it may contain the data services components in the form of stored procedures.

Like business services components, data services components are highly reusable. For example, an e-commerce application, an order entry application, and a reporting program for management could use the same data services component that performs CRUD operations on customer records.
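
To make the idea concrete, here is a hedged sketch (not from the original text) of a data services component written with ADO.NET's SqlClient classes; the connection string, table layout, and class name are all hypothetical:

    using System.Data;
    using System.Data.SqlClient;

    // Hedged sketch: a data services component talks directly to the
    // data source and exposes CRUD-style operations to the business tier.
    public class CustomerData
    {
        private const string ConnStr =
            "Server=dbserver;Database=Shop;Integrated Security=SSPI";

        public DataTable Read(int customerId)
        {
            using (SqlConnection conn = new SqlConnection(ConnStr))
            using (SqlDataAdapter da = new SqlDataAdapter(
                "SELECT Id, Name, City FROM Customers WHERE Id = @id", conn))
            {
                da.SelectCommand.Parameters.AddWithValue("@id", customerId);
                DataTable result = new DataTable("Customer");
                da.Fill(result);   // Fill opens and closes the connection itself
                return result;
            }
        }

        public void Delete(int customerId)
        {
            using (SqlConnection conn = new SqlConnection(ConnStr))
            using (SqlCommand cmd = new SqlCommand(
                "DELETE FROM Customers WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", customerId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

In this division of labor, a business services component validates the operation first and then calls Read or Delete; the user services tier never touches the database directly.
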
Over the past year, the view on stored procedures has been changing. Earlier, it was generally believed that stored procedures were the best location for data services components: stored procedures could use the security features of the database and be centrally controlled by the database administrators. The major problem with stored procedures, however, is scalability. Replicating databases is difficult and usually inefficient. If you build all of your data services components as stored procedures, and the server you are using runs out of memory or other resources trying to run these stored procedures for thousands of users, you cannot easily build a second database server and share the load between the two database servers. Your only option is to upgrade the server. If you reach a point at which the server is already at the maximum capacity the current technology can offer, your server will no longer be able to handle the required loads.

If you build data services components that are placed on a server other than the database server, you can increase the capacity of the system by adding another server. If you use some method to balance the load between the servers, the business services components can call whichever server has the least load. Since the data services components are performing services only for the business services components, it doesn't matter on which server a data services component performs the service. Of course, if you don't build data services components as stored procedures, you will probably not be able to use the database's security mechanisms directly. Instead, you will have to implement security features within your components. This security code should go into the business services components, as they will contain the rules for validating users.

4.1.4. Connecting the Three Tiers

So far, by using Windows DNA, you can split your application's services into three parts: user services components, business services components, and data services components. In Windows DNA, tying these components together is done by exposing system and application services through the Component Object Model (COM).

4.2. COM (Component Object Model)


COM is, without doubt, the center of the Microsoft universe. COM was designed to solve a number of problems that have existed for some time, not just in the Windows sphere but also within the development and application spheres. Code re-use is a nirvana that developers strive to achieve; however, there have been very few situations where it could be realized. Take, for instance, an implementation of a solution in a C++ application. The developer builds generic classes to address a problem. From then on, the developer can re-use the code and can share it with others. While this looks good in theory, some problems arise in practice. First, the new code is language specific: if the second developer ONLY knew Visual Basic, the newly created C++ classes would be of no use. In COM, language independence is achieved.

Encapsulation is also a target of the COM specification. If a developer builds an object that exposes some functionality, and the new code becomes publicly consumed, the interfaces exposed and used cannot subsequently change. If the exposed implementation details changed, we could face the real prospect of causing the very problem we had strived to avoid. By the nature of the COM specification, objects provide methods to expose their implementation details and allow dynamic discovery. This enables facilities such as scripting languages to use such functionality without having to bind details in an early fashion; coupling implementation details in this fashion is called late binding.

One of the more important concepts of the COM model is the notion of location transparency. When an application calls an object's interfaces, the application need not know whether the actual code is being executed locally or on a distributed machine. This location independence is provided through the use of proxy objects that sit between locations and marshal information between their instances. In the DNA methodology, COM is a very important strand, involved in all the products and implemented directly, and natively, within the NT platform upon which the DNA technical implementation runs. However, DNA isn't COM alone, so we shall discuss where the other products fit and what they contribute.
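
As an illustration of late binding, the following hedged C# sketch creates a COM object by its ProgID at run time and invokes a method purely by name; the ProgID "MyCompany.TaxCalculator" and its Compute method are hypothetical:

    using System;
    using System.Reflection;

    class LateBindingDemo
    {
        static void Main()
        {
            // Discover the component at run time by its ProgID,
            // with no compile-time reference to its interfaces.
            Type t = Type.GetTypeFromProgID("MyCompany.TaxCalculator");
            object calculator = Activator.CreateInstance(t);

            // InvokeMember resolves the method by name at call time;
            // this is exactly what scripting clients of COM rely on.
            object result = t.InvokeMember(
                "Compute",
                BindingFlags.InvokeMethod,
                null,
                calculator,
                new object[] { 100.0m });

            Console.WriteLine(result);
        }
    }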

4.3. Physical Three-Tier Model


The physical three-tier model includes the actual computers on which the components will be placed. There are three groups of computers: the client tier, the middle tier, and the database tier. The client tier computers provide user services, consisting mainly of the user interfaces. The middle-tier computers provide business services, consisting of enforcement of business rules and data validations. The database tier computers provide data services, consisting of the data and the means of accessing and maintaining the data.

Several years ago, the two-tier model, also called client-server, was popular. This model consisted of a client tier that communicated with a database tier. It required the client computer to maintain a continuous connection to the database. The database could keep track of which records were being edited and place locks on these records so that only one person could change a record at a time. Locks ensured that the data would be consistent. The main drawback of this model was that the systems were not very scalable. Since developers wanted to build systems that could scale to hundreds and even thousands of users, they needed another solution.

The solution to making systems more scalable was to break the connection the client had with the database. Originally, the data was passed to the client computer, where CRUD operations could be performed on it. If the data was updated, it could be sent back to the database. With this disconnected model, the database could no longer lock records, because the client had no connection to the database; thus, the database had no way to reconcile inconsistencies. An alternative to locking records was developed that allowed the client to communicate with an application located on the server that could make changes to the database. This application is the data services component we have been discussing in this
section. Instead of running this application on the database tier, the middle tier was added to host the data services components. The idea of a middle-tier computer had existed long before the use of a disconnected model. In non-Windows operating systems, applications were commonly placed on a central server and used by many clients. These clients often had little functionality and were called dumb terminals. The middle-tier computer can still be used to host components, such as the business or data services components that can be used by multiple clients, but now modern clients are usually fully functioning and have many applications running on them.

4.4. Stateful vs. Stateless Components


One of the most important aspects of creating scalable components is the use of stateful and stateless components. A stateful component can retain information from one call to the next. For example, a Microsoft Visual Basic .EXE application can be created that keeps track of the currently logged-in user from the moment the user logs in until the time the user logs off. State is stored by using properties; in our Visual Basic example, we could use a read/write property named userName and a write-only property named userPassword. A stateless component has no memory from one call to the next. For example, a data services component that retrieves customer records retrieves customer records for one client and then retrieves records for a second client without any memory of what it retrieved for the first client. A stateless component does not have any public properties.

In the physical model, components that are placed on the middle-tier server and communicate with the client computer should be stateless. If a component on the middle-tier server maintained state (that is, retained information between calls from the client), it would need to maintain information on every client that made a request to it. As the number of clients making requests increased, the information that would need to be stored in memory would increase until the server ran out of memory. This would not be a scalable solution.

If components on the middle-tier and database servers do not maintain state, they can quickly perform a service for one client and then perform a service for another client. This feature works even better with a technique called just-in-time activation (JITA). With JITA, a component becomes active immediately before it is used. Once the component has been activated, it will quickly perform a service and then deactivate. As soon as the component has finished with one client, it is available for another client. These components can be built so that they require a minimum amount of time to perform their services. They reside in memory for only a brief time to service each client. Because these components are stateless, they will not require any memory usage between client calls. Thus, a stateless component that uses JITA will use the absolute minimum amount of server resources while still servicing a large number of clients. These types of stateless components can scale very well.
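
A hedged sketch of what such a component can look like with COM+ services in .NET (System.EnterpriseServices); the class and method names are hypothetical, but the attribute and the DeactivateOnReturn flag are the real JITA mechanism:

    using System.EnterpriseServices;

    // A stateless middle-tier component using just-in-time activation:
    // COM+ activates the object only for the duration of a call.
    [JustInTimeActivation]
    public class OrderLookup : ServicedComponent
    {
        public decimal GetOrderTotal(int orderId)
        {
            // Release the object as soon as this call returns, so the
            // server holds no per-client state between calls.
            ContextUtil.DeactivateOnReturn = true;
            // ... fetch the total from the data services tier (omitted) ...
            return 0m;
        }
    }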

5. ADO.NET and Windows DNA


Similar to COM+ in Windows DNA, the common language runtime (CLR) is the heart of the .NET Framework. We will not go into the depths of the .NET architecture here, but .NET certainly provides more tools and services for developers than Windows DNA did; this section is limited to an introduction to ADO.NET from a Windows DNA developer's perspective. ADO.NET components are applicable in all layers of Windows DNA. Although there is no formal relationship between Windows DNA and ADO.NET, and .NET was not designed to fit the Windows DNA model, developers who have been building applications along Windows DNA guidelines may want to compare ADO.NET's functionality with the model they know. In .NET, the presentation layer is normally represented through Windows Forms and Web Forms. Windows Forms is a framework for writing Windows GUI desktop applications, and it works much like Visual Basic forms. Web Forms, with the help of ASP.NET (the new, extended version of ASP), provide services and classes for writing Web-based GUI applications.
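As a taste of the Windows Forms model, a complete desktop application can be as small as the following C# sketch (a hypothetical hello-world form; the names are illustrative only):

    using System;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Build a form with a single button that closes the window.
            Form form = new Form();
            form.Text = "Hello from Windows Forms";

            Button button = new Button();
            button.Text = "Close";
            button.Click += delegate { form.Close(); };

            form.Controls.Add(button);
            Application.Run(form); // start the message loop
        }
    }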

[Figure: ADO.NET components in the different layers of Windows DNA.]

As the figure suggests, the database layer itself is unchanged: you can still use the same databases, and apply the same logic on the database itself, as you did prior to .NET. ADO.NET in the business layer is a little more complex, and most ADO.NET components belong to this layer. In ADO.NET you connect to a data source through data connections and data adapters. You can create a Connection object explicitly and attach it to a DataAdapter, or you can create a DataAdapter with a connection at design time with the help of the designer, which attaches a connection to the DataAdapter. DataAdapters provide the functionality to fill DataSet objects with data from the data source and to write data back to the data source through DataSets. A DataSet object can be bound to data-bound controls used in Windows Forms and Web Forms, such as a DataGrid.

XML schemas play a vital role: XML is the means of transferring data from a data source to a client. A DataSet is, in effect, disconnected data stored in the form of XML on your local machine. Whether you use Web Forms or Windows Forms, when you read data into a DataSet the data is stored disconnected, and you can perform data-access operations (read, write, and update) on it without an open connection to the actual data source. After performing your data-access operations, you call the DataAdapter to save the changed data back to the data source.

A DataSet is a collection of DataTables. A DataTable is an in-memory representation of a database table; it provides the functionality to navigate through the rows and columns of a table and work with them, meaning you can read, write, and update data in a DataTable by row and column. The DataTable is ADO.NET's replacement for ADO's Recordset. Data can also be accessed from a data source through Command objects, which execute SQL statements; after a SQL command executes, the results can be read through DataReader objects, which are read-only objects filled with the result of the SQL statement executed by the Command object. Data can also be saved to and read from XML documents through the .NET XML classes, which are independent of the ADO.NET classes, although ADO.NET provides a way to move XML documents to and from a data source.

The presentation layer of the .NET Framework is much more powerful and flexible than the presentation layer of Windows DNA: Windows Forms, XML, Web Forms, and ASP.NET provide tools and services for writing applications that are more robust, faster, and better designed than their Windows DNA counterparts.
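The following C# sketch illustrates this disconnected pattern: fill a DataSet through a DataAdapter, edit the data with no open connection, and push the changes back. The connection string, table, and column names are hypothetical, and the update assumes the table has a primary key.

    using System.Data;
    using System.Data.SqlClient;

    class DisconnectedExample
    {
        static void Main()
        {
            string connStr = "Server=myServer;Database=Shop;Integrated Security=true";

            // The adapter opens the connection, copies the rows into the
            // DataSet, and closes the connection again.
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT CustomerID, Name FROM Customers", connStr);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");

            // Work with the disconnected data; no connection is held here.
            DataTable table = ds.Tables["Customers"];
            table.Rows[0]["Name"] = "Updated Name";

            // Generate INSERT/UPDATE/DELETE commands and write the changed
            // rows back to the data source.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
            adapter.Update(ds, "Customers");
        }
    }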

6. NATURE AND NEEDS


6.1. Internet Application Design
The basics of Internet application design are described below.

6.1.1. Internet Application Requirements
In general, an Internet-based application will:
* Present a unified view of data from multiple data sources
* Allow a user to update data
* Provide full e-commerce capabilities
* Be fast
* Be scalable to thousands of concurrent users

Now let's take a look at each of these characteristics in a little more detail.

6.1.1.1. Presenting a Unified View of Data from Multiple Data Sources
Presenting a unified view of data coming from multiple sources creates a number of problems. Many businesses have multiple types of data stores, ranging from mainframe-based VSAM applications to SQL Server databases, Oracle databases, e-mail stores, and directory services. There needs to be a way to tie all of this data together, so we need a robust data access mechanism that can reach multiple types of data sources, possibly residing on different platforms. In addition, we might need host (mainframe) integration capability; just getting data from a mainframe application onto a web page is a substantial technical feat.

6.1.1.2. Allowing a User to Update Data
If our application allows a user to purchase something, initiate financial transactions, or update personal data, we need transactional capability: we must make certain that either every part of a piece of work (a transaction) completes successfully, or none of it takes effect. To complicate matters, the likelihood of having data in multiple sources is fairly high, so we also need the ability to define transactions that span multiple data sources while still having full two-phase commit and rollback.

6.1.1.3. Full e-Commerce Capabilities
Providing full e-commerce capability is a must these days. If you are selling products over the Internet, you will need a framework that provides a shopping cart, and management tools to manage product catalogs, run promotions and sales, and present cross-selling opportunities to your users. You will also need this framework to be extensible, so you can incorporate your own business logic, such as calculating taxes in foreign countries. Ideally, the same transactional capability described above is used when users commit their purchases.

6.1.1.4. Fast
You might get users to your web site the first time, but if your site is so slow that they have a bad user experience, they might never come back. So you need to architect a solution that solves your business problem and is fast at the same time. Being able to distribute requests among many machines is one way to achieve a speedy solution; other design techniques, such as performing work asynchronously, can also help speed things up.
An example, albeit a crude one, might be an online purchasing application. When users place an order, they probably need no response other than "we received your order; here is your confirmation number". You could then place the order on a queue and process it later. The user does not know the order will not be processed until later, but they have the result they wanted, quickly.

6.1.1.5. Scalable to Thousands of Concurrent Users

Not only does your site need to be fast, it probably needs to support thousands of concurrent users. Again, load balancing across multiple machines will help solve this problem. Not only will load balancing help you handle more users, it will also improve your "uptime" and help ensure you are running 24x7x365, as all users expect these days.

6.1.2. Platform Capability Requirements
So now you have a better picture of some characteristics of the applications being built today. There is obviously a lot of infrastructure needed to build them. From these characteristics we can create a list of the capabilities we are going to need from our platform:
* A web server
* Transaction processing and object brokering
* Message queuing
* A data access mechanism
* Security
* Load balancing capabilities

We will see later that Windows 2000 provides these capabilities, but that is jumping the gun a little. We should first identify the goals that drive us when designing applications.
6.1.3. Application Design Goals
For most Internet-based applications you can usually identify two sets of goals:
Business goals:
* Faster time to market
* Low cost
Architecture goals:
* Interoperability
* Adaptability

6.1.4. Other Goals

Faster time to market. Why build it if you can buy it? If you are a software developer in this Internet world, this should be the first question you ask yourself on a daily basis. Time is too precious to be spent building infrastructure or functionality you can buy off the shelf. You need to spend the majority of your time solving the business problem at hand, not worrying about how to build infrastructure such as a transaction-processing monitor.

Low cost. Whether you are working at a newly formed dot-com company or a Fortune 500 corporation, cost will always be an issue. The question asked earlier, "Why build it if you can buy it?", has a cost element attached to it, and the next question is "How much will it cost to buy this infrastructure?" It would be ideal to get as much infrastructure as possible from the operating system: if you pay good money for an operating system, you inherit a great deal of functionality without paying extra for it. Not only is it cheaper to buy infrastructure, the people signing your paycheck will be happier if you spend your time solving business problems instead of unnecessarily building infrastructure.

Interoperability. Unless you are working at a brand new company with no computer systems at all, you are going to need to make more than one piece of software talk to another. As stated earlier, you will have to integrate systems that reside on multiple platforms and present a unified view of data to the end user. From an architecture standpoint, we need a set of services that allows us to integrate disparate computer systems on disparate platforms, whether mainframe, UNIX, AS/400, or others. Instead of reworking existing systems so they can be used on the Internet, we want to leverage the investment already made in them. We can also take advantage of software created for a special purpose, such as accounting or bill-processing software, and simply build an interface to talk to it.

Adaptability. The world is moving fast these days and the pace of business is keeping up with it. Just when you think you have done a good job of gathering requirements and turning them into functional specifications, the situation changes. The software we write today has to be changeable quickly, to meet rapidly evolving requirements and to take advantage of opportunities as they arise. If you agree with the concept of an n-tier, component-based architecture, we need an architecture and a platform that allow us to take an application and separate it into tiers, or layers, of functionality. Where have you heard that before?

6.2. Network Application Characteristics


We've just said that network applications break their implementation into functional modules and rely on (or at least substantially benefit from) the presence of a network of computers. Some characteristics follow from that:
* Communications
* Concurrent usage
* State management
* Latency
* Rigorous encapsulation

6.2.1. Communications
The first point is essentially a given: if my applications work through a network, they must have some means of communication. As we will see shortly, however, communications can become a deep topic. Issues of protocols and data formats arise in network applications. Simply running a cable between two computers and configuring them for the network is the easy part; life for the application programmer can get very interesting.

6.2.2. Concurrent Usage
You could deploy network applications in a single-user manner. You might insist that every client be matched with an individual server, or you might have a multi-user server that forced clients to wait for service in series. This would simplify the programming task, but it would also negate many of the benefits of using networks in the first place. We want applications to dynamically access some bit of software on the network, obtain service, and go about the rest of their processing. Forcing them through the iron gates of single-use software would mean incurring all the overhead of distributed processing while also incurring the limitations of standalone software. You'd feel cheated, wouldn't you?

Even if you don't develop multi-user software, you rely on concurrent access to system services and network servers. I can write a web page with single-user client-side script, but I want the web server it accesses to accommodate multiple users at the same time. Could you imagine a corporate database application denying a user service because one other user somewhere in the organization was already connected? Some part of a network application, then, must handle the tough tasks of concurrent access: multithreading, concurrency, and integrity. Multithreading is what enables a single piece of software to have more than one task underway at any given time. Concurrency is what keeps one task distinct from another. Most importantly, concurrency concerns itself with how to maintain the integrity of the data or process when different users want to modify the same bit of information.
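As a small illustration of the integrity problem, the following C# sketch (a hypothetical account class) uses a lock so that two concurrent callers cannot corrupt a shared balance:

    using System;
    using System.Threading;

    class Account
    {
        private decimal balance = 100m;
        private readonly object sync = new object();

        public void Withdraw(decimal amount)
        {
            lock (sync) // only one thread may check and update at a time
            {
                if (balance >= amount)
                    balance -= amount;
            }
        }

        public decimal Balance
        {
            get { lock (sync) { return balance; } }
        }
    }

    class Demo
    {
        static void Main()
        {
            Account account = new Account();
            Thread a = new Thread(delegate() { account.Withdraw(60m); });
            Thread b = new Thread(delegate() { account.Withdraw(60m); });
            a.Start(); b.Start();
            a.Join(); b.Join();

            // Without the lock, both threads could pass the balance check
            // and the account would go negative.
            Console.WriteLine(account.Balance); // prints 40
        }
    }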
6.2.3. State Management
State management is closely related to concurrency. Technically it is another facet of concurrency, but we will consider it as a characteristic in its own right, because almost all programmers encounter it when writing network applications. If an application is using other applications or components on the network, it must keep track of where it is in the process: the state of the process.

Single-user, standalone applications find it easy to maintain state. The values of the variables are the state of your data, while the line of code currently executing defines where you are in the overall process. Multi-user, distributed applications have it harder. Suppose I have an e-commerce web site whose implementation involves sending a message to another application and receiving a reply. I have to maintain a set of data for each user. When I send a message, I have to record where that particular user is in the overall process, together with the current state of that user's data. When a reply comes in, I have to determine which user is affected by the reply and retrieve the data I saved. If you have worked with the Session and Application objects in Active Server Pages, you have programmed state management information: you have told the ASP component to keep track of something you will need again later. The more widely distributed you make your application, the more state information needs to be coordinated. It is best to minimize state information on remote servers by using a stateless server model, which we will see more about later.

6.2.4. Latency
How long does it take to communicate with other components on the network? The time attributed solely to the network is the network latency of the application. At first glance this would seem too small to worry about. How fast are electrons in a wire? It turns out that, for practical purposes, signals in copper wire propagate at a substantial fraction of the speed of light; a good rule of thumb is 200 meters per microsecond. Surely that's good enough, you might say. In a standalone application, though, the time to access a function might be a fraction of a millisecond. Now measure the path through your network (seldom a straight line on a LAN) and multiply by two. Add the time imposed by routers or switches, and you find that network latency is significant compared to the time it takes to execute instructions within a component.

Latency is especially important for Internet applications. The distance alone is significant: a round trip across the United States should take, in theory, 50 milliseconds, but that assumes a direct hop. When I send a packet from Philadelphia to a particular server in San Francisco, I find that it takes almost three times as long to get there and back. My packet bounces around my service provider, heads south to the major interconnect MAE East in Virginia, and then makes its way across the country. Each router or switch takes its toll along the way.
This is an extreme case, but even crossing the office is more expensive than moving between addresses in a single computer's memory. Programmers and architects need to consider how to minimize the number of times their systems call on a remote server if they want to maintain acceptable performance. Latency changes the way we design applications.

6.2.5. Rigorous Encapsulation
Encapsulation is a technique in which you hide, or encapsulate, the details of an implementation from the software using that implementation. Object-oriented programming is a classic example of encapsulation: an application using an object has no idea how the object maintains its data or implements its methods. Structured programming, the traditional function-by-function method of building an application, may also practice encapsulation by hiding the implementation details of a subroutine from the main program. An Application Programming Interface (API) is an encapsulation.

In a standalone application, encapsulation was a good idea: it helped programmers develop and maintain software effectively and efficiently. Network applications have no choice but to practice rigorous encapsulation, because different programming teams may write the components of a network application. One team may have no influence over another, or even know who wrote a given component. If I were to write a shipping application that relied on information from the Federal Express tracking application on the Web, for example, I would have no choice but to use their HTTP-based API, as I have no other access to the application; I certainly cannot call Federal Express and ask them to make internal modifications for me. Network applications live and die by clean interfaces, behind which implementation details are encapsulated (a small sketch of such an interface appears at the end of this section).

We're now going to take a possibly familiar trip down memory lane, tracing the history of applications from monoliths to component-based distributed applications. You may have seen it before, but it is still worth discussing because it is central to the concept of DNA. The earliest computer applications, and many applications still in use, were monolithic: all the logic and resources needed to accomplish an entire programming task were found in one program executing on a single computer. There was neither need nor provision for a network. As computer science evolved in its capabilities, the desirability of the client-server model became evident: clients would obtain critical services, either data or computing, from server software that usually resided on another computer. As networks approach ubiquity, the advantages and challenges of distributed computing emerge. The basic model for addressing these challenges is variously called the 3-tier or n-tier model. These models of computing did not spring out of a vacuum; each evolved from its predecessor as the challenges of the old model were solved, thereby uncovering new challenges.
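Returning to rigorous encapsulation, here is the promised sketch in C#. The interface and tracker names are hypothetical; the point is that callers depend only on the interface, never on the implementation behind it.

    // The clean interface: the only thing callers may depend on.
    public interface IPackageTracker
    {
        string GetStatus(string trackingNumber);
    }

    // One possible implementation, hidden behind the interface. It could
    // make HTTP calls, read a database, or anything else; callers cannot
    // tell, and cannot be broken by changes to these details.
    class WebTracker : IPackageTracker
    {
        public string GetStatus(string trackingNumber)
        {
            // HTTP call and parsing omitted from this sketch.
            return "In transit";
        }
    }

    class ShippingDemo
    {
        static void Main()
        {
            IPackageTracker tracker = new WebTracker();
            System.Console.WriteLine(tracker.GetStatus("1Z999"));
        }
    }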

7. USING WINDOWS DNA


Setup for Microsoft or 100-Percent Compatible
Windows must be installed as "Microsoft or 100% Compatible" when prompted for the network configuration.

Network Board Software Needs to Be Revision H for Windows 3.1
Running Microsoft Windows version 3.1 under the DNA Networks software may produce problems unless you have updated to version 3.38 (Rev. H) of DNA Networks proprietary boards or to version 5.02 (Rev. H) of NetBIOS boards. The following problems occur without the update:

When running Setup /N from a workstation without a hard drive, the "Install Applications" section of Setup can only modify the path, not the drive.

After entering the graphical portion of Setup, or when running Windows, floppy drive A is no longer accessible, which may cause one of the following error messages:
* Current drive is no longer valid.
* Cannot load COMMAND.COM.
* Insert disk with batch file.

Network directories may not display in the dialog boxes of the following applications:
* Ami Pro version 2.0
* Versions 2.1d and 3.0 of Microsoft Excel for Windows
* Version 2.0 of Microsoft Word for Windows
* WordPerfect version 5.1

Installing on DNA's Proprietary Boards
If you are installing on DNA's proprietary boards (as opposed to NetBIOS boards such as Ethernet), the following information applies:

1. If you are using DNA's MegaNet board, Windows must be told not to use the same area of memory that the board uses for the MegaNet window (a 16K page). To find out which page is being used, start the machine without loading any memory managers and read the DNA device driver banner. If it says MegaNet, note which page it reports, and exclude that page from Windows by adding an EMMEXCLUDE= entry to the [386Enh] section of the SYSTEM.INI file (see the SYSTEM.INI fragment after this list). For example, if the DNA banner reads "MegaNet, RAM Window=C000", add EMMEXCLUDE=C000-C3FF to SYSTEM.INI; if it reads "MegaNet, RAM Window=C800", add EMMEXCLUDE=C800-CBFF.

2. Each machine running Windows should use the DNA program WINFIX.COM. WINFIX.COM should be installed in the AUTOEXEC.BAT file and can be found on DNA distribution disk 1 in the FIXES subdirectory. This keeps DNA from displaying the error "Unsupported NetBIOS call" when Windows is executed, and allows Windows to properly detect network printers and drives that have been redirected over the DNA network.

3. DNA networks do not support Windows running in 386 enhanced mode as the master (server) if a DNA MegaNet board is installed running in MegaNet mode (that is, using a RAM window). Windows can still be run in 386 enhanced mode at workstations.

4. Print Manager functions are not supported. The DNA network utilities PRINT.SYS and SPOOL.SYS (together with the SPOOL command) provide the same functionality.
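As an illustration of step 1, the relevant SYSTEM.INI fragment for the first example above would look like this:

    [386Enh]
    ; Keep Windows away from the 16K RAM window used by the MegaNet board
    ; (the DNA banner reported "MegaNet, RAM Window=C000").
    EMMEXCLUDE=C000-C3FF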
Persistent Network Connections
If you are using DNA network software on a machine without a hard disk, and you are using persistent network connections under Windows 3.1, you may not be able to use drive A after Windows is started; you will need to restart the computer to regain access to drive A. DNA has confirmed this to be a problem with its network driver, and an updated driver can be obtained from DNA. To disable persistent network connections:

1. Run Control Panel.
2. Choose the Network icon.
3. Clear the Restore All Connections At Startup check box.
4. Choose the OK button to save your changes.

The products included here (other than Excel, Windows, and Word) are manufactured by vendors independent of Microsoft; we make no warranty, implied or otherwise, regarding these products' performance or reliability.

8. INTERNET INFORMATION SERVER & ASP


IIS, Internet Information Server, provides the DNA application with the rendering engine needed to publish application content to browsers. By using HTTP to deliver a lightweight presentation layer, a DNA application gains flexibility and quicker implementation, and can deploy to an effectively unlimited number of clients. At the same time, DNA applications must interact with business components to perform business functions, with the results conveyed in the response to the browser client. A web server that only returned static HTML pages would be of no use within the structure of DNA: with a need to call COM objects, and through them to access databases, file systems, and line-of-business application servers such as e-mail, workflow, and document management repositories, a flexible on-demand interpreter is required to implement the communication between the business components in the Business Logic layer and the web server in the Presentation layer. This "glue" is Active Server Pages and the scripting model.

Before discussing the role of scripting and ASP in the DNA application, the following lists the other functionality IIS provides within the DNA framework:
* Professional web server functionality.
* Reliable content delivery.
* Multiple site support.
* Tight integration with the security provided by Windows NT.
* Performance monitoring.
* Auditing of client requests and web server actions.
* Scripted pages.
* Tight integration with Transaction Server and COM.
* Fast deployment.
* Simple change implementation.
* Scalable delivery: add more web servers to add more capacity.

Active Server Pages and the scripting environment act as the glue between the web server and its content delivery, the business components, and even, in some cases, the data layer (in some circumstances the data and business layers are implemented transparently close together). By allowing scripts to be embedded in standard HTML files, many possibilities are raised, some of which are covered in other sections of this tutorial. With scripting and ASP it is possible to call business layer objects, call data sources, and manage workflow processes, while being able to respond to the client with any information necessary.

9. BUSINESS LAYER TECHNOLOGY


The Business layer (or middleware layer) incorporates the logic that encapsulates the business process. Within this layer, from the Microsoft perspective, COM and MTS (Microsoft Transaction Server) interact with other distributed coordination services to provide transactions for business "state" information. For example, while IIS and ASP deliver content, COM objects, running under MTS or other middleware technologies, provide the logic for managing the business process.

Microsoft Transaction Server provides support for distributed transactions and the coordination of those transactions. It also supports the notion of packaging components and web applications together within the same solution, to provide faster access and just-in-time activation for their component parts. MTS also provides a roles management system to enable a more workflow-style approach to running the business components.

Microsoft also includes a message queue service. MSMQ provides a queue and state management solution via a guaranteed delivery mechanism, the purpose of which is to enable designers to implement state machines and queues without repeated development. This frees developers from having to re-implement messaging technology for each solution.
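The following C# sketch suggests the shape of such a transactional business component, written against COM+ (the .NET successor to MTS, via System.EnterpriseServices) with a queued hand-off through MSMQ (System.Messaging). The class, method, and queue path are hypothetical, and a real component would also need COM+ registration and transactional queue settings.

    using System.EnterpriseServices;
    using System.Messaging;

    // Require a distributed transaction around every call.
    [Transaction(TransactionOption.Required)]
    public class OrderProcessor : ServicedComponent
    {
        public void PlaceOrder(string customerId, string item)
        {
            try
            {
                // ... update one or more data sources here ...

                // Hand the order to a queue for asynchronous processing,
                // freeing the caller from waiting on the slow work.
                using (MessageQueue queue =
                           new MessageQueue(@".\private$\orders"))
                {
                    queue.Send(customerId + ":" + item);
                }

                ContextUtil.SetComplete(); // vote to commit
            }
            catch
            {
                ContextUtil.SetAbort();    // vote to roll everything back
                throw;
            }
        }
    }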

10. DATA ACCESS WITH ADO AND OLEDB


The data access layer can only be useful if the tools used to access data repositories are consistent and simple. To address this issue, Microsoft created several levels of technology. In its simplest form, DNA needed to provide Business layer components with different levels of access to data: from simple abstract methods for reading a data source, right through to technologies for implementing drivers for custom data sources.

Primary data source access is provided via ActiveX Data Objects (ADO), an agreed variant of a COM object, one that supports particular implementation details. ADO abstracts away the details of the data access implementation and presents a consistent data model. Using COM as the delivery mechanism for drivers and consumers means that an agreed supporting infrastructure can be implemented. For example, because ADO abstracts data source details away from the consuming developer, it is possible to write data access routines that work against several differing data systems without change, and those data systems can be very different.

ADO uses the facilities of OLE DB to provide this unified data access. In fact, ADO is the higher-level data access technology in the Universal Data Access (UDA) technology group. UDA includes a number of data access technologies, but ADO and OLE DB are the primary focus in DNA applications. OLE DB, the underlying technology that ADO and other data source technologies use, is a specification and implementation for building drivers for both sides of the data access problem: providers implement a driver that enables consumers to access the data. OLE DB gives providers a code base and conceptual framework that enforces conformance to a data object model all consumers can access. When all data source providers implement their access in the same manner, through similar standardized interfaces, consumers can use a single "code template" to access any provider.
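The .NET descendant of this consumer pattern can be seen in System.Data.OleDb, where only the provider portion of the connection string changes between data systems. A minimal C# sketch (both connection strings are hypothetical):

    using System.Data.OleDb;

    class UdaSketch
    {
        static void Main()
        {
            // Swapping the Provider keyword retargets the same code at a
            // different data system: SQL Server here, or, for example,
            // "Provider=Microsoft.Jet.OLEDB.4.0;..." for an Access file.
            string connStr = "Provider=SQLOLEDB;Data Source=myServer;" +
                             "Initial Catalog=Shop;Integrated Security=SSPI";

            using (OleDbConnection conn = new OleDbConnection(connStr))
            {
                conn.Open();
                OleDbCommand cmd = new OleDbCommand(
                    "SELECT COUNT(*) FROM Customers", conn);
                object count = cmd.ExecuteScalar();
                System.Console.WriteLine("Customers: " + count);
            }
        }
    }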

11. LATEST RELEASE


WINDOWS 2000
On Sept. 13, 1999, laying the foundation for a new era of Web development, Microsoft Corp. announced Windows Distributed interNet Architecture (Windows DNA) 2000, a comprehensive, integrated platform for building and operating state-of-the-art distributed Web applications as well as the next wave of Internet-based Web services. This new breed of Web services represents an evolution from today's Web sites that simply deliver pages to a browser. Richer, more personalized, and more proactive, these sophisticated Web services can directly link applications, services, and devices with one another over the Internet. Uniquely, Microsoft is creating tools and infrastructure to make Web services into reusable, universally programmable building blocks that can be easily created, combined, and augmented by the millions of developers around the world. Once programmable, Web services become another piece in the assembly of solutions that can span multiple software components, business processes, and applications anywhere on the Internet.

With the Microsoft Windows 2000 operating system as its cornerstone, Windows DNA 2000 advances the core Windows DNA platform with new products, tools, and technologies. As the next generation of Windows DNA, the Windows DNA 2000 family of solutions preserves and amplifies both existing customer investments and the core values that have made Windows DNA popular. It provides a comprehensive and integrated set of application services that work well with one another. The platform is flexible and allows rapid development and adaptation of sophisticated applications, and high-productivity tools support multiple programming languages and skill sets to tap the deepest developer talent pool in the industry. Further, it extends the platform to support the development of programmable Web services that stitch together multiple applications, services, and devices anywhere on the Internet, forming a new breed of "megaservice" that works behind the scenes to perform actions on behalf of other Web sites and services.

While companies are eager to interconnect services and integrate applications with business partners over the Internet, the available solutions are difficult to implement and suffer from a lack of tools and common conventions spanning the heterogeneous makeup of the Internet. The new industry-standard eXtensible Markup Language (XML) offers a lingua franca for integration across the Internet that is not constrained by the Internet's diversity of underlying operating systems, object models, or programming languages. Windows
DNA 2000 builds upon XML as its fundamental foundation to put the resources of the entire Internet within reach of developers. With Windows DNA 2000, Microsoft is focused on providing developers with a consistent programming model to harness the smallest devices, personal computer software, and Internet-based megaservices to work together in a single solution. The technical approach leverages existing hardware and software investments in a way that preserves their strengths and enables developers with the broadest variety of skills to take advantage of the platform. The programming model embraces the key philosophical tenets of the Internet, such as simplicity, decentralization, message-based communications, and universal reach through protocols and formats. The result is an environment that is easier to program, easily accessible to the development community, builds on the intrinsic scalability and reliability of the Internet, and relies on open Internet standards for interconnection with resources anywhere on the Internet.

"Just as 'browsing the Web' provides access to an almost infinite array of content, 'programming for the Web' will give developers an almost infinite array of building blocks to enhance or combine in any fashion to create exciting new solutions," said Steve Ballmer, president of Microsoft. "Making developers successful is central to Microsoft, and with Windows DNA 2000 we're focused on providing the very best tools and infrastructure to enable an explosion of services similar to the explosion of applications brought about by the PC and the explosion of content brought about by HTML."

The Microsoft Windows DNA platform, including technologies provided by the Windows NT Server network operating system, SNA Server, Site Server Commerce Edition, Microsoft SQL Server, and the Visual Studio development system, is one of the most widely used and fastest-growing Internet platforms. Of the top 500 Web sites according to Media Metrix (July 1999), 26 percent of home pages are built on Windows NT, the most popular platform of any vendor, and this number has increased 25 percent in the last six months. Among sites offering more sophisticated services such as e-commerce, adoption of Windows DNA is even higher: Netcraft statistics show that 43.5 percent of secure Web sites on the Internet as a whole are built on Windows NT. Further, of the top 50 "shopping sites" reported by Media Metrix, over half run on Windows NT, including e-commerce heavyweights such as Buy.com, Dell Computer Corp., Drugstore.com, and Ticketmaster.

"The Microsoft platform was a clear choice for us," said Tom Page, MIS manager for Nordstrom.com. "It had all the key elements we were looking for in one integrated platform: the ease of implementation, the extensibility and development tools, the speed, the scalability, and the ability to customize site features to fit what we need today and next year. And we have the confidence that Microsoft is addressing the needs of the Internet commerce marketplace."

Windows DNA 2000 extends the platform in several new directions. There is an across-the-board investment in XML for integration and interoperability, as well as transparent integration with a wider variety of legacy systems. BizTalk Server brings business process integration capabilities to the platform, and the new AppCenter Server simplifies deployment and operation of Windows DNA-based applications across high-availability server "farms."

12. APPLICATIONS
The Windows DNA 2000 family of solutions includes the following:

Microsoft Windows 2000: The core Windows DNA services, including the COM+ component model and services, the high-performance Internet Information Services Web server, Active Server Pages, transactions, messaging, data access, clustering, and IP load balancing services, are now integrated into the operating system for greater consistency, easier management, and faster performance.

Microsoft Commerce Server 4.0: The next generation of the industry's leading packaged business-to-consumer commerce software provides deeper personalization, expanded site analysis, and new product catalog features.

Microsoft BizTalk Server: A business process integration solution that supports the BizTalk Framework; BizTalk Server integrates applications within the enterprise and between businesses across the Internet through the exchange of XML-formatted business documents.

Microsoft "Babylon" Integration Server: Provides bidirectional network, data, and application integration with a variety of legacy hosts.

Microsoft AppCenter: A new product that makes deployment and management of Windows DNA-based applications across high-availability server "farms" as easy as managing a single server; AppCenter makes it easy to configure and manage arrays of servers.

Microsoft SQL Server "Shiloh": The next generation of the popular SQL Server 7.0 database adds native XML support and integrated data-mining capabilities, and takes full advantage of Windows 2000 for even greater scalability and availability.

Microsoft Visual Studio: The world's most popular set of development tools, spanning multiple languages, provides a common development environment for Windows DNA. It now includes the Windows 2000 Developer's Readiness Kit so developers can take full advantage of Windows 2000.

13. FUTURE PROSPECTS

13.1. W-DNA (yet to be released)

13.1.1. Introduction
W-DNA is a Workflow Repository Server for Windows Workflow Foundation. The objective of this project is to create a single interface, a service, and a server for hosting workflows, connecting the workflows with the external world, and creating an execution plan for workflows. The server will also serve as a workflow catalog, where you can publish, group, and describe your workflows in a descriptive way. W-DNA, or Workflow-DNA, defines the functions across the enterprise: once you develop the enterprise workflows, you can put these workflows together in a repository and coordinate the workflow activities. The first mindset for this project is creating a toolset for:
- Full workflow catalog organization and grouping
- Workflow upload
- Workflow database storage
- Replaceable workflow services
- Workflow hooks, delegates, and events
- Workflow orchestration
- Workflow common services
- Workflow hosting
- Workflow statistics
- Workflow execution plans
- Security

So, why W-DNA? The systematic functions across the enterprise represent the core business activities. Once you use workflows to describe all of your business processes, your workflows become the corporate, enterprise-level DNA of activities and functions, or simply Workflow DNA.
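Since W-DNA targets Windows Workflow Foundation, its hosting service would presumably sit on the standard WF runtime. A minimal C# hosting sketch under that assumption (the OrderWorkflow type is hypothetical):

    using System;
    using System.Threading;
    using System.Workflow.Activities;
    using System.Workflow.Runtime;

    // A trivial, hypothetical workflow definition.
    public class OrderWorkflow : SequentialWorkflowActivity { }

    class WorkflowHost
    {
        static void Main()
        {
            using (WorkflowRuntime runtime = new WorkflowRuntime())
            {
                AutoResetEvent done = new AutoResetEvent(false);
                runtime.WorkflowCompleted += delegate { done.Set(); };

                // Create and start an instance of the hosted workflow.
                WorkflowInstance instance =
                    runtime.CreateWorkflow(typeof(OrderWorkflow));
                instance.Start();

                done.WaitOne(); // wait for the workflow to finish
            }
        }
    }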

13.1.2. Requirements
Build: To build DNA Workstation correctly you need:
* Visual Studio 2008
* Nasm*
* BFI*
(* the executable directory must be in your PATH variable)

Run: In a virtual machine, you can use Bochs, VMWare, Qemu, Virtual PC, or VirtualBox. Create a new virtual machine and set Floppy.img as the floppy image.

13.2. ExcelDna
13.2.1. Introduction
ExcelDna is an open-source project to integrate .NET into Excel. The primary target is the Excel user who currently writes VBA code for functions and macros and would like to start using .NET. The project is also of interest to C/C++-based .xll add-in developers who want to use the .NET Framework to develop their add-ins. ExcelDna is free for all use, and distributed under a permissive open-source license that also allows commercial use. ExcelDna is developed using .NET 2.0, and users have to install the freely available .NET Framework 2.0 runtime. The integration is done by an Excel add-in (.xll) that exposes .NET code to Excel. The user code can be in text-based (.dna) script files (C#, Visual Basic, or F#) or in compiled .NET libraries (.dll).

13.2.2. Releases
ExcelDna Version 0.20, released Sep 20, 2009. Release notes: a minor bugfix update to ExcelDna. The following changes were made:
* Fixed a COM reference leak in ExcelDnaUtil.Application (thanks to Suraj Gupta for reporting).
* Updated function and command constants from the Excel 2007 xlcall.h file.
* Made minor changes to ease VS 2005 conversion.
* Increased the maximum number of exported functions to 1000.
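As a flavor of the approach, the C# placed in a .dna script file (inside ExcelDna's XML wrapper) can be as simple as the following hypothetical worksheet function. ExcelDna exposes public static methods to Excel, so this becomes callable from a cell as =AddTwo(1, 2).

    // Hypothetical user-defined function exported to Excel by ExcelDna.
    public class MyFunctions
    {
        public static double AddTwo(double a, double b)
        {
            return a + b;
        }
    }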

14. CONCLUSION
Architecture matters. Choosing the right structure for an application, especially one distributed across several systems, is critically important. Bad architectural choices usually cannot be fixed during implementation, no matter how good the developers are; making the wrong decisions leads to lower performance, less security, and fewer options when an application needs to be updated.

Windows DNA offers a solid foundation for n-tier applications, and Windows developers can build on what they know from the DNA world, applying much of it to the new .NET environment. Being aware of the changes discussed here will help you create faster, more secure, and more functional applications. For both n-tier applications and applications that exploit the new technology of Web services, .NET has a great deal to offer.

In this document, we have shown how the Microsoft Windows platform and other Microsoft technologies can be used effectively to build a scalable, available, secure, and manageable site infrastructure. We have stressed keeping the operations and application design of a large site simple and flexible, and have emphasized how a dot-com can successfully deploy and operate a site based on the four goals of the architecture:
* Linear scalability: continuous growth to meet user demand and business complexity.
* Continuous service availability: using redundancy and functional specialization to contain faults.
* Security of data and infrastructure: protecting data and infrastructure from malicious attacks or theft.
* Ease and completeness of management: ensuring that operations can match growth.

We anticipate that this will be the first of many documents covering the breadth and depth of designing, developing, deploying, and operating great business Web sites that use Microsoft's products and technologies.

